Constant array containing the names of the activation functions, so that the name of an activation function can be retrieved by indexing the array with the activation function value.
Periodic cosine activation function.
Symmetric periodic cosine activation function.
Unable to allocate memory
Unable to open configuration file for reading
Unable to open configuration file for writing
Unable to open train data file for reading
Unable to open train data file for writing
Error reading info from configuration file
Error reading connections from configuration file
Error reading neuron info from configuration file
Error reading training data from file
Unable to train with the selected activation function
Unable to use the selected activation function
Unable to use the selected training algorithm
Index is out of bounds
The number of input neurons in the ANN and in the training data does not match
No error
The number of output neurons in the ANN and in the training data does not match
Scaling parameters not present
Irreconcilable differences between two struct fann_train_data structures
Trying to take subset which is not within the training set
Wrong version of configuration file
Number of connections not equal to the number expected
The parameters for create_standard are wrong: either too few parameters were provided or a negative or very high value was provided
Fast sigmoid-like activation function defined by David Elliott.
Fast symmetric sigmoid-like activation function defined by David Elliott.
Standard linear error function.
Constant array containing the names of the training error functions, so that the name of an error function can be retrieved by indexing the array with the error function value.
Tanh error function, usually better but can require a lower learning rate.
Gaussian activation function.
Symmetric Gaussian activation function.
Linear activation function.
Bounded linear activation function.
Symmetric bounded linear activation function.
Each layer only has connections to the next layer
Each layer has connections to all following layers
Constant array containing the names of the network types, so that the name of a network type can be retrieved by indexing the array with the network type value.
Sigmoid activation function.
Stepwise linear approximation to sigmoid.
Symmetric sigmoid activation function, also known as tanh.
Stepwise linear approximation to symmetric sigmoid.
Periodic sine activation function.
Symmetric periodic sine activation function.
Stop criterion is the number of bits that fail (output values whose error exceeds the bit fail limit).
Stop criterion is the Mean Square Error (MSE) value.
Constant array containing the names of the training stop functions, so that the name of a stop function can be retrieved by indexing the array with the stop function value.
Threshold activation function.
Symmetric threshold activation function.
Standard backpropagation algorithm, where the weights are updated after calculating the mean square error for the whole training set.
Standard backpropagation algorithm, where the weights are updated after each training pattern.
Constant array containing the names of the training algorithms, so that the name of a training algorithm can be retrieved by indexing the array with the training algorithm value.
A more advanced batch training algorithm which achieves good results for many problems.
A more advanced batch training algorithm which achieves good results for many problems.
The SARPROP Algorithm: A Simulated Annealing Enhancement to Resilient Back Propagation. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.47.8197&rep=rep1&type=pdf