
Type: Bug

Status: Resolved

Priority: Critical

Resolution: Fixed

Affects Version/s: None

Fix Version/s: 7.0.0

Component/s: None

Labels:

Environment: The NeuralNetworks.ecl module from https://github.com/hpccsystems/eclml/

Pull Request URL:

Target version:
The XOR problem is a simple but very good test of an implementation of a multilayered feed-forward neural network that learns by backpropagation, for two reasons: 1) it is a very simple problem, and 2) it cannot be learned unless the network has at least one hidden layer. NeuralNetworks.ecl COULD NOT LEARN IT, which means there is something wrong with the implementation.
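For reference, the behavior we should see from a correct implementation can be sketched outside of ECL. The following is a minimal Python/NumPy sketch (not the NeuralNetworks.ecl code, and the layer sizes and learning rate are my own choices) of a one-hidden-layer sigmoid network trained by plain backpropagation; with the hidden layer present, it learns XOR:

```python
import numpy as np

# Minimal 2-4-1 sigmoid network trained with plain batch backpropagation.
# Python reference sketch (assumed parameters, not taken from
# NeuralNetworks.ecl): with at least one hidden layer, XOR is learnable.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(alpha=1.0, max_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])       # 1-output version of the test
    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
    for _ in range(max_iter):
        h = sigmoid(X @ W1 + b1)                 # hidden layer activations
        out = sigmoid(h @ W2 + b2)               # output layer activations
        d_out = (out - y) * out * (1.0 - out)    # output delta (squared error)
        d_h = (d_out @ W2.T) * h * (1.0 - h)     # delta backpropagated to hidden
        W2 -= alpha * (h.T @ d_out); b2 -= alpha * d_out.sum(axis=0)
        W1 -= alpha * (X.T @ d_h);  b1 -= alpha * d_h.sum(axis=0)
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)

preds = train_xor()
print(preds.ravel())  # typically converges toward values near 0, 1, 1, 0
```

A correct NeuralNetworks.ecl should show the same qualitative result on the attached tests: predictions driven close to the XOR targets. A single-layer network (no hidden layer) cannot do this, since XOR is not linearly separable.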
I have attached 2 test versions of the XOR problem; NeuralNetworks.ecl cannot learn either one. (These tests were created by modifying NeuralNetworks_test.ecl from the eclml GitHub repository.) For the version with 1 dependent variable (output), we want the output to be 1.0 ONLY IF the independent variables (inputs) are 0.0, 1.0 OR 1.0, 0.0.
For the version with 2 outputs, we want the output to be 0.0, 1.0 IF the inputs are 0.0, 1.0 OR 1.0, 0.0, and 1.0, 0.0 IF the inputs are 0.0, 0.0 OR 1.0, 1.1.
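Written out explicitly (as Python arrays, not the attached ECL datasets), the two target encodings above are:

```python
import numpy as np

# The four XOR input rows shared by both attached test versions.
inputs = np.array([[0.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 0.0],
                   [1.0, 1.0]])

# Version 1: one dependent variable, 1.0 only for inputs 0,1 and 1,0.
targets_1 = np.array([[0.0], [1.0], [1.0], [0.0]])

# Version 2: two dependent variables, one-hot: 0,1 for the XOR-true
# rows and 1,0 for the XOR-false rows.
targets_2 = np.array([[1.0, 0.0],
                      [0.0, 1.0],
                      [0.0, 1.0],
                      [1.0, 0.0]])

# The two encodings agree: the second column of targets_2 is targets_1.
assert np.array_equal(targets_2[:, 1:2], targets_1)
```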
For both versions I tried different values of ALPHA, LAMBDA, and MaxIter. None of the combinations I tried allowed the network to learn the problem.