After pip install pybrain, the PyBrain quick start essentially goes as follows:
from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure import TanhLayer
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

# Create a neural network with two inputs, three hidden, and one output
net = buildNetwork(2, 3, 1, bias=True, hiddenclass=TanhLayer)

# Create a dataset that matches NN input/output sizes:
xor = SupervisedDataSet(2, 1)

# Add input and target values to dataset
# Values correspond to XOR truth table
xor.addSample((0, 0), (0,))
xor.addSample((0, 1), (1,))
xor.addSample((1, 0), (1,))
xor.addSample((1, 1), (0,))

trainer = BackpropTrainer(net, xor)
# trainer.trainUntilConvergence()
for epoch in range(1000):
    trainer.train()
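Whether training actually succeeded can be checked by activating the trained network on each input pattern and comparing the outputs to the XOR truth table. A minimal sketch, assuming the net object built above:

# Print the network's prediction for each XOR input pattern.
# Network.activate() returns the output for a single input vector.
for inp in ((0, 0), (0, 1), (1, 0), (1, 1)):
    print(inp, net.activate(inp))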
However, it does not work reliably, as can be seen by running the following test:
testdata = xor
trainer.testOnData(testdata, verbose=True)   # Works if you are lucky!
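Rather than inspecting the verbose output by eye, the "lucky or not" outcome can be checked programmatically, since testOnData() also returns the average error over the dataset. A brief sketch, assuming the trainer and testdata from above; the 0.01 threshold is an arbitrary choice for illustration:

# testOnData() returns the average error on the given dataset.
avg_error = trainer.testOnData(testdata)
if avg_error < 0.01:          # illustrative threshold, not from the original
    print("Training succeeded, average error: %f" % avg_error)
else:
    print("Training failed, average error: %f" % avg_error)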
Kristina Striegnitz has written and published an XOR example that works more reliably. The code is effectively reproduced below, in case the original should disappear:
# ... continued from above

# Create a recurrent neural network with four hidden nodes (default is SigmoidLayer)
net = buildNetwork(2, 4, 1, recurrent=True)

# Train the network using arguments for learningrate and momentum
trainer = BackpropTrainer(net, xor, learningrate=0.01, momentum=0.99, verbose=True)
for epoch in range(1000):
    trainer.train()

# This should work every time...
trainer.testOnData(testdata, verbose=True)
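Since backpropagation on XOR can converge after a varying number of epochs, the fixed 1000-epoch loop is somewhat arbitrary. One possible refinement, relying on the fact that train() performs a single epoch and returns that epoch's average error, is to stop early once the error falls below a threshold. The epoch cap and error threshold below are illustrative choices, not part of the original example:

# Early-stopping variant of the training loop (illustrative values).
max_epochs = 5000       # assumed cap, not from the original example
target_error = 0.001    # assumed threshold, not from the original example
for epoch in range(max_epochs):
    error = trainer.train()          # one epoch; returns its average error
    if error < target_error:
        print("Converged after %d epochs (error %f)" % (epoch + 1, error))
        break
else:
    print("Stopped after %d epochs (error %f)" % (max_epochs, error))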