I implemented the following neural network in Python to solve the XOR problem. It consists of an input layer with 2 neurons, 1 hidden layer with 2 neurons, and an output layer with 1 neuron. I use the sigmoid function as the activation function for both the hidden layer and the output layer. Can someone explain what I am doing wrong?
import numpy
import scipy.special

class NeuralNetwork:
    def __init__(self, inputNodes, hiddenNodes, outputNodes, learningRate):
        self.iNodes = inputNodes
        self.hNodes = hiddenNodes
        self.oNodes = outputNodes
        self.wIH = numpy.random.normal(0.0, pow(self.iNodes, -0.5), (self.hNodes, self.iNodes))
        self.wOH = numpy.random.normal(0.0, pow(self.hNodes, -0.5), (self.oNodes, self.hNodes))
        self.lr = learningRate
        self.activationFunction = lambda x: scipy.special.expit(x)

    def train(self, inputList, targetList):
        inputs = numpy.array(inputList, ndmin=2).T
        targets = numpy.array(targetList, ndmin=2).T
        #print(inputs, targets)
        hiddenInputs = numpy.dot(self.wIH, inputs)
        hiddenOutputs = self.activationFunction(hiddenInputs)
        finalInputs = numpy.dot(self.wOH, hiddenOutputs)
        finalOutputs = self.activationFunction(finalInputs)
        outputErrors = targets - finalOutputs
        hiddenErrors = numpy.dot(self.wOH.T, outputErrors)
        self.wOH += self.lr * numpy.dot((outputErrors * finalOutputs * (1.0 - finalOutputs)), numpy.transpose(hiddenOutputs))
        self.wIH += self.lr * numpy.dot((hiddenErrors * hiddenOutputs * (1.0 - hiddenOutputs)), numpy.transpose(inputs))

    def query(self, inputList):
        inputs = numpy.array(inputList, ndmin=2).T
        hiddenInputs = numpy.dot(self.wIH, inputs)
        hiddenOutputs = self.activationFunction(hiddenInputs)
        finalInputs = numpy.dot(self.wOH, hiddenOutputs)
        finalOutputs = self.activationFunction(finalInputs)
        return finalOutputs

nn = NeuralNetwork(2, 2, 1, 0.01)
data = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]
epochs = 10
for e in range(epochs):
    for record in data:
        inputs = numpy.asfarray(record[1:])
        targets = record[0]
        #print(targets)
        #print(inputs, targets)
        nn.train(inputs, targets)

print(nn.query([0, 0]))
print(nn.query([1, 0]))
print(nn.query([0, 1]))
print(nn.query([1, 1]))
A couple of reasons.
I don't think you should be applying the activation function to everything, especially in your query function. I think you have confused the neuron-to-neuron weightings (wIH and wOH) with the activation values of the neurons.
Because of that confusion, you have missed the idea of re-using your query function inside training. You should think of it as feeding the activation levels forward to the output, comparing the result with the target output to get an error array, then feeding that error backwards, using the derivative of the sigmoid function, to adjust the weightings.
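As a minimal sketch of that structure (feedForward is a hypothetical helper name of my own, not part of the original class; this assumes it lives inside the NeuralNetwork class above):

    # Sketch only: one shared forward pass that both query and train can call.
    # feedForward is a hypothetical helper, not in the asker's original code.
    def feedForward(self, inputs):
        hiddenOutputs = self.activationFunction(numpy.dot(self.wIH, inputs))
        finalOutputs = self.activationFunction(numpy.dot(self.wOH, hiddenOutputs))
        return hiddenOutputs, finalOutputs

    def train(self, inputList, targetList):
        inputs = numpy.array(inputList, ndmin=2).T
        targets = numpy.array(targetList, ndmin=2).T
        hiddenOutputs, finalOutputs = self.feedForward(inputs)  # feed forward
        outputErrors = targets - finalOutputs                   # compare with target
        hiddenErrors = numpy.dot(self.wOH.T, outputErrors)      # feed the error back
        # adjust weights using the sigmoid derivative y * (1 - y)
        self.wOH += self.lr * numpy.dot(outputErrors * finalOutputs * (1.0 - finalOutputs), hiddenOutputs.T)
        self.wIH += self.lr * numpy.dot(hiddenErrors * hiddenOutputs * (1.0 - hiddenOutputs), inputs.T)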
I would define the functions and their derivatives inline rather than importing them from scipy, since they are so simple. Also, it is "recommended" to use tanh and d/dx tanh as the hidden-layer functions (I can't remember why; it's probably not needed for a net this simple):
import numpy as np

# transfer functions
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# derivative of sigmoid (takes the sigmoid output y, not x)
def dsigmoid(y):
    return y * (1.0 - y)

# using tanh rather than the logistic sigmoid for the hidden layer is recommended
def tanh(x):
    return np.tanh(x)

# derivative of tanh (takes the tanh output y, not x)
def dtanh(y):
    return 1 - y * y
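As a rough sketch of how these would slot into your class (my adaptation, not a tested drop-in fix): the hidden layer uses tanh, the output layer keeps the logistic sigmoid, and the backward pass must then use the matching derivative for each layer:

    # Sketch only: mixing tanh (hidden) with sigmoid (output) inside train
    hiddenOutputs = tanh(np.dot(self.wIH, inputs))            # tanh hidden layer
    finalOutputs = sigmoid(np.dot(self.wOH, hiddenOutputs))   # sigmoid output layer
    outputErrors = targets - finalOutputs
    hiddenErrors = np.dot(self.wOH.T, outputErrors)
    # each layer's update uses the derivative of its own activation
    self.wOH += self.lr * np.dot(outputErrors * dsigmoid(finalOutputs), hiddenOutputs.T)
    self.wIH += self.lr * np.dot(hiddenErrors * dtanh(hiddenOutputs), inputs.T)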
Finally, you might be able to work out what I was doing with neural nets, using only numpy, here: https://github.com/paddywwoof/Machine-Learning/blob/master/perceptron.py