Can we have a neural network emulate an XOR logic gate with a single neuron in the hidden layer?

I came across the following neural networks emulating a logical XOR gate:

Approach 1: [network diagram not shown]

Approach 2: [network diagram not shown]

But today, I came across the one below:

[network diagram not shown: a single hidden neuron labelled 1.5 and an output neuron labelled 0.9]

I don't get how this behaves as XOR. In particular, what do the numbers 1.5 and 0.9 on the neurons mean?

Assuming those numbers act as scaling factors, I tried to code the behavior in Python:

# Treating 1.5 as a scaling factor on the hidden neuron's input;
# the -2 and the direct x1 + x2 terms come from the diagram's weights.
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    y = -2 * (1.5 * (x1 + x2)) + x1 + x2
    print("(%s,%s:%s)" % (x1, x2, y))

The output was:

(0,0:0.0)
(0,1:-2.0)
(1,0:-2.0)
(1,1:-4.0)

I'm still not able to grasp the logic behind this version of the XOR gate.
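One reading I can think of is that 1.5 and 0.9 are firing thresholds for step activations rather than scaling factors: the hidden neuron fires only when x1 + x2 >= 1.5 (i.e. it acts as an AND gate), and the output neuron fires when x1 + x2 - 2*h >= 0.9. This is only my guess at what the diagram means:

def step(z, threshold):
    # Fire (output 1) when the weighted input reaches the threshold.
    return 1 if z >= threshold else 0

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = step(x1 + x2, 1.5)          # hidden neuron: 1 only for (1,1), i.e. AND
    y = step(x1 + x2 - 2 * h, 0.9)  # output neuron with the -2 weight from hidden
    print("(%s,%s:%s)" % (x1, x2, y))

This prints (0,0:0), (0,1:1), (1,0:1), (1,1:0), which is XOR, but I cannot tell from the diagram whether thresholds are the intended interpretation.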

Topic: perceptron, neural-network

Category: Data Science
