Neural networks are used in a wide variety of applications, and binarised networks, which represent both weights and activations as single-bit binary values, offer a computationally attractive realisation. A lightweight binarised neural network can be built from logic gates and counters together with a two-valued activation unit. However, because binarised networks encode each weight and neuron output in a single bit, they are sensitive to bit-flipping errors. To cope with this error sensitivity, the binarised weights and neuron values are processed as bitstreams in the manner of stochastic computing. Stochastic computing is shown to provide robustness against bit errors in the data while retaining a simple hardware structure, whose implementation is further simplified by a novel subtraction-free implementation of the neuron activation.
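To make the mechanism concrete, the following Python sketch illustrates one plausible reading of the scheme, not the paper's exact architecture: each binarised value is spread over a redundant bitstream (so isolated bit flips are outvoted at decode time), products are formed by XNOR, a counter accumulates agreements, and the activation is realised subtraction-free by comparing the count against a threshold instead of computing sign(2p - n). The names `STREAM_LEN`, `inject_flips`, and `xnor_popcount_neuron`, and the constant-stream encoding, are illustrative assumptions.

```python
import random

STREAM_LEN = 64  # hypothetical bitstream length; the actual value is not given here


def to_stream(bit: int) -> list[int]:
    """Encode one binarised value (0 or 1) as a redundant constant bitstream."""
    return [bit] * STREAM_LEN


def inject_flips(stream: list[int], p: float) -> list[int]:
    """Error model: flip each stream bit independently with probability p."""
    return [b ^ (random.random() < p) for b in stream]


def xnor_popcount_neuron(x_streams: list[list[int]],
                         w_streams: list[list[int]],
                         threshold: int) -> int:
    """Binarised neuron over bitstreams.

    For each input, XNOR the input and weight streams bitwise and decode the
    product by majority vote; a counter then accumulates the decoded products,
    and the activation fires subtraction-free when the count reaches the
    threshold (equivalent to sign(2*count - n) >= 0, but with no subtraction).
    """
    count = 0
    for xs, ws in zip(x_streams, w_streams):
        # XNOR per stream bit: 1 where input and weight bits agree.
        agree = sum(1 - (xb ^ wb) for xb, wb in zip(xs, ws))
        # Majority vote over the stream outvotes isolated flipped bits.
        count += 1 if 2 * agree >= STREAM_LEN else 0
    return 1 if count >= threshold else 0


# Usage: a 4-input neuron with a 5% bit-flip rate on the input streams.
x = [1, 0, 1, 1]
w = [1, 1, 0, 1]
xs = [inject_flips(to_stream(b), 0.05) for b in x]
ws = [to_stream(b) for b in w]
out = xnor_popcount_neuron(xs, ws, threshold=(len(x) + 1) // 2)
print(out)
```

In this sketch the error robustness comes entirely from the redundancy of the bitstream representation: a single flipped bit shifts the agreement count by one rather than inverting the decoded value outright, which is the property the abstract attributes to stochastic computing.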