Reconstruction is a term used in the context of Restricted Boltzmann Machines (RBMs); it describes the phase in which the network reconstructs (generates) visible samples from the states of the hidden layer. For more detail, you can refer to:
https://stackoverflow.com/questions/4105538/restricted-boltzmann-machine-reconstruction
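As a rough sketch of the idea (not code from the linked answer; the sizes and random parameters are made up for illustration), a single reconstruction step for a toy binary RBM might look like this in NumPy:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Toy RBM with 6 visible and 3 hidden units; W and the biases are
    # made-up random parameters purely for illustration.
    n_visible, n_hidden = 6, 3
    W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
    b_visible = np.zeros(n_visible)
    b_hidden = np.zeros(n_hidden)

    v = rng.integers(0, 2, size=n_visible).astype(float)  # a binary visible sample

    # Propagate up: probabilities of the hidden units given the visible units.
    h_prob = sigmoid(v @ W + b_hidden)
    h_sample = (rng.random(n_hidden) < h_prob).astype(float)

    # Reconstruction: generate the visible units back from the hidden states.
    v_recon_prob = sigmoid(h_sample @ W.T + b_visible)
    print("original      :", v)
    print("reconstruction:", np.round(v_recon_prob, 2))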
Backpropagation is something different entirely; you can find backpropagation in Deep Neural Nets, Convolutional Nets, RBMs (in some sense), and so on. Assume a Deep Neural Net with N hidden layers. During training, we feed the input forward through the randomly initialized weights up to the last neuron. From the output of the last neuron, we calculate the loss using a cost function that measures the error between the predicted output and the true output. This is forward propagation.
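For illustration, here is a minimal forward-propagation sketch in NumPy; the layer sizes, ReLU activation, and mean-squared-error cost are invented for the example, not tied to any particular framework:

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)

    # Toy network: 4 inputs -> two hidden layers of 5 units -> 1 output.
    # Weights are randomly initialized, as described above.
    sizes = [4, 5, 5, 1]
    weights = [rng.normal(scale=0.5, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros(n) for n in sizes[1:]]

    x = rng.normal(size=4)       # a single input example
    y_true = np.array([1.0])     # its (made-up) target

    # Forward propagation: push the input through every layer.
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)
    y_pred = a @ weights[-1] + biases[-1]   # linear output layer

    # Cost function: here, squared error between prediction and target.
    loss = 0.5 * np.sum((y_pred - y_true) ** 2)
    print("prediction:", y_pred, "loss:", loss)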
Knowing the loss and the loss function, we take derivatives backwards to find the gradients of the weights and biases of each layer using the chain rule, all the way back to the input side. This is called backpropagation. After backpropagation, we update all of the weights and biases of the N layers with the gradients we calculated for each layer. Then we do forward propagation again, backpropagation again, update again, and so on until we achieve our goal (which is usually minimizing the error, hopefully as a convex optimization problem). You can also refer to:
https://www.youtube.com/watch?v=x_Eamf8MHwU for backpropagation
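To make the forward / backward / update cycle concrete, here is a small self-contained NumPy sketch with a made-up one-hidden-layer network and toy targets; the shapes, learning rate, and sigmoid activations are illustrative choices, not prescribed by anything above:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Tiny network trained on made-up targets, just to show the cycle.
    X = rng.normal(size=(32, 3))                  # 32 examples, 3 features
    y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # made-up binary targets

    W1 = rng.normal(scale=0.5, size=(3, 4)); b1 = np.zeros(4)
    W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
    lr = 0.5

    for epoch in range(200):
        # Forward propagation.
        a1 = sigmoid(X @ W1 + b1)
        y_pred = sigmoid(a1 @ W2 + b2)
        loss = np.mean((y_pred - y) ** 2)

        # Backpropagation: chain rule applied layer by layer, output to input.
        d_z2 = (y_pred - y) * y_pred * (1 - y_pred) * (2 / len(X))
        d_W2 = a1.T @ d_z2
        d_b2 = d_z2.sum(axis=0)
        d_a1 = d_z2 @ W2.T
        d_z1 = d_a1 * a1 * (1 - a1)
        d_W1 = X.T @ d_z1
        d_b1 = d_z1.sum(axis=0)

        # Update every layer's weights and biases with its gradients.
        W2 -= lr * d_W2; b2 -= lr * d_b2
        W1 -= lr * d_W1; b1 -= lr * d_b1

    print("final loss:", loss)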
Also a final note: the gradients of an RBM cannot be calculated via classic backpropagation; instead, a method called contrastive divergence is used. You can also have a look at Geoffrey Hinton's RBM guide on this matter and more:
https://www.cs.toronto.edu/~hinton/absps/guideTR.pdf
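As a rough illustration of the idea only (a simplified CD-1 step with made-up sizes and learning rate, not code from Hinton's guide):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # One step of contrastive divergence (CD-1) for a toy binary RBM.
    n_visible, n_hidden, lr = 6, 3, 0.1
    W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)

    v0 = rng.integers(0, 2, size=(10, n_visible)).astype(float)  # a mini-batch of data

    # Positive phase: hidden probabilities driven by the data.
    h0_prob = sigmoid(v0 @ W + b_h)
    h0_sample = (rng.random(h0_prob.shape) < h0_prob).astype(float)

    # Negative phase: reconstruct the visibles, then recompute the hiddens.
    v1_prob = sigmoid(h0_sample @ W.T + b_v)
    h1_prob = sigmoid(v1_prob @ W + b_h)

    # CD-1 gradient estimate: data statistics minus reconstruction statistics.
    grad_W = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(v0)
    grad_bv = (v0 - v1_prob).mean(axis=0)
    grad_bh = (h0_prob - h1_prob).mean(axis=0)

    # Gradient ascent on the approximate log-likelihood gradient.
    W += lr * grad_W
    b_v += lr * grad_bv
    b_h += lr * grad_bh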
Hope I could help, good luck!