No, adding noise will not help to regularise the model.
This won't help your model generalise better to unseen inputs; it will probably just make your model perform worse overall. Any modifications you make to the training instances should be learnable.
Take an image classification problem as an example (images of cat vs. not cat): randomly converting pixels to white in every image will not help your model learn generalisable features of cats, because random white noise is not a 'learnable' feature. However, rotating, flipping, cropping, or adjusting the contrast of the images, and adding these variants as new training instances, forces the model to tolerate the differences between individual images of cats and therefore generalise better (see the sketch below).
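As an illustration, here is a minimal sketch of such an augmentation pipeline using Keras preprocessing layers; the specific layer choices, factors, and input shape are placeholder assumptions, not a prescription:

```python
import tensorflow as tf

# Each transformation yields a plausible variant of a cat image,
# so the augmented features remain learnable (unlike random white noise).
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),   # rotate by up to ±10% of a full turn
    tf.keras.layers.RandomZoom(0.1),       # zoom in/out by up to 10% (crop-like)
    tf.keras.layers.RandomContrast(0.2),   # shift contrast by up to 20%
])

# Applied on the fly during training, e.g. as the first block of a model:
model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),
    augment,
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # cat vs. not cat
])
```

These preprocessing layers are only active in training mode; at inference time they pass images through unchanged.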
There are many methods for regularising models, such as L1 or L2 regularisation in linear regression or neural nets, plus plenty of methods specific to other model types (a sketch follows below).
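For example, here is a minimal sketch of L1/L2 regularisation in both settings; the penalty strengths are placeholder values you would tune in practice:

```python
import tensorflow as tf
from sklearn.linear_model import Lasso, Ridge

# Linear regression with a weight penalty (scikit-learn):
lasso = Lasso(alpha=0.1)  # L1 penalty: drives some coefficients to exactly zero
ridge = Ridge(alpha=1.0)  # L2 penalty: shrinks all coefficients towards zero

# The same idea in a neural net, attached per layer in Keras:
layer = tf.keras.layers.Dense(
    64,
    activation="relu",
    kernel_regularizer=tf.keras.regularizers.l2(1e-4),  # adds 1e-4 * sum(w^2) to the loss
)
```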
Making sure your training data is as varied as possible, and using model-specific regularisation techniques, is probably closer to 'best practice' (though I don't really like the term, since 'best practices' change as new regularisation techniques, optimisations, etc. appear).
Update:
I've since experimented with adding Gaussian noise to input features, and especially for overfit neural nets this can help the model generalise. It's also possible to add GaussianNoise layers directly in a neural net for this purpose (see the tf.keras.layers.GaussianNoise documentation); a minimal sketch is below.
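This sketch shows the layer-based approach; the noise level, input size, and architecture are placeholder choices:

```python
import tensorflow as tf

# GaussianNoise perturbs activations only during training;
# at inference it is a pass-through, so predictions stay deterministic.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),                 # e.g. 20 input features
    tf.keras.layers.GaussianNoise(stddev=0.1),   # zero-mean noise added to the inputs
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
```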