Rendered Image Denoising
I am learning about image denoising using autoencoders and want to build and train a model myself. While reading about how NVIDIA generated their training dataset, I came across the following:

"We used about 1000 different scenes and created a series of 16 progressive images for each scene. To train the denoiser, images were rendered from the scene data at 1 sample per pixel, then 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, and 131072 samples per pixel."
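My reading of that quote is that each scene yields 16 renders of increasing quality, and that the low-sample renders would be the noisy inputs while the highest-sample render serves as the (near-)clean target. Here is a rough sketch of how I imagine the training pairs being assembled; the pairing scheme, the dict layout, and the dummy arrays are purely my assumptions, not anything NVIDIA describes:

```python
import numpy as np

# The 16-step samples-per-pixel schedule quoted above.
SPP_SCHEDULE = [1, 8, 16, 32, 64, 128, 256, 512, 1024, 2048,
                4096, 8192, 16384, 32768, 65536, 131072]

def make_training_pairs(renders_by_spp):
    """Pair each lower-spp render (noisy input) with the highest-spp render
    of the same scene (near-converged target). `renders_by_spp` maps
    spp -> image array; this pairing is my guess at the setup."""
    reference_spp = max(renders_by_spp)
    target = renders_by_spp[reference_spp]
    return [(renders_by_spp[spp], target)
            for spp in sorted(renders_by_spp) if spp != reference_spp]

# Dummy demo: random 8x8 arrays standing in for real renders of one scene.
dummy_renders = {spp: np.random.rand(8, 8) for spp in SPP_SCHEDULE}
pairs = make_training_pairs(dummy_renders)
print(len(pairs))  # 15 noisy/target pairs for this scene
```

Whether that pairing is actually how the denoiser was trained is part of what I am unsure about.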
I am trying to understand:
1) What does it mean to render an image at n samples per pixel?
2) How can I do this in Python to generate such a dataset?
I have read some articles about sampling, such as the one below, but could not form a confident picture from a data science perspective.
https://area.autodesk.com/tutorials/what-is-sampling/
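To make my current (possibly wrong) understanding of question 1 concrete: I think "n samples per pixel" means each pixel's colour is the average of n stochastic Monte Carlo estimates of the light arriving at that pixel, so the noise falls off roughly as 1/sqrt(n). The toy NumPy snippet below fakes a renderer with Gaussian noise instead of doing any real path tracing; the render function and its noise model are only my assumptions for illustration:

```python
import numpy as np

def render(clean_image, spp, noise_scale=0.5, rng=None):
    """Toy stand-in for a path tracer: estimate each pixel by averaging
    `spp` noisy samples of the true pixel value."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = clean_image.shape
    # Draw spp independent noisy samples per pixel, then average them.
    samples = clean_image[None, :, :] + rng.normal(0.0, noise_scale, size=(spp, h, w))
    return samples.mean(axis=0)

# A synthetic "scene": a smooth gradient image in [0, 1].
clean = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)

for spp in [1, 8, 64, 512]:
    noisy = render(clean, spp)
    rmse = np.sqrt(np.mean((noisy - clean) ** 2))
    print(f"{spp:4d} spp -> RMSE {rmse:.4f}")  # error shrinks roughly as 1/sqrt(spp)
```

If this intuition is right, the practical step would presumably be to script an actual renderer that exposes a samples-per-pixel setting (Blender's Cycles does, via its Python API) rather than simulating noise like above, but I have not tried that yet.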
Any leads would be much appreciated! Thanks
Tags: image, nvidia, autoencoder, python
Category: Data Science