FP16, FP32 - what is it all about? Or is it just the bit size of float values? (Python)
What are FP16 and FP32 in Python? My potential business partner and I are building a deep learning setup for working with time series. He brought up "FP16 and FP32" while choosing a GPU. It looks like he's talking about floating-point values in 16 vs. 32 bits. (Our data points look like this: "5989.12345", so I'm pretty sure 16 bits isn't enough precision.)
Is FP16 a special technique GPUs use to improve performance, or is it just a fancy term for using 16-bit float values instead of the standard 32-bit floats?
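For context, here's a quick check I ran with NumPy (assuming its `float16`/`float32` dtypes correspond to what the GPU specs call FP16/FP32) showing why I doubt 16 bits is enough for values like ours:

```python
import numpy as np

x = 5989.12345

# float16 has an 11-bit significand: near 5989 the spacing between
# representable values is 4.0, so the fractional part is lost entirely.
print(np.float16(x))  # -> 5988.0

# float32 has a 24-bit significand (~7 significant decimal digits),
# which keeps roughly 4 fractional digits at this magnitude.
print(np.float32(x))  # -> 5989.1235
```

So FP16 stores our values only to the nearest multiple of 4, while FP32 keeps them to about a ten-thousandth.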
Topic gpu finite-precision performance
Category Data Science