What is all this about FP16 and FP32 in Python? A potential business partner and I are building a deep learning setup for working with time series. He brought up "FP16 and FP32" while picking out a GPU. It looks like he's talking about floating-point values in 16 vs. 32 bits. (Our data points look like this: "5989.12345", so I'm pretty sure 16 bit isn't enough.) Is FP16 a special technique GPUs use to improve performance, or is it just a fancy …
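As a quick sanity check on that worry, here is a minimal sketch (assuming NumPy, whose float16/float32 are IEEE half and single precision) showing what happens to a value like 5989.12345 at each width:

    import numpy as np

    x = 5989.12345
    # float16 has an 11-bit significand: near 6000 the representable
    # values are 4.0 apart, so the fractional part is lost entirely.
    print(np.float16(x))   # 5988.0
    # float32 has a 24-bit significand: spacing near 6000 is ~0.0005,
    # so the value survives to roughly 4 decimal places.
    print(np.float32(x))   # 5989.1235
    print(np.finfo(np.float16).resolution)   # 0.001
    print(np.finfo(np.float32).resolution)   # 1e-06

So the instinct in the question is right: FP16 cannot store those data points as-is, which is why mixed-precision setups typically keep a master copy of values in FP32.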
I am trying to build an LSTM model in Keras where I have one question with 10 answers, but only ONE of them is correct. So basically I'm trying to build a 10-class classification problem. As most of the research papers use Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR) to evaluate this type of problem, I want to use them as custom metrics. So my first question: is the code below correct for calculating mean average precision? def …
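Since exactly one of the 10 answers is correct per question, average precision for a single question collapses to the reciprocal rank of the true answer, so one metric effectively covers both MAP and MRR here. A minimal sketch of an MRR custom metric (assuming one-hot y_true and per-answer scores in y_pred; this is not the asker's own def):

    import tensorflow as tf

    def mean_reciprocal_rank(y_true, y_pred):
        y_true = tf.cast(y_true, y_pred.dtype)
        # Score assigned to the true answer, shape (batch, 1).
        true_score = tf.reduce_sum(y_true * y_pred, axis=-1, keepdims=True)
        # Rank of the true answer = 1 + number of answers scored strictly higher.
        rank = 1.0 + tf.reduce_sum(
            tf.cast(y_pred > true_score, y_pred.dtype), axis=-1)
        return tf.reduce_mean(1.0 / rank)

    # model.compile(optimizer='adam', loss='categorical_crossentropy',
    #               metrics=[mean_reciprocal_rank])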
I deploy machine learning models (typically on GPU) to a variety of environments. I work sort of at the edge of ML R&D and devops, so I am really big on reproducibility, and one thing that drives me nuts is when models output similar but not byte-for-byte identical values, frustrating any hash-based automated testing. For example, here is a score from the same sample, inference model, code, container image, etc., but one run is on a Titan and the other on an RTX …
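Cross-GPU bit-identity is generally not promised (different kernels may reorder floating-point reductions), so the usual workaround is tolerance-based comparison rather than hashing. A sketch, assuming NumPy output arrays; the file names and tolerances are illustrative, not from the question:

    import numpy as np

    def outputs_match(reference, candidate, rtol=1e-5, atol=1e-7):
        # Compare scores within a float tolerance instead of hashing bytes;
        # these tolerances are placeholders, tune them per model.
        return np.allclose(candidate, reference, rtol=rtol, atol=atol)

    reference = np.load("titan_scores.npy")   # hypothetical golden run
    candidate = np.load("rtx_scores.npy")     # hypothetical new run
    assert outputs_match(reference, candidate), "outputs drifted beyond tolerance"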
I have a feed-forward neural network with a customized cost function. Since my cost function has an exponential component, I need to handle very large numbers. By default, TensorFlow uses the float32 datatype, which is not sufficient for my work. How can I build a TensorFlow model with the float64 datatype? I have tried defining all the tensors as float64, hoping casting would take care of the rest, but it does not work. Also, I have enabled eager execution …
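One switch worth knowing about here: Keras keeps a global default float type, so flipping it once propagates to every layer's weights instead of casting tensor by tensor. A minimal sketch (the layer sizes are placeholders, not the asker's network):

    import numpy as np
    import tensorflow as tf

    # Make float64 the default dtype for all Keras weights and activations.
    tf.keras.backend.set_floatx('float64')

    inputs = tf.keras.Input(shape=(16,))            # dtype follows floatx()
    hidden = tf.keras.layers.Dense(64, activation='relu')(inputs)
    model = tf.keras.Model(inputs, tf.keras.layers.Dense(1)(hidden))

    print(model.weights[0].dtype)   # <dtype: 'float64'>
    x = np.random.rand(8, 16)       # NumPy arrays are float64 by default
    print(model(x).dtype)           # float64

Feeding float32 inputs into such a model would still trigger silent casts, so the inputs need to stay float64 as well.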