TensorFlow model permanently becomes corrupt when input embeddings exceed max_position_embeddings
I am using the TensorFlow C++ API. I have a TensorFlow model to which I feed inputs for inference.
The model has a parameter called max_position_embeddings, which determines the maximum acceptable input sequence length. When I give it a very long input for inference, I get the exception:
{{function_node __inference__inference_10663}} {{function_node __inference__inference_10663}} indices[0,2048] = 2049 is not in [0, 2049)
[[{{node decoder/position_embeddings/Gather}}]] [[StatefulPartitionedCall/StatefulPartitionedCall]]
So far, everything is normal. However, after catching this exception, when I attempt a second inference with a small input, I get the very same error again:
{{function_node __inference__inference_10663}} {{function_node __inference__inference_10663}} indices[0,2048] = 2049 is not in [0, 2049)
I have debugged the code. In the second inference attempt with valid input, the exception is thrown while initializing the tensors to feed to the model, before inference even runs:
Model Mdl = ... // this is the model that got corrupted in the previous inference call
// Define the tensors
Tensor input_ids{ Mdl, serving_default_input_ids }; // this is where I get the exception
What I wonder is why TensorFlow graphs and models are stateful in this way: once I get an exception for exceeding max_position_embeddings, I can no longer run inference with this model at all.
Is there a solution to fix this issue?
Thanks.
Topic: embeddings, inference, tensorflow, graphs
Category: Data Science