Running multiple Keras models in parallel - time efficiency

I am trying to run inference on two different Keras models in parallel. I tried combining them with the functional API:

from keras.layers import Input
from keras.models import Model

input1 = Input(inputShapeOfModel1)
input2 = Input(inputShapeOfModel2)

output1 = model1(input1)
output2 = model2(input2)

# Combine both models into a single two-input, two-output model
parallelModel = Model([input1, input2], [output1, output2])

This works, but it does not actually run in parallel: the combined model's inference time is simply the sum of the two models' individual inference times.
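For reference, this is roughly how I time it (a minimal sketch; x1 and x2 are placeholder random inputs):

import time
import numpy as np

# placeholder batches; inputShapeOfModel1/2 are assumed to be shape tuples
x1 = np.random.rand(1, *inputShapeOfModel1).astype("float32")
x2 = np.random.rand(1, *inputShapeOfModel2).astype("float32")

start = time.time()
preds = parallelModel.predict([x1, x2])
print("Combined inference time:", time.time() - start)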

My question is: should this run concurrently? I also tried loading the models from different Python scripts with GPU memory options set, but I still did not get parallelism (each model's inference time was about 1.5x its standalone time).

Is there any way to get the inference time of both models close to a single model's inference time? Is the only solution to add a second GPU?

UPDATE: when run from separate scripts the models do seem to run in parallel, so there must be a way to run them efficiently from a single Python/Keras program as well.


As Erik van de Ven suggested, it sounds like running each model in a different process should provide the parallelism you are after.

I guess you could either run each model (its fit or predict call) in a separate process - see the sketch after the snippet below - or you could place them on different devices:

import tensorflow as tf
from keras.layers import Input
from keras.models import Model

with tf.device('/cpu:0'):
    # build the first model's ops on the CPU
    input1 = Input(inputShapeOfModel1)
    output1 = model1(input1)

with tf.device('/gpu:0'):
    # build the second model's ops on the GPU
    input2 = Input(inputShapeOfModel2)
    output2 = model2(input2)

model = Model([input1, input2], [output1, output2])
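For the process-based option, here is a minimal, untested sketch of the idea: each process loads its own copy of a model and calls predict independently, so the two inferences can overlap. The model paths, input shapes and the run_inference helper are placeholders, not something from your code.

import numpy as np
from multiprocessing import Process

def run_inference(model_path, batch):
    # import inside the worker so each process gets its own TF session
    from keras.models import load_model
    model = load_model(model_path)  # placeholder path
    preds = model.predict(batch)
    print(model_path, preds.shape)

if __name__ == "__main__":
    # placeholder random inputs; replace with your real preprocessed batches
    batch1 = np.random.rand(1, 224, 224, 3).astype("float32")
    batch2 = np.random.rand(1, 224, 224, 3).astype("float32")

    p1 = Process(target=run_inference, args=("model1.h5", batch1))
    p2 = Process(target=run_inference, args=("model2.h5", batch2))
    p1.start(); p2.start()
    p1.join(); p2.join()

Keep in mind that on a single GPU both processes still share the same device (and you would need to limit GPU memory per process), so you may not see a full 2x speedup.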

I haven't tried any of these myself though, so I'm not sure which would give the best result.
