How to run list comprehensions on GPU?
Is there a way to run complex list comprehensions like the following on GPU?
[[x[index] if x[index] < len(x) else x[index]-1 for x in slice] if (len(slice)==1) else slice for slice, index in zip(slices, indices)]
To what degree is this possible? Do I have to convert it to some kind of NumPy operation first, and if so, which parts specifically are possible/necessary?
The goal is performance optimization on large data lists/arrays.
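For reference, this is roughly the kind of vectorized rewrite I'm imagining, using CuPy as a drop-in NumPy replacement for the GPU part. It's only a sketch under my own assumptions (that each single-row slice can be converted to a 2D array, and that slices/indices are structured as in my comprehension above); I'm not sure it's the right approach:

import cupy as cp  # drop-in NumPy replacement; use "import numpy as cp" on CPU

def process(slices, indices):
    out = []
    for s, idx in zip(slices, indices):
        if len(s) != 1:
            out.append(s)       # multi-row slices are kept unchanged
            continue
        arr = cp.asarray(s)     # shape (1, n): the single row x in this slice
        vals = arr[:, idx]      # x[index] for each row x
        # vectorized form of: x[index] if x[index] < len(x) else x[index] - 1
        out.append(cp.where(vals < arr.shape[1], vals, vals - 1).tolist())
    return out

The per-slice Python loop still runs on the CPU here; presumably the whole thing would only really benefit from the GPU if the slices were padded into one big array so a single cp.where could cover everything. That's part of what I'm asking.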
Topic: numpy, gpu, performance
Category: Data Science