How to run list comprehensions on GPU?

Is there a way to run complex list comprehensions like the following on GPU?

[[x[index] if x[index] < len(x) else x[index] - 1 for x in slice] if len(slice) == 1 else slice for slice, index in zip(slices, indices)]

To what degree is this possible? Do I have to convert it to some kind of NumPy expression (and if so, which part specifically is possible/necessary)?

The goal is performance optimization on large lists/arrays of data.
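As context for the question, the comprehension above can usually be vectorized with NumPy before any GPU library gets involved. A minimal sketch, assuming the missing operator in the condition is `<` and that `slices` holds lists of equal-length sequences (the sample data here is illustrative, not from the question):

```python
import numpy as np

# Hypothetical sample data in the shape the comprehension expects:
# `slices` is a list of lists of sequences, `indices` gives one index per slice.
slices = [[np.array([5, 1, 9])], [np.array([2, 7]), np.array([4, 0])]]
indices = [1, 0]

# The (repaired) comprehension from the question, for reference:
result = [
    [x[i] if x[i] < len(x) else x[i] - 1 for x in s] if len(s) == 1 else s
    for s, i in zip(slices, indices)
]

# The inner comprehension vectorizes with np.where once each slice
# is stacked into a 2-D array (this requires equal-length rows):
def inner(s, i):
    a = np.stack(s)   # shape (n_rows, row_len)
    v = a[:, i]       # x[index] for every row at once
    return np.where(v < a.shape[1], v, v - 1)
```

Once the logic is expressed as array operations like this, swapping the arrays to a GPU backend becomes a question of which library to use, which the answer below touches on.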

Topic: numpy, gpu, performance

Category: Data Science


There are several different methods.

One method is to use Numba, which is a just-in-time (JIT) compiler for Python. Numba can compile Python functions to target CUDA, NVIDIA's platform for GPU computation.
