How to solve a MemoryError problem

I've created and normalized a colored image dataset of 3716 samples of size 493×491 and stored it in x_train. Its type is list, and I'm trying to convert it into a NumPy array as follows:

from matplotlib import image
import numpy as np
import cv2

IMG_HEIGHT, IMG_WIDTH = 493, 491  # target size, matching the shape in the error below

def prepro_resize(input_img):
    oimg = image.imread(input_img)
    # cv2.resize expects the target size as (width, height)
    return cv2.resize(oimg, (IMG_WIDTH, IMG_HEIGHT), interpolation=cv2.INTER_AREA)

# resize and normalize each image, then stack the list into one float32 array
x_train_ = [prepro_resize(p).astype('float32') / 255.0 for p in x_train]

x_train_ = np.array(x_train_) #L1
#print(x_train_.shape)

but I get the following error when the line marked #L1 runs:

MemoryError: Unable to allocate 10.1 GiB for an array with shape (3716, 493, 491, 3) and data type float32
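That 10.1 GiB figure checks out: the array would hold 3716 × 493 × 491 × 3 float32 values at 4 bytes each (a quick sanity check using only the numbers from the error message):

n_bytes = 3716 * 493 * 491 * 3 * 4  # 10,794,073,296 bytes
print(n_bytes / 2**30)              # ~10.05 GiB, i.e. the reported 10.1 GiB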

Topic: opencv numpy tensorflow deep-learning python

Category: Data Science


You could try the following:

1.) Convert to greyscale images instead of RGB if your application does not need color. A greyscale image has one channel instead of three, so it consumes a third of the memory (see the sketch after this list).

2.) Resize the images to a lower resolution than the current 493×491 (also shown in the sketch below).
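A minimal sketch combining both suggestions, assuming x_train is a list of file paths as in the question; it swaps in cv2.imread with the IMREAD_GRAYSCALE flag in place of matplotlib's image.imread, and the 128×128 target size is just an example:

import cv2
import numpy as np

IMG_HEIGHT, IMG_WIDTH = 128, 128  # example target resolution, far below 493x491

def prepro_resize_gray(path):
    # read as single-channel greyscale instead of 3-channel RGB
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # cv2.resize expects the target size as (width, height)
    return cv2.resize(img, (IMG_WIDTH, IMG_HEIGHT), interpolation=cv2.INTER_AREA)

x_train_ = np.array([prepro_resize_gray(p).astype('float32') / 255.0 for p in x_train])
print(x_train_.shape)  # (3716, 128, 128)

One greyscale channel at 128×128 needs 3716 * 128 * 128 * 4 bytes, about 0.23 GiB, roughly 44 times less than the original 10.1 GiB allocation.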

Cheers!
