MR image segmentation for feature extraction

I have datasets of brain MR images with tumours; the tumours have already been selected manually by a physicist using ImageJ.

I have read about segmentation, but I still don't understand how features are extracted from a segmented image.

Should the images contain only the tumour on a black background, as shown in the images below, so that feature extraction runs on the whole image? Or are features extracted only from the region of interest, using an overlay or layer that specifies the ROI?

Also, would the discrete wavelet transform (DWT) be a good choice of descriptor?

Topic computer-vision feature-extraction machine-learning

Category Data Science


"Feature extraction" is essentially the process of reducing the input data (an image of X pixels) to a lower-dimensional representation that still carries enough information to perform the model's task. So you have two options: use hand-crafted features (for this you need to be the expert who decides what is a good and relevant feature) or train a feature extractor.
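To make your original question concrete, here is a minimal sketch (the image and mask values below are invented purely for illustration) contrasting features computed over the whole image with features computed only inside a binary ROI mask:

```python
import numpy as np

# Toy 4x4 grayscale "image" and a binary ROI mask (True = tumor pixel).
# Both arrays are made up here just to illustrate the idea.
image = np.array([[10, 10, 10, 10],
                  [10, 80, 90, 10],
                  [10, 85, 95, 10],
                  [10, 10, 10, 10]], dtype=float)
mask = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]], dtype=bool)

# Option 1: features over the whole image (background pixels dilute the stats).
whole_features = [image.mean(), image.std()]

# Option 2: features over the ROI only, selected with the mask.
roi = image[mask]                       # 1-D array of tumor pixels only
roi_features = [roi.mean(), roi.std()]

print(whole_features)  # mean 29.375 -- dragged down by the dark background
print(roi_features)    # mean 87.5   -- statistics of the tumor tissue itself
```

So blacking out the background and processing the whole image, versus keeping the original image and restricting the computation to the ROI via a mask, can give quite different feature values; masking is the cleaner option because background pixels never enter the statistics at all.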


For a hand-crafted feature extractor you need to understand what distinguishes the tumor area from the rest of the image (is it just a matter of grayscale intensity, where a threshold value would do? Or texture and spatial frequency, where discrete wavelets can help?). This is not an easy task (there is a reason the tumor was segmented by a physicist, though I would have expected a doctor of some sort). I am not an expert in MRI imaging, but I have found some papers that address exactly your question:

"Feature Extraction for MRI Segmentation - Wiley Online Library"

"Feature extraction and selection from MRI images for the brain tumor classification"
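As a rough illustration of the texture/spatial-frequency idea, here is a one-level 2-D Haar wavelet transform written out by hand (a wavelet library such as PyWavelets provides the same kind of decomposition); the energies of the detail bands are the sort of numbers one might use as DWT texture features. The normalisation and the flat test image are my own choices for the sketch:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform, written out by hand.
    img must have even height and width."""
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    cA = (a + b + c + d) / 4.0   # approximation (local average)
    cH = (a - b + c - d) / 4.0   # horizontal detail
    cV = (a + b - c - d) / 4.0   # vertical detail
    cD = (a - b - c + d) / 4.0   # diagonal detail
    return cA, cH, cV, cD

# A perfectly flat region has no texture: all energy ends up in cA.
flat = np.full((4, 4), 50.0)
cA, cH, cV, cD = haar_dwt2(flat)

# Texture descriptors are often built from the energy of the detail bands.
energies = [float((band ** 2).sum()) for band in (cH, cV, cD)]
print(energies)  # [0.0, 0.0, 0.0] for a textureless region
```

A textured region (e.g. tumor tissue with heterogeneous intensity) would produce non-zero detail energies, which is what makes such coefficients usable as discriminative features.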

If we are talking about training a feature extractor, then you need to define a model with an input and an output and some quantifiable training goal (loss/score function). From my understanding, your final goal is to segment just the tumor part of the image, right?

That means your model's output should be a binary image of the same size as the input image. The output pixels will be 0 for non-tumor pixels and 1 for tumor pixels. This is similar to the task of "salient object detection".

The only difference with segmentation is that your output image is binary. In segmentation you will get an output image where each pixel is classified to one of k segments. This might also be relevant to you if you are trying to both segment the tumor and classify it as one of several known tumor-classes. Or if you want to divide the image into different segments, with the tumor being one of them.
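To make the binary-versus-k-class distinction concrete, here is a toy sketch (the class ids and mask values are invented for illustration): a k-class segmentation map is collapsed to a binary tumor mask, which is then scored against a ground-truth mask with the Dice coefficient, a common overlap score for this kind of output:

```python
import numpy as np

# Hypothetical 3-class segmentation map:
# 0 = background, 1 = healthy tissue, 2 = tumor (ids chosen for this sketch).
seg = np.array([[0, 0, 1, 1],
                [0, 2, 2, 1],
                [0, 2, 2, 1],
                [0, 0, 1, 1]])

# Collapsing it to the binary task: tumor vs everything else.
pred_mask = (seg == 2)

# Ground-truth binary mask, e.g. the physicist's manual ImageJ selection.
gt_mask = np.array([[0, 0, 0, 0],
                    [0, 1, 1, 0],
                    [0, 1, 1, 0],
                    [0, 0, 0, 0]], dtype=bool)

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A|+|B|), a standard overlap score."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

print(dice(pred_mask, gt_mask))  # 1.0 -> perfect overlap in this toy case
```

The Dice coefficient (or a differentiable variant of it) is also a common choice for the quantifiable training goal mentioned above when training a segmentation model.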

If we are talking about a deep-learning model, both tasks share similar architectures known as Fully Convolutional Networks (with many different variations). You can read more about them in the paper "Fully Convolutional Networks for Semantic Segmentation". In these models, the output of the second-to-last layer (referred to as the "embedding layer") can usually be used as a feature extractor.
