
Multi-task learning for cancer segmentation and classification from medical images

Engineer: IKRAM IAICH
Organisation: ENSIAS
Language: French
Promotion: 2021
Year: 3

Abstract

Cancer is a life-threatening disease that demands an accurate and early diagnosis. During the treatment process, a large amount of imaging data is generated. This data is usually complex and difficult to interpret by eye, because of the noise introduced by imaging systems and the diversity of modalities, sources, and sizes.

Artificial intelligence is now applied across all aspects of medical image analysis, including oncology, making high-precision diagnosis possible. It is proving to be one of the best ways to analyze and make predictions from large imaging datasets. The advent of AI methods, and of deep learning in particular, has brought great success in many computer vision applications such as medical image classification and segmentation. Both tasks contribute to a better understanding of cancer and a better definition of the most effective treatment. However, classification and segmentation models are computationally expensive: they require large datasets, substantial memory, and long training times.

Multi-task learning, a deep learning training paradigm, is an effective way to overcome these limitations of single-task classification and segmentation. It improves the results of both tasks by using a single model that shares the necessary resources.
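The resource sharing described here is usually realized through hard parameter sharing: a single encoder is trained once and feeds two task-specific heads. A minimal NumPy sketch of that idea (the layer sizes, random weights, and single-layer heads are illustrative assumptions, not the report's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" batch: 2 images of 8x8 pixels, flattened to 64 values each.
x = rng.standard_normal((2, 64))

# Hard parameter sharing: ONE encoder weight matrix is used by both tasks,
# so its parameters (and the cost of training them) are paid only once.
W_enc = rng.standard_normal((64, 16))      # shared encoder (assumed sizes)
W_seg = rng.standard_normal((16, 64))      # segmentation head: per-pixel scores
W_cls = rng.standard_normal((16, 3))       # classification head: 3 classes

features = np.maximum(x @ W_enc, 0.0)      # shared representation (ReLU)
seg_logits = features @ W_seg              # shape (2, 64): one score per pixel
cls_logits = features @ W_cls              # shape (2, 3): one score per class

print(seg_logits.shape, cls_logits.shape)
```

Because `features` is computed once per image, gradients from both heads update the same encoder during training, which is what lets each task benefit from the other's supervision.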

In this context, the objective of this report is to implement a deep learning based solution for the classification and segmentation of different types of cancer from medical images of different modalities. To meet this objective, we built two multi-task models for the classification and segmentation tasks, in order to later select the better of the two. Both models are based on the U-Net architecture, with its encoder replaced by a pre-trained model (VGG-16, MobileNetV2). The proposed models are evaluated against single-task techniques for image segmentation and classification, namely U-Net for segmentation and VGG-16 and MobileNetV2 for classification. The obtained results show very encouraging performance of the multi-task models compared to the single-task ones.
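During training, a multi-task model of this kind typically minimizes a weighted sum of a segmentation loss and a classification loss, back-propagated through the shared encoder. A toy sketch of such a combined objective (the soft Dice / cross-entropy pairing and the 0.5/0.5 weights are assumptions for illustration, not values taken from the report):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary mask (a common segmentation objective)."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class (classification objective)."""
    return -np.log(probs[label] + 1e-12)

# Toy predictions for one sample.
seg_pred = np.array([0.9, 0.8, 0.1, 0.2])   # predicted mask probabilities
seg_true = np.array([1.0, 1.0, 0.0, 0.0])   # ground-truth binary mask
cls_probs = np.array([0.7, 0.2, 0.1])       # predicted class distribution
cls_label = 0                               # true class index

# Multi-task objective: a weighted sum of both losses, whose gradients
# flow through the shared encoder (equal weights are an assumption).
loss = 0.5 * dice_loss(seg_pred, seg_true) + 0.5 * cross_entropy(cls_probs, cls_label)
print(round(loss, 4))  # ≈ 0.2533
```

The relative weights control how much each task shapes the shared representation; tuning them is one of the practical knobs in multi-task training.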