Saddam Abdulwahab defended his PhD

Supervised Monocular Depth Estimation Based on Machine and Deep Learning Models

Abstract: Depth estimation refers to measuring the distance of each pixel relative to the camera. It is crucial for many applications, such as scene understanding and reconstruction, robot vision, and self-driving cars. Depth maps can be estimated from stereo or monocular images. Depth estimation is typically performed through stereo vision, which involves several time-consuming stages such as epipolar geometry estimation, rectification, and matching. Predicting depth maps from single RGB images, however, remains challenging, as object shapes must be inferred from intensity images that are strongly affected by viewpoint changes, texture content, and lighting conditions. Additionally, the camera captures only a 2D projection of the 3D world, and the apparent size and position of objects in the image can change significantly depending on their distance from the camera.
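The stereo pipeline mentioned above ultimately recovers depth from the per-pixel disparity between the rectified left and right views, via the standard pinhole relation depth = f · B / disparity. The following is a minimal illustrative sketch of that last conversion step (not code from the thesis; the function name and toy values are made up):

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m):
    """Convert a stereo disparity map to metric depth.

    Standard pinhole-stereo relation: depth = f * B / d, where f is the
    focal length in pixels, B the camera baseline in metres, and d the
    per-pixel disparity. Zero disparity maps to infinite depth.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Toy example: f = 700 px, baseline = 0.5 m
disp = np.array([[35.0, 70.0],
                 [0.0, 14.0]])
depth = disparity_to_depth(disp, 700.0, 0.5)
# 700 * 0.5 / 35 = 10 m, / 70 = 5 m, / 14 = 25 m; zero disparity -> inf
```

Note that this conversion is the easy part; the expensive stages are the rectification and matching needed to obtain the disparity map in the first place, which is one motivation for monocular approaches.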

Stereo cameras have been deployed in systems to obtain depth map information. Although they perform well, their main drawbacks are the complex and expensive hardware setup they require and their time complexity, which limit their use. Monocular cameras, in turn, are simpler and cheaper; however, single images inherently lack explicit depth information. Many approaches for predicting depth maps from monocular images have recently been proposed, thanks to the revolution in deep learning models. However, most of these solutions produce blurry, low-resolution depth maps. In general, depth estimation requires appropriate representation methods to extract the features shared between a single RGB image and its corresponding depth map.

Consequently, this thesis contributes to two research lines in estimating depth maps (also known as depth images). The first line estimates depth based on the objects present in a scene, reducing the complexity of handling the complete scene; to this end, we developed new techniques and concepts based on both traditional and deep learning methods. The second research line estimates depth for a complete scene captured by a monocular camera; here we developed more comprehensive techniques with high precision and acceptable computational time to obtain more accurate depth maps.

Read More


Nasibeh Saffari defended her PhD


Abstract: Mammographic breast density (MBD) reflects the amount of fibroglandular breast tissue, which appears white and bright on mammograms, and is commonly quantified as percent breast density (PD%). MBD is both a risk factor for breast cancer and a factor that can mask tumors. However, accurate estimation of MBD through visual assessment remains a challenge due to poor contrast and significant variations in background adipose tissue in mammograms. In addition, the correct interpretation of mammography images requires highly trained medical experts: it is difficult, laborious, expensive, and error-prone. Dense breast tissue can make breast cancer more difficult to identify and is associated with a higher risk of breast cancer; for example, women with high breast density have been reported to have a four to six times greater risk of developing the disease than women with low breast density. The key to breast density computation and classification is to correctly detect dense tissue in mammographic images. Many methods have been proposed to estimate breast density; however, most are not automated, and they are severely affected by a low signal-to-noise ratio and by variability in the appearance and texture of dense tissue. It would be more helpful to have a computer-aided diagnosis (CAD) system that assists the doctor in analyzing and diagnosing automatically. The current development of deep learning methods motivates us to improve existing breast density analysis systems. The main focus of this thesis is to develop a system that automates breast density analysis (Breast Density Segmentation (BDS), Breast Density Percentage (BDP), and Breast Density Classification (BDC)) using deep learning techniques, and to apply it to temporal mammograms acquired after treatment in order to analyze breast density changes and identify high-risk, suspicious patients.
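Once a segmentation model has produced a dense-tissue mask and a breast-region mask, the percent density is simply the ratio of dense pixels to breast pixels. A minimal sketch of that computation (illustrative only; the function name and toy masks are made up, and the thesis's actual BDP pipeline is not shown here):

```python
import numpy as np

def percent_density(dense_mask, breast_mask):
    """Percent breast density (PD%): dense-tissue pixels as a
    percentage of all pixels inside the segmented breast region."""
    dense_mask = np.asarray(dense_mask, dtype=bool)
    breast_mask = np.asarray(breast_mask, dtype=bool)
    total = breast_mask.sum()
    if total == 0:
        raise ValueError("breast mask is empty")
    dense = np.logical_and(dense_mask, breast_mask).sum()
    return 100.0 * dense / total

# Toy 4x4 example: the whole image is breast, a 2x2 corner is dense tissue
breast = np.ones((4, 4), dtype=bool)
dense = np.zeros((4, 4), dtype=bool)
dense[:2, :2] = True          # 4 of 16 breast pixels are dense
pd = percent_density(dense, breast)  # -> 25.0
```

In practice the hard part is producing reliable masks from low-contrast mammograms, which is exactly what the deep-learning segmentation stage (BDS) addresses; the percentage and the subsequent density class are derived from its output.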

Read More


Vivek Kumar Singh defended his PhD

Segmentation and Classification of Multimodal Medical Images based on Generative Adversarial Learning and Convolutional Neural Networks

Abstract: Medical imaging is an important means of early disease detection in most medical fields, providing a better prognosis for patients. However, properly interpreting medical images requires highly trained medical experts: it is difficult, time-consuming, expensive, and error-prone. It would be more beneficial to have a computer-aided diagnosis (CAD) system that can automatically outline possibly diseased tissues and suggest a diagnosis to the doctor. Current developments in deep learning methods motivate us to improve existing medical image analysis systems. In this thesis, we consider three different medical diagnosis tasks: breast cancer from mammograms and ultrasound images, skin lesions from dermoscopic images, and retinal diseases from fundus images. These tasks are very challenging due to the several sources of variability in the image capturing processes.


Read More