
Saddam Abdulwahab defended his PhD

Supervised Monocular Depth Estimation Based on Machine and Deep Learning Models

Abstract: Depth estimation refers to measuring the distance of each pixel relative to the camera. It is crucial for many applications, such as scene understanding and reconstruction, robot vision, and self-driving cars. Depth maps can be estimated from stereo or monocular images. Depth estimation is typically performed through stereo vision, which involves several time-consuming stages such as epipolar geometry computation, rectification, and matching. Predicting depth maps from single RGB images, however, remains challenging, since object shapes must be inferred from intensity images that are strongly affected by viewpoint changes, texture content, and lighting conditions. Additionally, the camera only captures a 2D projection of the 3D world, and the apparent size and position of objects in the image can change significantly depending on their distance from the camera.
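
The stereo pipeline mentioned above can be illustrated with a short, hedged sketch (not taken from the thesis): after rectification, matching along epipolar lines yields a disparity map, and depth follows from Z = f·B/d. The file names, focal length, and baseline below are placeholder assumptions.

```python
# Illustrative sketch: depth from a rectified stereo pair with OpenCV block matching.
# Focal length, baseline, and image paths are assumed values, not thesis parameters.
import cv2
import numpy as np

# Load a rectified stereo pair as grayscale images (paths are hypothetical).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching searches for correspondences along epipolar lines of the rectified pair.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

focal_length_px = 700.0   # assumed focal length in pixels
baseline_m = 0.12         # assumed distance between the two cameras in metres

# Depth is inversely proportional to disparity; mask out invalid (<= 0) values.
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_length_px * baseline_m / disparity[valid]
```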

Stereo cameras have been deployed in systems to obtain depth map information. Although they perform well, their main drawbacks are the complex and expensive hardware setup they require and their time complexity, which limit their use. Monocular cameras, in turn, are simpler and cheaper; however, single images inherently lack the depth information needed to recover scene geometry. Thanks to the revolution in deep learning models, many approaches for predicting depth maps from monocular images have recently been proposed. However, most of these solutions produce blurry, low-resolution approximations of the depth map. In general, depth estimation requires suitable representation methods that can extract the features shared between a single RGB image and its corresponding depth map.
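
To make the monocular setting concrete, the following is a minimal, illustrative encoder-decoder sketch in PyTorch of the kind commonly used for dense depth regression. It is not the architecture developed in the thesis; the layer sizes, loss, and dummy data are arbitrary assumptions.

```python
# Minimal illustrative encoder-decoder for monocular depth regression (PyTorch).
# NOT the thesis model; layer sizes and the L1 loss are arbitrary assumptions.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: progressively downsample the RGB image and widen the channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the input resolution and regress one depth channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

# Training-step sketch: L1 loss between predicted and ground-truth depth maps.
model = TinyDepthNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
rgb = torch.randn(2, 3, 128, 128)      # dummy batch of RGB images
gt_depth = torch.rand(2, 1, 128, 128)  # dummy ground-truth depth maps
loss = nn.functional.l1_loss(model(rgb), gt_depth)
loss.backward()
optimizer.step()
```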

Consequently, this thesis contributes to two research lines in estimating depth maps (also known as depth images). The first line estimates depth based on the objects present in a scene, reducing the complexity of handling the complete scene; to this end, we developed new techniques and concepts based on both traditional and deep learning methods. The second research line estimates depth for a complete scene captured by a monocular camera, for which we developed more comprehensive techniques that achieve high precision with acceptable computational time to obtain more accurate depth maps.



Nasibeh Saffari defended her PhD

Analyzing the Breast Tissue in Mammograms Using Deep Learning

Abstract: Mammographic breast density (MBD) reflects the amount of fibroglandular breast tissue that appears white and bright on mammograms, and is commonly quantified as the percent breast density (PD%). MBD is both a risk factor for breast cancer and a factor that can mask tumours. However, accurate estimation of MBD by visual assessment remains a challenge due to poor contrast and large variations in the background adipose tissue of mammograms. In addition, correctly interpreting mammography images requires highly trained medical experts: the task is difficult, laborious, expensive, and prone to error.

Dense breast tissue can make breast cancer harder to identify and is associated with a higher risk of the disease; for example, women with high breast density have been reported to have a four to six times greater risk of developing breast cancer than women with low breast density. The key to breast density computation and classification is to correctly detect dense tissue in mammographic images. Many methods have been proposed to estimate breast density; however, most are not automated, and they are severely affected by low signal-to-noise ratio and by the variability of density in appearance and texture. A computer-aided diagnosis (CAD) system that helps the doctor analyze and diagnose mammograms automatically would therefore be valuable, and the current development of deep learning methods motivates us to improve existing breast density analysis systems.

The main focus of this thesis is to develop a system that automates breast density analysis, namely Breast Density Segmentation (BDS), Breast Density Percentage (BDP), and Breast Density Classification (BDC), using deep learning techniques, and to apply it to temporal mammograms acquired after treatment in order to analyze breast density changes and identify patients at risk.
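
As a hedged illustration of the breast density percentage described above (not the thesis implementation), the quantity can be computed from two binary masks: one segmenting the breast region and one segmenting dense tissue. The mask names, toy data, and NumPy-based computation below are assumptions for illustration only.

```python
# Illustrative sketch of breast density percentage (BDP) from segmentation masks.
# Assumes two binary masks predicted elsewhere (e.g. by a segmentation network);
# names and toy values are illustrative, not the thesis implementation.
import numpy as np

def breast_density_percentage(breast_mask: np.ndarray, dense_mask: np.ndarray) -> float:
    """Percentage of the breast area occupied by dense (fibroglandular) tissue."""
    breast_pixels = np.count_nonzero(breast_mask)
    if breast_pixels == 0:
        return 0.0
    # Only count dense pixels that lie inside the breast region.
    dense_pixels = np.count_nonzero(np.logical_and(dense_mask, breast_mask))
    return 100.0 * dense_pixels / breast_pixels

# Toy example: a 4x4 breast region containing a 2x2 dense patch -> 25% density.
breast_mask = np.zeros((8, 8), dtype=bool)
breast_mask[2:6, 2:6] = True
dense_mask = np.zeros((8, 8), dtype=bool)
dense_mask[3:5, 3:5] = True
print(breast_density_percentage(breast_mask, dense_mask))  # 25.0
```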


European Union Projects: BosomShield

BosomShield proposes to join two disciplines, pathological and radiological imaging, in a software tool that will analyse these images to classify breast cancer subtypes and, together with the patient's complete clinical history, predict the probability of relapse through distant metastasis. In addition, BosomShield will provide high-level training in breast cancer research to young researchers, offering the transferable skills needed for thriving careers underpinned by diverse disciplines: digital radiology and pathology, biomedicine, AI, privacy, and software development.
