Categories

Projects


Competitions

  • Avito Demand Prediction Kaggle competition

Check the code on GitHub

Avito launched a competition on Kaggle challenging users to predict demand for an online advertisement based on its full description (title, description, images, etc.), its context (where it was posted geographically, similar ads already posted), and historical demand for similar ads...

  • AI Challenger English-Chinese Machine Translation

    AI Challenger is a new Chinese platform for AI challenges. Their first contest focused on machine translation, and I wanted to try my NMT techniques on a system whose target language, Chinese, I knew nothing about.

    In that contest I participated as Marb and achieved a 25.50 BLEU score on their evaluation.

  • Building Reservation System

    During Summer 2013, the Faculty of Engineering at Alexandria University announced a competition to build an online room reservation system that automatically assigns rooms based on available courses, their capacity, and the equipment required in each room. I won first prize in this contest.


Experience

  • Working at Raisa

    I was able to get an offer from Raisa Egypt, a startup specializing in forecasting oil and gas production from land geological features.

  • Working at Microsoft Research Lab

    I have always wanted to work in the field of machine learning, and during my time at Valeo I participated in Kaggle competitions for learning purposes. Moving to the Microsoft Research Lab helped me gain more industrial and research experience in the field.

  • Working at Valeo

    After graduating from the Faculty of Engineering at Alexandria University, I was lucky to get a recommendation to Valeo, an automotive company. During my 2 years and 3 months at Valeo, I gained broad knowledge in software development.

  • Bkam Internship as Software Engineer

    Bkam was a fast-growing startup specializing in online price comparison between products available online and on-site from several stores, getting you the store link with the best price for a specific item.

    In my second year of university, I was lucky to get a recommendation from my TA Ahmed ElSharkasy to join them.


Publications


Tutorials


Paper Review

  • Out-of-Distribution Detection in Vision-Language Models: A Survey

    Vision-Language Models (VLMs) like CLIP have dramatically shifted the landscape of visual understanding. Trained on internet-scale image-text pairs, these models demonstrate remarkable zero-shot generalization, describing objects they have never explicitly seen during training. Yet this generalization comes with an underappreciated fragility: when deployed in the real world, VLMs routinely encounter inputs that bear no resemblance to anything...

  • Reasoning's Razor: When Thinking More Makes Safety Worse

    Large Reasoning Models (LRMs) like DeepSeek-R1 and QwQ-32B have become remarkably capable at solving complex problems through extended chain-of-thought. The natural instinct is to apply this power to safety-critical tasks: detecting harmful content, catching hallucinations, flagging policy violations. More reasoning = more accuracy = safer AI, right?

    A new paper challenges that intuition head-on. “Reasoning’s Razor”...

  • How Can LoRA parameters improve the detection of Near-OOD data?

    We’ve all come to love Low-Rank Adaptation (LoRA) for making it practical to fine-tune massive Large Language Models (LLMs) on our own data. The standard practice is simple: you inject small, trainable matrices into the model, fine-tune only them, and then, for deployment, you merge these new weights back into the original model to avoid any inference...

  • Weight Space Learning: Treating Neural Network Weights as Data

    In the world of machine learning, we often think of data as the primary source of information. But what if we started looking at the models themselves as a rich source of data? This is the core idea behind weight space learning, a fascinating and rapidly developing field of AI research. The real question in this post...

  • Decoding LLM Hallucinations: An In-Depth Survey Summary

    The rapid advancement of Large Language Models (LLMs) has brought transformative capabilities, yet their tendency to “hallucinate”—generating outputs that are nonsensical, factually incorrect, or unfaithful to provided context—poses significant risks to their reliability, especially in information-critical applications. A comprehensive survey (Huang et al., 2025) systematically explores this phenomenon, offering a detailed taxonomy, analyzing root causes,...

  • Out-of-Distribution Detection: Ensuring AI Robustness

    Deep neural networks can solve various complex tasks and achieve state-of-the-art results in multiple domains such as image classification, speech recognition, machine translation, robotics, and control. However, due to the distributional shift between collected training data and actual test data, a trained network’s performance differs between the training data and unseen real...

  • GAIA: Gradient-Based Attribution for OOD Detection

    Deep neural networks (DNNs) have shown incredible accuracy across numerous applications. However, their inability to handle out-of-distribution (OOD) samples can lead to unpredictable and potentially unsafe behavior. This post explores the recent paper on the Gradient Abnormality Inspection and Aggregation (GAIA) framework (Chen et al., 2023), which introduces an innovative approach to enhance OOD detection.

    Gradient-aware...

  • Survey on Uncertainty Estimation in Deep Learning

    A distinction between aleatoric and epistemic uncertainties is proposed in the domain of medical decision-making (Senge et al., 2014). Their paper explained that aleatoric and epistemic uncertainties are not distinguished in Bayesian inference. Moreover, the expectation over the model with respect to the posterior is used to get our prediction leading to an averaged epistemic...


Teaching

  • Empowering the Next Generation: My Journey with MISE

    In the ever-evolving landscape of artificial intelligence, there’s a pressing need to cultivate diverse talent and foster inclusivity in STEM fields. One initiative that has personally enriched my journey and allowed me to contribute meaningfully to this cause is MISE (Machine Learning in Science and Engineering), a transformative program dedicated to equipping high school...