Hello, I am

Fatima Yousif

MSc Student in Intelligent Field Robotic Systems (currently in the Netherlands)

ROS
ROS2
Git
OpenCV
PyTorch
Python
C++
MATLAB
Stonefish
TensorFlow
Gazebo

About Me

I am an Erasmus Mundus Joint Master's Scholar in Intelligent Field Robotic Systems at the Universitat de Girona, Spain, and the University of Zagreb, Croatia. My research interests span robotics, computer vision, and deep/machine learning.

I am passionate about exploring the intersection of these fields to push the boundaries of technology and create innovative solutions that address complex real-world challenges.

Additionally, I have proven leadership qualities demonstrated in global communities including Google Developer Student Clubs, TEDx, Google Developer Groups Live, U.S. Embassy programs, and 10Pearls.

Education

2023 - Present

Erasmus Mundus Joint Master’s in Intelligent Field Robotic Systems | Universitat de Girona

Semester I & II in Girona: Autonomous Systems, Machine Learning, Multiview Geometry, Probabilistic Robotics (Kalman Filtering), Robot Manipulation, Localization (SLAM), Planning, Perception (Computer Vision), and Intervention.

Semester III in Zagreb: Aerial Robotics, Multi-Robot Systems, Human-Robot Interaction, Robotic Sensing, Perception, & Actuation, Deep Learning, and Ethics & Technology.

2018 - 2022

B.E. in Software Engineering | Mehran University of Engineering and Technology

Agent-Based Intelligent Systems, Data Science & Analytics, Simulation & Modeling, Cloud Computing, Statistics and Probability

CGPA 3.96 / 4.00 - Silver Medal Distinction & First Position

Experiences

03-03-2025 - Present

Master's Thesis Intern | Saxion Smart Mechatronics and Robotics Research Group

Currently working on my Master's thesis within the KIEM project “Avoiding the Invisible”. My research focuses on algorithms for multimodal target tracking and drone-based following, with three key objectives: multimodal (visual and thermal) target detection and tracking, drone control for target following, and integration of both into a unified software pipeline.

21-10-2024 - 23-10-2024

ROSCon 2024 Diversity Scholar | Open Robotics, Denmark

I secured a diversity scholarship to attend ROSCon 2024 in Denmark, where I had the opportunity to network with companies and ROS contributors from around the world. I also gained extensive hands-on experience in two workshops: “Open source, open hardware hand-held mobile mapping system for large scale surveys”, which covered processes such as LiDAR odometry and multi-session refinement for large-scale mapping, and “ros2_control”, where we learned about controller chaining, fallback controllers, and async controllers.

06-2024 - 08-2024

Robotics Intern | Paltech Robotics GmbH

Tested and compared two new ultrasonic sensors (Bosch and Valeo) for the obstacle-avoidance task, adding a safety-braking feature that sets thresholds in ROS2 to slow down or stop the robot. This involved multiple field tests in grass of varying heights.

My Projects

Sim2Real: Controlling a Swarm of Crazyflies using Reynolds Rules and Consensus Protocol

This project implements swarm control for Crazyflie UAVs using Reynolds Rules for flocking and a Consensus Protocol for coordinated movement. It integrates rendezvous and formation control in ROS2 and Gazebo, enabling agents to converge and maintain geometric formations. Tested in both simulation and real-world environments, the system demonstrates adaptability and scalability, with results highlighting the impact of communication topologies on swarm dynamics.
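At its core, the rendezvous behaviour is a discrete-time consensus update over the graph Laplacian of the communication topology. A minimal sketch (the graph, step size, and positions below are illustrative, not the project's actual configuration):

```python
import numpy as np

def consensus_step(positions, adjacency, epsilon=0.1):
    """One discrete-time consensus update: x <- x - eps * L @ x,
    where L = D - A is the graph Laplacian of the communication topology."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    return positions - epsilon * laplacian @ positions

# Three agents on a line graph: 0-1, 1-2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
x = np.array([[0.0, 0.0],
              [4.0, 2.0],
              [8.0, 4.0]])  # 2D positions

for _ in range(200):
    x = consensus_step(x, A)
# For a connected undirected graph, all agents converge to the
# centroid of the initial positions (rendezvous)
```

The choice of adjacency matrix is exactly where communication topology enters: sparser graphs still converge, just more slowly.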

Stereo Visual-Odometry (VO) on the KITTI Dataset

This project implements a stereo VO pipeline in Python on the KITTI dataset. It extracts SIFT features, matches them with BFMatcher, and triangulates points to estimate the camera's motion (with respect to its starting position) in 3D space by minimizing the 3D-to-2D reprojection error with PnP and RANSAC.
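The quantity that PnP + RANSAC minimizes is the 3D-to-2D reprojection error. A small numpy sketch of that error term, with illustrative KITTI-like intrinsics (the point coordinates are made up for the example):

```python
import numpy as np

def reproject(points_3d, R, t, K):
    """Project 3D points (N, 3) into the image using pose (R, t) and intrinsics K."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T   # world -> camera frame
    pix = (K @ cam.T).T                           # apply intrinsics
    return pix[:, :2] / pix[:, 2:3]               # perspective divide

def reprojection_error(points_3d, observed_2d, R, t, K):
    """Mean Euclidean pixel error -- the cost PnP + RANSAC drives down."""
    return np.linalg.norm(reproject(points_3d, R, t, K) - observed_2d, axis=1).mean()

K = np.array([[718.856, 0.0, 607.1928],   # KITTI-like intrinsics (illustrative)
              [0.0, 718.856, 185.2157],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.5, 0.2, 8.0], [-1.0, 0.4, 12.0], [2.0, -0.3, 15.0]])
R, t = np.eye(3), np.zeros(3)
obs = reproject(pts, R, t, K)   # synthetic, noise-free observations
# the error is zero at the true pose and grows as the pose is perturbed
```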

Aerial Robotics

In this Aerial Robotics course, lab work included the design and implementation of attitude control of a quadrotor, cascade control of a single quadrotor axis in MATLAB, and cascade horizontal control of a quadrotor in the Gazebo simulator and on a real DJI Tello quadrotor.

Deep Learning

Lab work for this Deep Learning course consisted of PyTorch implementations covering logistic regression and gradient descent, fully connected models on the MNIST dataset, convolutional models for image classification on MNIST and CIFAR, recurrent models for sentiment classification on the Stanford Sentiment Treebank (SST) dataset, and detailed implementations of metric embeddings.
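The course labs used PyTorch; as a standalone illustration of the first exercise, the logistic-regression gradient-descent update can be sketched in plain numpy (toy data and hyperparameters chosen only for the example):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=500):
    """Batch gradient descent on the binary cross-entropy loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)             # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)    # dL/dw of cross-entropy
        grad_b = (p - y).mean()            # dL/db
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Linearly separable toy data: two Gaussian clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w, b = train_logreg(X, y)
acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
```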

Human Detection and Tracking

This project focuses on human detection and tracking using the state-of-the-art YOLOv9 object detection model and the DeepSORT multi-object tracking algorithm. The methodology integrates Kalman filtering for motion prediction and deep learning-based appearance matching. The system is tested under various conditions, addressing challenges such as occlusions, identity switches, and tracking interruptions.
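DeepSORT's motion prediction is a Kalman filter over a constant-velocity state. A simplified sketch of the predict/correct cycle on a 4-state position/velocity model (the state layout and noise values here are illustrative, not DeepSORT's actual 8-state track parameterization):

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Kalman prediction: propagate state and covariance through the motion model."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Kalman correction: fuse the prediction with measurement z."""
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is observed (the detector box)
Q = 0.01 * np.eye(4)
R = 0.1 * np.eye(2)

x = np.array([0.0, 0.0, 1.0, 0.5])   # [px, py, vx, vy]
P = np.eye(4)
x, P = kf_predict(x, P, F, Q)        # predicted position: (1.0, 0.5)
x, P = kf_update(x, P, np.array([1.1, 0.4]), H, R)
# the corrected position lies between the prediction and the detection
```

During occlusions the update step is simply skipped, which is why covariance growth under repeated prediction matters for re-association.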

Frontier Based Exploration Using Kobuki Turtlebot

Frontier exploration project using an RGB-D camera mounted on a Kobuki Turtlebot. The project integrates advanced path-planning techniques, combining the RRT* algorithm with Dubins paths to map unknown environments. Additionally, a hybrid control system merging PID control with principles from the Pure Pursuit controller is used to optimize the robot's velocity profiles. The implementation is in Python within the ROS framework, with simulation testing in the Stonefish simulator before real-world testing.
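Frontier-based exploration hinges on identifying frontier cells: free cells adjacent to unknown space, which become candidate exploration goals. A minimal sketch using common occupancy-grid value conventions (illustrative, not the project's exact map format):

```python
import numpy as np

FREE, UNKNOWN, OCCUPIED = 0, -1, 100   # common occupancy-grid conventions

def frontier_cells(grid):
    """Return (row, col) of free cells with at least one unknown 4-neighbour."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = np.array([[0,   0,  -1],
                 [0, 100,  -1],
                 [0,   0,   0]])
# Only free cells that border unknown (-1) cells are frontiers
```

In practice frontier cells are then clustered, and the planner (here RRT* with Dubins paths) is asked to reach a representative cell of the chosen cluster.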

Monocular Visual Odometry for an Underwater Vehicle

Monocular visual odometry (VO) for an Autonomous Underwater Vehicle (AUV) through an integrated approach combining VO with extended Kalman filter (EKF) based navigation. The methodology employs SIFT feature detection and FLANN matching to process images from a ROS bag. A key contribution of this work is incorporating an EKF to provide a refined estimate of the vehicle's motion and trajectory.

Pose Based SLAM using the Extended Kalman Filter on a Kobuki Turtlebot

Pose-based SLAM using the Extended Kalman Filter (EKF), incorporating view poses in which environmental scans are integrated into the state vector. The algorithm was evaluated through both simulation and real-world testing.
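The prediction step of a pose EKF for a differential-drive base can be sketched as follows (the motion model, noise values, and inputs are illustrative, and the real filter additionally augments the state with view poses):

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """EKF prediction for a differential-drive pose x = [px, py, theta],
    given linear velocity v and angular velocity w."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0,  1]])
    return x_pred, F @ P @ F.T + Q

x = np.array([0.0, 0.0, 0.0])
P = 0.01 * np.eye(3)
Q = 0.001 * np.eye(3)
x, P = ekf_predict(x, P, v=1.0, w=0.0, dt=1.0, Q=Q)
# straight-line motion: pose moves to roughly [1, 0, 0] while uncertainty grows
```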

Kinematic Control System for a Mobile Manipulator, based on the Task-Priority Redundancy Resolution Algorithm

A kinematic control system derived and implemented on a differential-drive robot (Kobuki Turtlebot 2) fitted with a 4-DOF manipulator (uFactory uArm Swift Pro). The system is based on the task-priority redundancy resolution algorithm and is implemented using ROS and the Stonefish simulator.
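The core of task-priority redundancy resolution is projecting lower-priority task velocities into the null space of the higher-priority task's Jacobian, so the secondary task can never disturb the primary one. A two-task numpy sketch with illustrative Jacobians (not the mobile manipulator's actual kinematics):

```python
import numpy as np

def task_priority(J1, err1, J2, err2):
    """Two-task priority resolution: the secondary task acts only in the
    null space of the primary task's Jacobian."""
    J1_pinv = np.linalg.pinv(J1)
    dq1 = J1_pinv @ err1
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1        # null-space projector of task 1
    dq2 = np.linalg.pinv(J2 @ N1) @ (err2 - J2 @ dq1)
    return dq1 + N1 @ dq2

# Illustrative 3-DOF system: the primary task couples joints 1-2,
# the secondary task only involves joint 3
J1 = np.array([[1.0, 1.0, 0.0]])
J2 = np.array([[0.0, 0.0, 1.0]])
dq = task_priority(J1, np.array([2.0]), J2, np.array([0.5]))
# the primary task velocity is achieved exactly, and here the secondary one too
```

Stacking more tasks repeats the same pattern, each projected through the accumulated null space of all higher-priority tasks.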

ROS2 Collision Avoidance Using Cross and Direct Echo of Bosch Ultrasonic Sensor Systems

Tested and compared two ultrasonic sensors (Bosch and Valeo) for the obstacle-avoidance task, adding a safety-braking feature that sets thresholds in ROS2 to slow down or stop the robot for collision avoidance. This involved multiple field tests in grass of varying heights.

SLAM - Differential Drive Mobile Robot

Simultaneous Localization and Mapping (SLAM) algorithms for a differential-drive mobile robot, with Python simulations and plotting.

Behaviour Trees for Pick-and-Place of Objects

Used the py_trees library to build behaviour trees; the results were tested in TurtleBot simulations in RViz across complex environments involving path planning and obstacle avoidance.

Pick and Place Application with the Staubli TS60 and TX60 Robot

Worked with industrial manipulators (Staubli TS60 and TX60) on classification, piece assembly, and pick-and-place tasks in simulation alongside real-robot implementation.

Palletizing Application with UR3e Collaborative Robot (CoBot)

Developed a pick-and-place program for pallets, implementing a palletizing application with the industrial UR3e collaborative robot (cobot).

Stereo Visual Odometry (VO) for Grizzly Robotic Utility Vehicle

Developed a VO pipeline spanning stereo camera calibration, feature extraction, and matching with SURF features, using bucketing strategies and circular matching for accurate apparent-motion computation and effective noise/outlier rejection, plus structure from motion (2D-to-2D, 3D-to-2D, and 3D-to-3D) for triangulation and refinement via bundle adjustment. The final VO trajectory was also extensively compared with GPS-generated ground-truth data.

Event Based Cameras (EBC)

Worked with event-based cameras (EBCs), examining event data alongside ground truth from a DAVIS sensor. Used a frame-based approach to encode raw event data into frames compatible with CNNs and RNNs, and applied motion compensation.

Machine Vision Projects

Contributed to projects such as augmented reality, camera calibration, ArUco marker detection, and fiducial marker generation, using computer vision and image processing in C++.

Reinforcement Learning-Based Path Planning for Autonomous Robots in Static Environments

Implemented the Q-learning algorithm on a point (omnidirectional) robot for path planning and navigation purposes.
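As an illustration of the tabular Q-learning update involved (the environment, rewards, and hyperparameters below are a toy stand-in, not the project's setup):

```python
import numpy as np

def q_learning(n_states=5, goal=4, episodes=500, alpha=0.5, gamma=0.9, eps=0.3):
    """Tabular Q-learning on a 1D corridor: actions 0=left, 1=right,
    reward 1 for reaching the goal state, 0 otherwise."""
    rng = np.random.default_rng(0)
    Q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
            s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
            r = 1.0 if s2 == goal else 0.0
            # Q-learning update: bootstrap from the greedy next-state value
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

Q = q_learning()
policy = Q.argmax(axis=1)   # greedy policy: move right at every pre-goal state
```

The same update generalizes to the 2D grid of an omnidirectional robot by enlarging the state and action sets; only the transition function changes.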

Image Captioning Deep Learning Model

Developed this Final Year Project using deep learning, computer vision, and data mining technologies, including the Keras library, with Flask in the backend and deployment on AWS.

Contact Me