Research

Research that stays close to real systems.

My research focuses on graph-based AI, LLM reasoning and linguistic evaluation, agentic systems, and model efficiency, with an emphasis on how systems reason, retrieve, and fail under real constraints.

Current themes

  • Graph representation learning
  • LLM reasoning and linguistic evaluation
  • Agentic AI

Selected recent work

TAPS: Task Aware Proposal Distributions for Speculative Sampling (2026)

Research on speculative decoding for large language models, focused on task-aware draft model training and routing strategies.
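The core speculative-sampling loop the work builds on can be sketched in a few lines. This is a generic illustration of the standard accept/reject rule over toy next-token distributions, not the TAPS method itself; the distributions `p` and `q` below are made-up examples.

```python
import random

def speculative_step(p, q, rng):
    """One speculative-sampling step: a draft model proposes a token from
    its distribution q, and the target distribution p accepts or resamples
    so that the final sample is distributed exactly according to p."""
    vocab = range(len(p))
    # Draft model proposes a token.
    x = rng.choices(vocab, weights=q)[0]
    # Accept with probability min(1, p[x] / q[x]).
    if rng.random() < min(1.0, p[x] / q[x]):
        return x
    # On rejection, resample from the residual max(0, p - q), renormalized.
    residual = [max(0.0, pi - qi) for pi, qi in zip(p, q)]
    total = sum(residual)
    return rng.choices(vocab, weights=[r / total for r in residual])[0]

rng = random.Random(0)
p = [0.6, 0.3, 0.1]   # target model's next-token distribution (toy values)
q = [0.3, 0.5, 0.2]   # draft model's proposal distribution (toy values)
samples = [speculative_step(p, q, rng) for _ in range(10000)]
print(samples.count(0) / len(samples))  # ≈ 0.6, matching the target p
```

Task-aware variants change where `q` comes from, e.g. training or routing among draft models per task so that `q` stays close to `p` and acceptance rates stay high.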

Chained Prompting for Better Systematic Review Search Strategies (2025)

Work on LLM-based chained prompting for building systematic review search strategies with strong recall and reproducibility.
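The chaining idea is simple to sketch: each stage's output becomes part of the next stage's prompt, so a research question is progressively turned into a Boolean search query. The stage prompts and the `stub_llm` below are hypothetical placeholders for a real LLM call, included only to make the chain runnable.

```python
def run_chain(question, llm):
    """Hypothetical three-stage prompt chain for building a search strategy:
    extract concepts, expand synonyms, then compose a Boolean query."""
    concepts = llm(f"List the key concepts in: {question}")
    synonyms = llm(f"Expand each concept with synonyms: {concepts}")
    query = llm(f"Compose a Boolean search query from: {synonyms}")
    return query

def stub_llm(prompt):
    # Stand-in for a real LLM call; just echoes the prompt's payload.
    return prompt.split(": ", 1)[1]

print(run_chain("machine learning for radar sensing", stub_llm))
```

Fixing the chain's structure, rather than issuing one free-form prompt, is what makes the resulting search strategies reproducible across runs.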

Optimizing Deep Neural Networks using Safety-Guided Self Compression (2025)

Work on safety-guided compression methods for improving neural network efficiency under deployment constraints.
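The "safety-guided" idea can be illustrated with a toy example: reduce weight precision step by step, but keep a coarser precision only while the model's output stays within a tolerance of the uncompressed reference. This is a minimal sketch on a made-up linear model, not the paper's actual method; `tol` and the bit widths are illustrative assumptions.

```python
def quantize(weights, bits):
    """Uniformly quantize weights to a signed grid with the given bit width."""
    scale = max(abs(w) for w in weights) or 1.0
    levels = 2 ** (bits - 1) - 1
    return [round(w / scale * levels) / levels * scale for w in weights]

def model_output(weights, x):
    # Toy stand-in for a network: a single linear layer's response.
    return sum(w * xi for w, xi in zip(weights, x))

def safe_compress(weights, x, tol=0.05):
    """Try progressively coarser precisions, accepting each only if the
    output drift stays within the safety tolerance."""
    ref = model_output(weights, x)
    best = weights
    for bits in (8, 4, 2):
        cand = quantize(weights, bits)
        if abs(model_output(cand, x) - ref) <= tol * abs(ref):
            best = cand   # drift within tolerance: accept this precision
        else:
            break         # safety bound violated: stop compressing
    return best

w = [0.9, -0.4, 0.25]
x = [1.0, 2.0, 3.0]
print(safe_compress(w, x))
```

The safety signal turns compression into a constrained search: precision is reduced only as far as the deployment constraint allows.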

AraLingBench: A Human-Annotated Benchmark for Evaluating Arabic Linguistic Capabilities of Large Language Models (2025)

Benchmarking work focused on evaluating Arabic linguistic capabilities in large language models.

Earlier publications

On Edge Human Action Recognition Using Radar-Based Sensing and Deep Learning (2023)

Published in IEEE Transactions on Industrial Informatics.

A CNN-Based Method for Discriminating Shadowed Targets in FMCW Radar Systems (2022)

Published in Sensors.

Maritime Localization System Based on IoT (2021)

Presented at the IEEE International Conference on Electronics, Circuits, and Systems.

Convolutional Neural Networks for Arabic Font Recognition (2019)

Presented at the International Conference on Signal-Image Technology & Internet-Based Systems.