About me

I am a fourth-year Ph.D. student at the Visual Information Laboratory, University of Bristol, Bristol, UK, where I work on human movement analysis under the supervision of Professor Majid Mirmehdi. Prior to that, I obtained a Master's degree in Artificial Intelligence from Shahid Beheshti University (formerly the National University of Iran), Tehran, Iran, where I worked on object tracking methods.

In my free time I enjoy painting (especially watercolor), reading books (especially philosophy and psychology), traveling, and spending time with my family and friends.


Projects

VI-Net

View-Invariant Quality of Human Movement Assessment

We propose a view-invariant method for assessing the quality of human movements that does not rely on skeleton data. Our end-to-end convolutional neural network consists of two stages: first, a view-invariant trajectory descriptor is generated for each body joint from RGB images; then, the collection of trajectories for all joints is processed by an adapted, pretrained 2D CNN (e.g. VGG-19 or ResNeXt-50) to learn the relationships amongst the different body parts and deliver a score for the movement quality. We release the only publicly available, multi-view, non-skeleton, non-mocap rehabilitation movement dataset (QMAR), and provide results for both cross-subject and cross-view scenarios on this dataset. We show that VI-Net achieves an average rank correlation of 0.66 on cross-subject and 0.65 on unseen views when trained on only two views. We also evaluate the proposed method on the single-view rehabilitation dataset KIMORE and obtain a rank correlation of 0.66 against a baseline of 0.62.

QMAR Dataset

QMAR is an RGB multi-view dataset for healthcare applications. It was recorded using 6 Primesense cameras with 38 healthy subjects, 8 female and 30 male. The subjects were trained by a physiotherapist to perform two different types of movements while simulating two ailments, resulting in four overall possibilities: a return walk to approximately the original position while simulating Parkinson's (W-P) or Stroke (W-S), and standing up and sitting down while simulating Parkinson's (SS-P) or Stroke (SS-S). The dataset includes RGB, depth, and skeleton data, although in this work we only use RGB. As capturing depth data from all 6 Primesense cameras was not possible due to infrared interference, the depth and skeleton data were retained from only view 2 at ≈ 0° and view 5 at ≈ 90°. The movements in QMAR were scored according to the severity of the abnormality, with score ranges of 0 to 4 for W-P, 0 to 5 for W-S and SS-S, and 0 to 12 for SS-P.
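Since the four movement types have different score ranges, a common preprocessing step when training a single regressor is to normalise each label by the maximum severity of its movement type, giving all movements a shared [0, 1] target range. A minimal sketch using the ranges stated above (the dictionary and function names are illustrative, not part of the dataset release):

```python
# Maximum severity score per QMAR movement type, as stated above.
MAX_SCORE = {"W-P": 4, "W-S": 5, "SS-P": 12, "SS-S": 5}

def normalise_score(movement, score):
    """Map a raw severity score to [0, 1] for its movement type."""
    if not 0 <= score <= MAX_SCORE[movement]:
        raise ValueError(f"score {score} out of range for {movement}")
    return score / MAX_SCORE[movement]
```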

August 2020

View-Invariant Human Pose

View-Invariant Pose Analysis for Human Movement Assessment from RGB Data

We propose a CNN regression method to generate high-level, view-invariant features from RGB images which are suitable for human pose estimation and movement quality analysis. The inputs to our network are body joint heatmaps and limb-maps, which help the network exploit geometric relationships between different body parts to estimate the features more accurately. A new multi-view, multimodal human movement dataset is also introduced, part of which is used to evaluate the proposed method. We present comparative experimental results on pose estimation using a manifold-based pose representation built from motion-captured data, and show that the new RGB-derived features provide pose estimates of similar or better accuracy than those produced from depth data, even from single views only.
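Joint heatmaps of the kind used as network input here are commonly built by placing a 2D Gaussian at each joint's image location, one channel per joint. A minimal sketch under that standard assumption (image size, sigma, and function name are illustrative, not the paper's exact implementation):

```python
import numpy as np

def joint_heatmap(jx, jy, height=64, width=64, sigma=2.0):
    """Heatmap with a 2D Gaussian peaked at joint position (jx, jy)."""
    ys, xs = np.mgrid[0:height, 0:width]
    return np.exp(-((xs - jx) ** 2 + (ys - jy) ** 2) / (2 * sigma ** 2))

# Channels for all joints can then be stacked into the network input.
```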

September 2019

Publications

2020

F. Sardari, A. Paiement, S. Hannuna, and M. Mirmehdi, "VI-Net: View-Invariant Quality of Human Movement Assessment", arXiv, 2020

2019

F. Sardari, A. Paiement, and M. Mirmehdi, "View-Invariant Pose Analysis for Human Movement Assessment from RGB Data", ICIAP, 2019

2017

F. Sardari and M. E. Moghaddam, "A hybrid occlusion-free object tracking method using particle filter and modified galaxy-based search meta-heuristic algorithm", Applied Soft Computing, 2017

2016

F. Sardari and M. E. Moghaddam, "An object tracking method using modified galaxy-based search algorithm", Swarm and Evolutionary Computation, 2016

2012

F. Sardari and M. E. Moghaddam, "A genetic-based generic filter for image impulse noise reduction", IET Image Processing, 2012



Awards

  • Awarded a fully funded international Ph.D. scholarship by King's College London
  • Awarded a fully funded international Ph.D. scholarship by the University of Bristol
  • Awarded a fully funded international Ph.D. studentship by the University of Nottingham
  • Offered admission to the M.Sc. in Artificial Intelligence at Shahid Beheshti University without taking the national entrance exam
  • Ranked first among M.Sc. students at Shahid Beheshti University, Tehran, Iran
  • Selected as an exceptional-talent student during my B.Sc. at Shahid Bahonar University, Kerman, Iran

Contact

Office

Visual Information Laboratory, University of Bristol

1 Cathedral Square

Trinity Street

Bristol, BS1 5DD

E-mail

faegheh.sardari(at)bristol.ac.uk