
Hamid Palangi

Senior Researcher

About

I am a member of MSR AI at Microsoft Research.

My current research interests are mainly in the areas of Natural Language Processing and Reasoning across Language and Vision. In the past, I was interested in Linear Inverse Problems [focusing on Sparse Decomposition and Compressive Sensing].

Before joining MSR AI, I worked at MSR on deep learning methods for Speech Recognition (2013), Sentence Modeling for Web Search and Information Retrieval (2014, IEEE Signal Processing Society Best Paper award 2018), and Image Captioning (2016).

I also work as a mentor for the Microsoft AI School advanced projects class (AI-611) [currently only available to Microsoft FTEs].

News:

  • [2019] Capturing the shared structure among different NLP datasets is key to achieving meaningful transferability among them. We show that disentangling data-specific semantics from general language structure is central to this, and that our proposed model, HUBERT, unlike BERT, is able to learn and leverage this structure effectively. Check it out here.
  • [2019] Will be helping as an area chair for the multimodality track at ACL 2020.
  • [2019] How can we leverage the large number of image-text pairs available on the web to mimic the way people improve their scene and language understanding? Can we build a model that unifies machine capabilities to perform well on both vision-language generation and understanding tasks? Our work on large-scale language and vision pretraining is a step in this direction. You can download the code and give it a try! Three short posts related to this work are available at the MSR Blog, VentureBeat, and Medium.
  • [2019] Leveraging neuro-symbolic representations to solve math problems helps us better understand neural models and impose necessary discrete inductive biases on them. What are the necessary ingredients for these types of structures to be effective in such reasoning tasks? Our recent work proposes one! Part of our initial results for this work will be presented at NeurIPS 2019's KR2ML workshop.
  • [2019] How can large-scale neural scene graph parsers benefit challenging downstream tasks like text-image retrieval and image captioning? Check out our effort to address this question.
  • [2019] Will be helping as an audio-video chair at ACL 2020.
  • [2019] Our 2014 work on sentence embedding for web search and IR was selected for the IEEE Signal Processing Society 2018 Best Paper Award (Test of Time). It was announced during ICASSP 2019 in Brighton, UK. Congratulations to the team and wonderful collaborators!
  • [2018] Epilepsy is one of the most common neurological disorders in the world, affecting over 70 million people globally, 30 to 40 percent of whom do not respond to medication. Our recent work published in Clinical Neurophysiology proposes optimized DNN architectures to detect epileptic seizures.
  • [2018] Our work on perceptually de-hashing image hashes for similarity retrieval will appear in Signal Processing: Image Communication.
  • [2018] Our work on robust detection of epileptic seizures will be presented at ICASSP 2018.
  • [2018] Our work on designing DL models with higher interpretability using TPRs was accepted at AAAI 2018 (Oral Presentation). You can download the paper here. Two short posts related to this work are available here and here.
  • [2017] Presented a tutorial at IEEE GlobalSIP 2017 @ Montreal about DL tools and frameworks. If, like me, you have asked yourself "which DL framework should I choose for my project?", check it out here.
  • [2017] Our recent results leveraging grammatically-interpretable learned representations in deep NLP models will be presented at NIPS 2017's explainable AI workshop.
  • [2016] I have summarized what I learned from the Deep Learning Summer School in Montreal; check it out here.
  • [2016] Our recent work on sentence embedding for web search and IR will appear in IEEE/ACM Transactions on Audio, Speech, and Language Processing.
  • [2016] Our recent work proposing a deep learning approach for distributed compressive sensing will appear in IEEE Transactions on Signal Processing. Check out the paper and a post about it at Nuit Blanche. The code is open source; give it a try! For more information about compressive sensing, check here.