Yinya Huang

Yinya Huang is currently a postdoctoral fellow in the Department of Computer Science at the City University of Hong Kong. She received her Ph.D. in Computer Science from Sun Yat-sen University, where she was advised by Prof. Xiaodan Liang and Prof. Liang Lin at the Human Cyber Physical Intelligence Integration Lab (HCP-I2 Lab). During her Ph.D., she was also a research intern at Tencent AI Lab, Seattle, led by Dr. Dong Yu, where she had the honor of working with Prof. Meng Fang, Prof. Liwei Wang, Dr. Kun Xu, and Dr. Hongming Zhang. She also gained a strong background in logic and computational linguistics at the Institute of Logic and Cognition at Sun Yat-sen University, which has deepened and broadened her research on complex reasoning in natural language processing.

Her recent research interests lie in evaluating and improving models' System 2 thinking, especially large language models' complex reasoning (e.g., mathematical reasoning, theorem proving, logical reasoning, counterfactual thinking, and commonsense reasoning) and its further applications (e.g., operations research problem solving and automated assistants).

Yinya Huang is on the job market! Feel free to reach out if you are interested in research collaborations or have any job recommendations :D

Interests
  • Large Language Models
  • Complex Reasoning
  • Theorem Proving
Education
  • Ph.D. in Computer Science, 2023

    HCP Lab, Sun Yat-sen University

  • MPhil in Logic, 2018

    Institute of Logic and Cognition, Sun Yat-sen University

  • BPhil in Logic, 2015

    Institute of Logic and Cognition, Sun Yat-sen University

All Publications

(2024). FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving. The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (NeurIPS 2024 D&B Track).

(2024). Proving Theorems Recursively. The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024).

(2024). ATG: Benchmarking Automated Theorem Generation for Generative Language Models. 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024 Findings).

(2024). AlignedCoT: Prompting Large Language Models via Native-Speaking Demonstrations. The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024 Findings).

(2023). Discourse-Aware Graph Networks for Textual Logical Reasoning. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).

(2023). TRIGO: Benchmarking Formal Mathematical Proof Reduction for Generative Language Models. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023).

Selected Experience

  • Research Intern, Tencent AI Lab, Seattle (remote), 2021.12 - 2022.8
    Topic: Natural language explanation
    Mentors: Dr. Hongming Zhang, Dr. Lemao Liu, Dr. Dong Yu
  • Research Intern, Tencent AI Lab, Seattle (remote), 2020.5 - 2021.12
    Topic: Natural language reasoning
    Mentors: Prof. Liwei Wang, Prof. Meng Fang, Dr. Kun Xu, Dr. Dong Yu
  • Teaching Assistant, Sun Yat-sen University, 2019.9 - 2020.1
    Course Title: Introduction to Deep Learning
    Instructor: Prof. Xiaodan Liang
  • Teaching Assistant, Sun Yat-sen University, 2018.3 - 2018.7
    Course Title: Deep Learning in Practice
    Instructor: Prof. Liang Lin

Professional Service

Program Committee Member:

  • ICLR (2024)
  • NeurIPS (2024, 2023)
  • ACL (2024, 2023)
  • EMNLP (2024, 2023, 2022)
  • NAACL (2024)
  • COLM (2024)
  • COLING (2020)
  • ACM MM (2024)
  • IJCAI (2024)
  • ICLR Workshop (2021)
