Haoran Xu
Latest
Upsample or Upweight? Balanced Training on Heavily Imbalanced Datasets
X-ALMA: Plug & Play Modules and Adaptive Rejection for Quality Translation at Scale
Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation
Streaming Sequence Transduction through Dynamic Compression
A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models
Error Norm Truncation: Robust Training in the Presence of Data Noise for Text Generation Models
Condensing Multilingual Knowledge with Lightweight Language-Specific Modules
Efficiently Harnessing Parameter Importance for Better Training
Towards Being Parameter-Efficient: A Stratified Sparsely Activated Transformer with Dynamic Capacity
Language-Aware Multilingual Machine Translation with Self-Supervised Learning
The Importance of Being Parameters: An Intra-Distillation Method for Serious Gains
Por Qué Não Utiliser Alla Språk? Mixed Training with Gradient Optimization in Few-Shot Cross-Lingual Transfer
BERT, mBERT, BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation
Everything Is All It Takes: A Multipronged Strategy for Zero-Shot Cross-Lingual Information Extraction
VAE based Text Style Transfer with Pivot Words Enhancement Learning
Gradual Fine-Tuning for Low-Resource Domain Adaptation
Zero-Shot Cross-Lingual Dependency Parsing through Contextual Embedding Transformation
Cross-Lingual BERT Contextual Embedding Space Mapping with Isotropic and Isometric Conditions
Efficient Quadratic Programming for Peak-to-Average Power Ratio Reduction in Communication Systems