Google Scholar • Updated 2026-03-30
| # | Paper | Citations |
|---|---|---|
| 1 | AutoML: A survey of the state-of-the-art | 2,621 |
| 2 | Virtual homogeneity learning: Defending against data heterogeneity in federated learning | 146 |
| 3 | Benchmarking the performance and energy efficiency of AI accelerators for AI training | 126 |
| 4 | BurstGPT: A real-world workload dataset to optimize LLM serving systems | 120 |
| 5 | Benchmarking deep learning models and automated model design for COVID-19 detection with chest CT scans | 67 |
| 6 | Automated model design and benchmarking of deep learning models for COVID-19 detection with chest CT scans | 57 |
| 7 | FusionAI: Decentralized training and deploying LLMs with massive consumer-level GPUs | 46 |
| 8 | EAGAN: Efficient two-stage evolutionary architecture search for GANs | 38 |
| 9 | ExpertFlow: Efficient mixture-of-experts inference via predictive expert caching and token scheduling | 29 |
| 10 | Evolutionary multi-objective architecture search framework: Application to COVID-19 3D CT classification | 27 |
| 11 | Computer-aided clinical skin disease diagnosis using CNN and object detection models | 24 |
| 12 | NAS-LID: Efficient neural architecture search with local intrinsic dimension | 22 |
| 13 | Fault-tolerant hybrid-parallel training at scale with reliable and efficient in-memory checkpointing | 18 |
| 14 | FusionLLM: A decentralized LLM training system on geo-distributed GPUs with adaptive compression | 13 |
| 15 | MedPipe: End-to-end joint search of data augmentation policy and neural architecture for 3D medical image classification | 8 |
| 16 | Lang-PINN: From language to physics-informed neural networks via a multi-agent framework | 5 |
| 17 | Ghost in the cloud: Your geo-distributed large language models training is easily manipulated | 4 |
| 18 | AutoHete: An automatic and efficient heterogeneous training system for LLMs | 3 |
| 19 | RouteMark: A fingerprint for intellectual property attribution in routing-based model merging | 2 |
| 20 | GM-Skip: Metric-guided transformer block skipping for efficient vision-language models | 1 |