Trending repositories for topic neural-architecture-search
A curated list of automated machine learning papers, articles, tutorials, slides and projects
Differentiable architecture search for convolutional and recurrent networks
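The entry above describes differentiable architecture search (DARTS). Its core trick, relaxing the discrete choice of operation on each edge into a softmax-weighted mixture, can be sketched in a few lines. This is a toy scalar illustration under stated assumptions; names such as `candidate_ops` and `alpha` are illustrative, not the repository's actual API.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy stand-ins for the real candidate ops (conv, skip, pool, ...).
candidate_ops = [
    lambda x: x,          # identity / skip connection
    lambda x: 2.0 * x,    # stand-in for a learned transform
    lambda x: 0.0,        # "zero" op, which effectively prunes the edge
]

def mixed_op(x, alpha):
    """DARTS-style continuous relaxation: softmax-weighted mixture of ops."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, candidate_ops))

# The architecture parameters alpha are trained by gradient descent;
# after search, each edge is discretized to the op with the largest weight.
alpha = [0.1, 1.5, -2.0]
chosen = max(range(len(alpha)), key=lambda i: alpha[i])
print(mixed_op(1.0, alpha), "-> chosen op index:", chosen)
```

Because the mixture is differentiable in both the weights and `alpha`, the architecture can be optimized jointly with the network parameters instead of via discrete search.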
A scalable graph learning toolkit for extremely large graph datasets. (WWW'22, 🏆 Best Student Paper Award)
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256K...
FedML - The Research and Production Integrated Federated Learning Library: https://fedml.ai
An open-source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyperparameter tuning.
Automated Deep Learning: Neural Architecture Search Is Not the End (a curated list of AutoDL resources and an in-depth analysis)
Transforming Neural Architecture Search (NAS) into multi-objective optimization problems. A benchmark suite for testing evolutionary algorithms in deep learning.
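The entry above frames NAS as a multi-objective optimization problem (e.g., trading off error against latency), where the goal is a Pareto front of non-dominated architectures. A minimal sketch of that selection step, with hypothetical `(error, latency_ms)` pairs rather than any benchmark's real data:

```python
def dominates(a, b):
    """True if candidate a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (error, latency_ms) results for four searched architectures.
archs = [(0.10, 50.0), (0.08, 80.0), (0.12, 40.0), (0.11, 90.0)]
print(pareto_front(archs))  # (0.11, 90.0) is dominated and dropped
```

An evolutionary NAS algorithm would repeatedly sample architectures, evaluate both objectives, and keep only the front; the dominated point here is removed because another architecture is at least as good on every objective and strictly better on one.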
Slimmable Networks (ICLR 2019, ICCV 2019), AutoSlim, and beyond
Fast & Simple Resource-Constrained Learning of Deep Network Structure
PyTorch implementation of "Efficient Neural Architecture Search via Parameters Sharing"
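The ENAS entry above relies on parameter sharing: every sampled child architecture indexes into one shared pool of weights instead of training from scratch, which is what makes the search cheap. A toy sketch of that idea, with illustrative names (`shared_weights`, `sample_architecture`) that do not reflect the repository's actual API:

```python
import math
import random

# One shared parameter pool: every (node, op) pair has a single weight
# that is reused by every child architecture selecting that op.
shared_weights = {("node", i, op): random.gauss(0.0, 0.1)
                  for i in range(4) for op in ("tanh", "relu")}

def sample_architecture(num_nodes=4):
    """Stand-in for the RNN controller: pick one op per node at random."""
    return [random.choice(("tanh", "relu")) for _ in range(num_nodes)]

def forward(x, arch):
    """Run a scalar through the sampled child; weights come from the pool."""
    for i, op in enumerate(arch):
        w = shared_weights[("node", i, op)]  # reused across all children
        x = math.tanh(w * x) if op == "tanh" else max(0.0, w * x)
    return x
```

Because every child reads from `shared_weights`, gradient updates made while training one sampled architecture immediately benefit all others that share those (node, op) pairs.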
This is a list of interesting papers and projects about TinyML.
A Full-Pipeline Automated Time Series (AutoTS) Analysis Toolkit.
A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices.
An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.
Evolutionary Neural Architecture Search on Transformers for RUL Prediction
Official PyTorch implementation of "DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models" (ICLR 2024)
Neural Pipeline Search (NePS): Helps deep learning experts find the best neural pipeline.
[MICCAI 2021] BiX-NAS: Searching Efficient Bi-directional Architecture for Medical Image Segmentation
Unified Architecture Search with Convolution, Transformer, and MLP (ECCV 2022)
This is a collection of our research on efficient AI, covering hardware-aware NAS and model compression.
⚡️ [AAAI'20][ICML'19 AutoML] InstaNAS: Instance-aware Neural Architecture Search
DeepHyper: Scalable Asynchronous Neural Architecture and Hyperparameter Search for Deep Neural Networks
An autoML framework & toolkit for machine learning on graphs.
NASLib is a Neural Architecture Search (NAS) library for facilitating NAS research for the community by providing interfaces to several state-of-the-art NAS search spaces and optimizers.
Large scale and asynchronous Hyperparameter and Architecture Optimization at your fingertips.
A curated list of awesome resources combining Transformers with Neural Architecture Search
Generate hierarchical quantum circuits for Neural Architecture Search.
A paper collection about automated graph learning
Rapid experimentation and scaling of deep learning models on molecular and crystal graphs.
Official repository for PocketNet: Extreme Lightweight Face Recognition Network using Neural Architecture Search and Multi-Step Knowledge Distillation
LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT