Trending repositories for topic neural-architecture-search
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256K...
A curated list of automated machine learning papers, articles, tutorials, slides and projects
An open-source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning.
This is a list of interesting papers and projects about TinyML.
FedML - The Research and Production Integrated Federated Learning Library: https://fedml.ai
Automated Deep Learning: Neural Architecture Search Is Not the End (a curated list of AutoDL resources and an in-depth analysis)
Differentiable architecture search for convolutional and recurrent networks
Transforming Neural Architecture Search (NAS) into multi-objective optimization problems. A benchmark suite for testing evolutionary algorithms in deep learning.
PyTorch implementation of "Efficient Neural Architecture Search via Parameters Sharing"
Official implementation of NeurIPS'22 paper "Monte Carlo Tree Search based Variable Selection for High-Dimensional Bayesian Optimization"
Generate hierarchical quantum circuits for Neural Architecture Search.
NASLib is a Neural Architecture Search (NAS) library for facilitating NAS research for the community by providing interfaces to several state-of-the-art NAS search spaces and optimizers.
[ISPRS 2024] LoveNAS: Towards Multi-Scene Land-Cover Mapping via Hierarchical Searching Adaptive Network
Official PyTorch implementation of "DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models" (ICLR 2024)
Official repository for PocketNet: Extreme Lightweight Face Recognition Network using Neural Architecture Search and Multi-Step Knowledge Distillation
Official PyTorch Implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight)
LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT
[CVPR 2020] MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning
DeepHyper: Scalable Asynchronous Neural Architecture and Hyperparameter Search for Deep Neural Networks
Neural Pipeline Search (NePS): Helps deep learning experts find the best neural pipeline.
Large scale and asynchronous Hyperparameter and Architecture Optimization at your fingertips.
A Full-Pipeline Automated Time Series (AutoTS) Analysis Toolkit.
A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices.
Evolutionary Neural Architecture Search on Transformers for RUL Prediction
The official implementation of paper PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search
This is a collection of our research on efficient AI, covering hardware-aware NAS and model compression.
A scalable graph learning toolkit for extremely large graph datasets. (WWW'22, 🏆 Best Student Paper Award)
[AAAI '23] PINAT: A Permutation INvariance Augmented Transformer for NAS Predictor
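The core idea behind the "Differentiable architecture search" entry above (DARTS) can be illustrated with a minimal sketch of its continuous relaxation: the discrete choice among candidate operations on an edge is replaced by a softmax-weighted mixture, so the architecture parameters can be learned by gradient descent. The ops and names below are illustrative stand-ins, not code from any listed repository.

```python
import math

def softmax(alphas):
    # Numerically stable softmax over architecture parameters
    m = max(alphas)
    exps = [math.exp(a - m) for a in alphas]
    s = sum(exps)
    return [e / s for e in exps]

# Candidate operations on one edge of the cell (illustrative stand-ins)
ops = [
    lambda x: x,            # skip connection
    lambda x: max(x, 0.0),  # stand-in for a conv-like op (ReLU here)
    lambda x: 0.0,          # "zero" op (drops the edge)
]

def mixed_op(x, alphas):
    # DARTS relaxes the discrete op choice into a softmax-weighted sum,
    # making the output differentiable with respect to the alphas.
    weights = softmax(alphas)
    return sum(w * op(x) for w, op in zip(weights, ops))

# After search, the op with the largest alpha is kept (discretization step)
alphas = [0.1, 2.0, -1.0]
y = mixed_op(1.5, alphas)
best = max(range(len(ops)), key=lambda i: alphas[i])
```

In the full method the alphas are optimized jointly with the network weights in a bilevel scheme; this sketch only shows the relaxation that makes that possible.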