Trending repositories for topic neural-architecture-search
This is a list of interesting papers and projects about TinyML.
NASLib is a Neural Architecture Search (NAS) library for facilitating NAS research for the community by providing interfaces to several state-of-the-art NAS search spaces and optimizers.
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256K...
Automated Deep Learning: Neural Architecture Search Is Not the End (a curated list of AutoDL resources and an in-depth analysis)
An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
A curated list of automated machine learning papers, articles, tutorials, slides and projects
A curated list of awesome resources combining Transformers with Neural Architecture Search
Differentiable architecture search for convolutional and recurrent networks
Neural Pipeline Search (NePS): Helps deep learning experts find the best neural pipeline.
A scalable graph learning toolkit for extremely large graph datasets. (WWW'22, 🏆 Best Student Paper Award)
[ICLR 2021] "Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective" by Wuyang Chen, Xinyu Gong, Zhangyang Wang
Large scale and asynchronous Hyperparameter and Architecture Optimization at your fingertips.
A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hype...
PyTorch implementation of "Efficient Neural Architecture Search via Parameters Sharing"
FedML - The Research and Production Integrated Federated Learning Library: https://fedml.ai
Transforming Neural Architecture Search (NAS) into multi-objective optimization problems. A benchmark suite for testing evolutionary algorithms in deep learning.
A General Automated Machine Learning framework to simplify the development of End-to-end AutoML toolkits in specific domains.
Official repository for PocketNet: Extreme Lightweight Face Recognition Network using Neural Architecture Search and Multi-Step Knowledge Distillation
A Full-Pipeline Automated Time Series (AutoTS) Analysis Toolkit.
A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices.
Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research.
Official PyTorch implementation of "DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models" (ICLR 2024)
[ISPRS 2024] LoveNAS: Towards Multi-Scene Land-Cover Mapping via Hierarchical Searching Adaptive Network
An autoML framework & toolkit for machine learning on graphs.
An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.
Evolutionary Neural Architecture Search on Transformers for RUL Prediction
This is a collection of our research on efficient AI, covering hardware-aware NAS and model compression.
[AAAI '23] PINAT: A Permutation INvariance Augmented Transformer for NAS Predictor
Official PyTorch Implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight)
LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT
A toolbox for receptive field analysis and visualizing neural network architectures
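The entry "Differentiable architecture search for convolutional and recurrent networks" (DARTS) refers to the idea of relaxing the discrete choice among candidate operations into a softmax-weighted mixture, so the architecture parameters can be optimized by gradient descent. A minimal stdlib-Python sketch of that relaxation — the operation set and names here are illustrative toy choices, not DARTS's actual search space:

```python
import math

# Toy candidate operations for one edge of a search cell (illustrative only).
OPS = {
    "identity": lambda x: x,
    "double":   lambda x: 2.0 * x,
    "negate":   lambda x: -x,
}

def softmax(z):
    """Numerically stable softmax over a list of architecture weights."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def mixed_op(x, alphas):
    """Continuous relaxation: the edge's output is the softmax-weighted
    sum of ALL candidate ops, making the choice differentiable in alphas."""
    w = softmax(alphas)
    return sum(wi * op(x) for wi, op in zip(w, OPS.values()))

def discretize(alphas):
    """After search, keep only the operation with the largest weight."""
    names = list(OPS)
    return names[alphas.index(max(alphas))]
```

With `alphas = [0.1, 2.0, -1.0]`, the mixture output for `x = 1.0` sits close to the "double" op's output, and `discretize` selects `"double"` as the final architecture choice.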