Statistics for topic neural-architecture-search
RepositoryStats tracks 518,991 GitHub repositories; 93 of these are tagged with the neural-architecture-search topic. The most common primary language for repositories using this topic is Python (74).
Stargazers over time for topic neural-architecture-search
Most starred repositories for topic neural-architecture-search
Trending repositories for topic neural-architecture-search
An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyperparameter tuning.
Official PyTorch implementation of "DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models" (ICLR 2024)
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning
This is a list of interesting papers and projects about TinyML.
An AutoML framework & toolkit for machine learning on graphs.
FedML - The Research and Production Integrated Federated Learning Library: https://fedml.ai
Differentiable architecture search for convolutional and recurrent networks
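The entry above describes differentiable architecture search (DARTS-style): rather than picking a single operation per edge, the search space is relaxed into a softmax-weighted mixture of candidate operations, so architecture choice becomes differentiable. A minimal stdlib-only sketch of that relaxation, with toy 1-D stand-ins for the candidate operations (the op list and alpha values here are illustrative, not the repository's actual API):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of floats.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Candidate operations (toy 1-D stand-ins for skip / conv / pooling).
candidate_ops = [
    lambda x: x,                            # identity (skip connection)
    lambda x: [3 * v for v in x],           # stand-in for a learned transform
    lambda x: [sum(x) / len(x)] * len(x),   # stand-in for pooling
]

def mixed_op(x, alphas):
    """DARTS-style continuous relaxation: output a softmax-weighted sum of
    all candidate ops, so the discrete architecture choice becomes a
    differentiable function of the alphas."""
    weights = softmax(alphas)
    outs = [op(x) for op in candidate_ops]
    return [sum(w * o[i] for w, o in zip(weights, outs)) for i in range(len(x))]

# With strongly peaked alphas the mixture approaches a discrete choice:
print(mixed_op([1.0, 2.0, 3.0], [10.0, 0.0, 0.0]))  # approximately the identity output
```

After search, the operation with the largest alpha on each edge is kept to derive the final discrete architecture.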
An optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models.
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256K...
This is a collection of our research on efficient AI, covering hardware-aware NAS and model compression.
Transforming Neural Architecture Search (NAS) into multi-objective optimization problems. A benchmark suite for testing evolutionary algorithms in deep learning.
A Full-Pipeline Automated Time Series (AutoTS) Analysis Toolkit.
A curated list of automated machine learning papers, articles, tutorials, slides and projects
Generate hierarchical quantum circuits for Neural Architecture Search.
Evolutionary Neural Architecture Search on Transformers for RUL Prediction
Neural Pipeline Search (NePS): Helps deep learning experts find the best neural pipeline.
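Several entries in this list cast NAS as multi-objective optimization (e.g. trading accuracy against latency or memory, as in the evolutionary benchmark suite above). The core concept those tools share is Pareto dominance: an architecture survives only if no other candidate beats it on every objective. A minimal illustrative sketch (the scores are made-up, not taken from any benchmark):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy architectures scored on (validation error, latency in ms) -- illustrative
# numbers only.
archs = [(0.10, 5.0), (0.08, 9.0), (0.12, 4.0), (0.11, 6.0)]
print(pareto_front(archs))  # (0.11, 6.0) is dominated by (0.10, 5.0) and drops out
```

Multi-objective NAS methods then search for architectures on (or near) this front rather than optimizing a single scalar score.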