| Name | Version | Votes | Popularity | Description | Maintainer | Last updated (UTC) |
|---|---|---|---|---|---|---|
| caffe-cmake-git | 1.0.r134.g04ab089db-1 | 0 | 0.00 | A deep learning framework made with expression, speed, and modularity in mind. Uses CMake to build, giving great flexibility. | orphan | 2023-06-17 13:11 |
| ludwig-example | 1.0-1 | 1 | 0.00 | Example for training deep learning models with Ludwig | orphan | 2019-03-18 15:16 |
| mxnet | 1.7.0-2 | 12 | 0.00 | Flexible and Efficient Library for Deep Learning | orphan | 2020-12-12 11:17 |
| mxnet-cuda | 1.7.0-2 | 12 | 0.00 | Flexible and Efficient Library for Deep Learning (with CUDA) | orphan | 2020-12-12 11:17 |
| mxnet-cuda-git | 2.0.0.r11770.7d84b59845-1 | 0 | 0.00 | A flexible and efficient library for deep learning (with CUDA) | orphan | 2022-01-19 06:02 |
| mxnet-git | 2.0.0.r11770.7d84b59845-1 | 0 | 0.00 | A flexible and efficient library for deep learning | orphan | 2022-01-19 06:02 |
| mxnet-mkl | 1.7.0-2 | 12 | 0.00 | Flexible and Efficient Library for Deep Learning (with MKL) | orphan | 2020-12-12 11:17 |
| mxnet-mkl-cuda | 1.7.0-2 | 12 | 0.00 | Flexible and Efficient Library for Deep Learning (with MKL and CUDA) | orphan | 2020-12-12 11:17 |
| python-deepspeed | 0.12.4-1 | 0 | 0.00 | DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. | orphan | 2023-12-05 13:25 |
| python-gluoncv | 0.10.0-1 | 1 | 0.00 | A Deep Learning Toolkit for Computer Vision | orphan | 2021-03-09 09:31 |
| python-jittor | 1.3.4.1-2 | 0 | 0.00 | Just-in-time deep learning framework | orphan | 2022-05-12 01:56 |
| python-keras-git | 2.9.0rc0.r372.g83852348f-1 | 1 | 0.00 | A Python deep learning API, running on TensorFlow (git version) | orphan | 2022-06-19 01:04 |
| python-ludwig | 0.4-1 | 3 | 0.00 | Data-centric declarative deep learning framework | orphan | 2022-01-14 18:44 |
| python-paddlepaddle | 2.2.2-1 | 0 | 0.00 | PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice | orphan | 2022-01-21 04:41 |
| python-paddlepaddle-cuda | 2.2.2-1 | 0 | 0.00 | PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (with CUDA) | orphan | 2022-01-21 04:41 |
| python-paddlepaddle-cuda-git | 2.2.1.r33234.8da9eff4e49-2 | 0 | 0.00 | PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (with CUDA) | orphan | 2021-12-24 03:49 |
| python-paddlepaddle-git | 2.2.1.r33234.8da9eff4e49-2 | 0 | 0.00 | PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice | orphan | 2021-12-24 03:49 |
| python-labml | 0.4.168-1 | 0 | 0.00 | Monitor deep learning model training and hardware usage from mobile. | adrien1018 | 2023-09-21 05:33 |
| python-webdataset | 0.2.86-1 | 0 | 0.00 | Record sequential storage for deep learning. | alhirzel | 2023-12-27 18:45 |
| r-deeppincs | 1.12.0-1 | 0 | 0.00 | Protein Interactions and Networks with Compounds based on Sequences using Deep Learning | BioArchLinuxBot | 2024-05-01 23:59 |
| python-magika | 0.5.1-1 | 3 | 0.62 | Detect file content types with deep learning | bjin | 2024-03-24 06:39 |
| paddlepaddle-bin | 2.6.1-1 | 1 | 0.00 | Parallel Distributed Deep Learning | carlosal1015 | 2024-03-20 22:33 |
| visualdl | 2.5.0-1 | 1 | 0.00 | Deep Learning Visualization Toolkit | carlosal1015 | 2023-01-18 13:47 |
| onednn-git | 3.1_rc.r510.gde4b31752-1 | 0 | 0.00 | An open-source performance library for deep learning applications | Chocobo1 | 2023-04-10 14:59 |
| caffe | 1.0-18 | 18 | 0.00 | A deep learning framework made with expression, speed, and modularity in mind (cpu only) | dbermond | 2022-03-01 14:24 |
| caffe-cuda | 1.0-12 | 2 | 0.00 | A deep learning framework made with expression, speed, and modularity in mind (with cuda support) | dbermond | 2022-03-01 14:46 |
| caffe-cuda-doc | 1.0-12 | 2 | 0.00 | A deep learning framework made with expression, speed, and modularity in mind (with cuda support, documentation) | dbermond | 2022-03-01 14:46 |
| caffe-cuda-doc-git | 1.0.r136.g9b8915401-3 | 0 | 0.00 | A deep learning framework made with expression, speed, and modularity in mind (with cuda support, documentation, git version) | dbermond | 2022-03-01 14:46 |
| caffe-cuda-git | 1.0.r136.g9b8915401-3 | 0 | 0.00 | A deep learning framework made with expression, speed, and modularity in mind (with cuda support, git version) | dbermond | 2022-03-01 14:46 |
| caffe-doc | 1.0-18 | 18 | 0.00 | A deep learning framework made with expression, speed, and modularity in mind (cpu only, documentation) | dbermond | 2022-03-01 14:24 |
| caffe-doc-git | 1.0.r136.g9b8915401-1 | 23 | 0.00 | A deep learning framework made with expression, speed, and modularity in mind (cpu only, documentation, git version) | dbermond | 2022-03-01 14:46 |
| caffe-git | 1.0.r136.g9b8915401-1 | 23 | 0.00 | A deep learning framework made with expression, speed, and modularity in mind (cpu only, git version) | dbermond | 2022-03-01 14:46 |
| openvino | 2024.1.0-3 | 10 | 1.15 | A toolkit for developing artificial intelligence and deep learning applications | dbermond | 2024-05-04 15:56 |
| openvino-git | 2024.1.0.r264.gc6c94bdca19-1 | 0 | 0.00 | A toolkit for developing artificial intelligence and deep learning applications (git version) | dbermond | 2024-05-04 15:56 |
| python-tensorrt | 10.0.1.6-1 | 15 | 0.13 | A platform for high-performance deep learning inference on NVIDIA hardware (python bindings and tools) | dbermond | 2024-04-30 19:32 |
| tensorrt | 10.0.1.6-1 | 15 | 0.13 | A platform for high-performance deep learning inference on NVIDIA hardware | dbermond | 2024-04-30 19:32 |
| tiny-dnn | 1.0.0a3-4 | 2 | 0.00 | A C++11 implementation of deep learning for limited computational resource, embedded systems and IoT devices | dbermond | 2018-12-29 18:02 |
| tiny-dnn-git | 1.0.0a3.r246.gc0f576f5-1 | 0 | 0.00 | A C++11 implementation of deep learning for limited computational resource, embedded systems and IoT devices (git version) | dbermond | 2018-12-29 18:02 |
| tvm | 0.15.dev0.57.g108377452e-1 | 0 | 0.00 | Apache TVM, a deep learning compiler that enables access to high-performance machine learning anywhere for everyone | entshuld | 2023-11-11 23:22 |
| nemesyst-git | 2.0.6.r6.68bebbc-1 | 1 | 0.00 | Practical, distributed, hybrid-parallelism, deep learning framework. | GeorgeRaven | 2020-04-30 10:58 |
| python-decord | 0.6.0-7 | 0 | 0.00 | An efficient video loader for deep learning with smart shuffling that's super easy to digest | hottea | 2024-04-27 11:26 |
| python-decord-cuda | 0.6.0-7 | 0 | 0.00 | An efficient video loader for deep learning with smart shuffling that's super easy to digest (with CUDA) | hottea | 2024-04-27 11:26 |
| python-einops | 0.8.0-1 | 2 | 0.96 | Deep learning operations reinvented (for pytorch, tensorflow, jax and others) | hottea | 2024-04-28 04:28 |
| python-imantics | 0.1.12-4 | 0 | 0.00 | Reactive python package for managing, creating and visualizing different deep-learning image annotation formats | hottea | 2023-05-03 20:20 |
| python-mmengine | 0.10.4-1 | 0 | 0.00 | OpenMMLab Foundational Library for Training Deep Learning Models | hottea | 2024-04-23 05:17 |
| python-nvidia-dali | 1.37.0-1 | 0 | 0.00 | A library containing both highly optimized building blocks and an execution engine for data pre-processing in deep learning applications | hottea | 2024-04-29 22:19 |
| python-pyretri-git | 0.1.0.r83.c559bb8-1 | 0 | 0.00 | Open source deep learning based unsupervised image retrieval toolbox built on PyTorch | hottea | 2020-06-04 02:49 |
| python-torchio | 0.19.6-1 | 0 | 0.00 | Tools for medical image processing in deep learning and PyTorch | hottea | 2024-02-20 05:22 |
| fastdeploy-git | r1349.d74e1209-1 | 0 | 0.00 | An Easy-to-use and Fast Deep Learning Model Deployment Toolkit for Cloud, Mobile and Edge. | Kevin_Liu | 2022-12-04 05:30 |
| python-kerasplotlib | 0.1.6-1 | 0 | 0.00 | Kerasplotlib provides a useful interface for Keras users that meets many common visualization needs related to training and evaluating deep learning models. | kryptato | 2021-01-27 23:27 |