dynet
- Version: 2.1.2
- Category: ai
- Cluster: Loki
Description
DyNet is a neural network library developed by Carnegie Mellon University and many others. It is written in C++ (with bindings in Python) and is designed to be efficient when run on either CPU or GPU, and to work well with networks that have dynamic structures that change for every training instance. For example, these kinds of networks are particularly important in natural language processing tasks, and DyNet has been used to build state-of-the-art systems for syntactic parsing, machine translation, morphological inflection, and many other application areas.
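As a rough sketch of what "dynamic structure" means in practice (the parameter names, dimensions, and input below are illustrative only, not part of this module's documentation), a script can build a fresh computation graph for every instance, so the graph shape follows variable-length input:
import dynet as dy

m = dy.ParameterCollection()
W = m.add_parameters((10, 10))          # illustrative dimensions

def encode(sequence_of_vectors):
    dy.renew_cg()                       # build a new graph for this instance
    h = dy.inputVector([0.0] * 10)
    for v in sequence_of_vectors:       # graph depth follows the input length
        h = dy.tanh(W * dy.inputVector(v) + h)
    return h.value()

print(encode([[0.1] * 10, [0.2] * 10, [0.3] * 10]))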
Documentation
$ python -m dynet.__main__ --help
usage: __main__.py [-h] [--dynet-seed SEED] [--dynet-mem MEM]
                   [--dynet-gpus GPUS] [--dynet-autobatch [LEVEL]]
                   [--dynet-devices DEVICES] [--dynet-viz]
                   [--dynet-weight-decay LAMBDA]
                   [--dynet-profiling] [--dynet-rand-seed SEED]
                   [--dynet-param-cgpu]
                   script.py ...
Common DyNet options:
--dynet-seed SEED Random seed
--dynet-mem MEM Memory to reserve (in MB)
--dynet-gpus GPUS Number of GPUs to use
--dynet-autobatch [LEVEL] Enable dynamic auto-batching (0 = off, 1 = on)
--dynet-devices DEVICES Specific devices to use
--dynet-viz Enable graph visualization
--dynet-weight-decay LAMBDA Apply L2 weight regularization
--dynet-profiling Enable profiling
--dynet-rand-seed SEED Seed for random initialization
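These options are consumed by DyNet itself when the script first imports dynet, so they can simply be appended to the script invocation, e.g. python script.py --dynet-mem 2048 --dynet-autobatch 1. As a minimal sketch (assuming the dynet_config helper shipped with the Python bindings; the values shown are arbitrary examples), the same settings can also be made programmatically before the first import of dynet:
import dynet_config
# Must run before the first "import dynet"; values are arbitrary examples.
dynet_config.set(mem=2048, random_seed=42, autobatch=1)
dynet_config.set_gpu()                  # optional: request GPU use
import dynet as dy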
Examples/Usage
Load the DyNet module:
$ module load ai/dynet-py37-cuda10.2-gcc8/2.1.2
Unload the module:
$ module unload ai/dynet-py37-cuda10.2-gcc8/2.1.2
Sample Python usage:
import dynet as dy

m = dy.ParameterCollection()
W = m.add_parameters((1, 2))
b = m.add_parameters(1)
x = dy.inputVector([0.5, 0.3])
y = dy.tanh(W * x + b)
print(y.value())
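Building on the sample above, a short training-loop sketch; the toy data, trainer choice, and loss function are assumptions for illustration, not part of the module documentation:
import dynet as dy

m = dy.ParameterCollection()
W = m.add_parameters((1, 2))
b = m.add_parameters(1)
trainer = dy.SimpleSGDTrainer(m)        # assumed trainer choice

toy_data = [([0.5, 0.3], 0.2), ([0.1, 0.9], -0.4)]   # illustrative inputs/targets
for x_val, target in toy_data:
    dy.renew_cg()                       # new computation graph per instance
    x = dy.inputVector(x_val)
    y = dy.tanh(W * x + b)
    loss = dy.squared_distance(y, dy.inputVector([target]))
    loss.forward()
    loss.backward()
    trainer.update()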
Installation
Source code is obtained from the DyNet GitHub repository (https://github.com/clab/dynet).
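As a sketch of a typical source build (the Eigen path, backend flag, and parallelism level below are assumptions, not a record of how this module was compiled), the upstream instructions look roughly like:
$ git clone https://github.com/clab/dynet.git
$ cd dynet
$ mkdir build && cd build
$ cmake .. -DEIGEN3_INCLUDE_DIR=/path/to/eigen -DPYTHON=`which python` -DBACKEND=cuda
$ make -j 4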