23.8. d2l API Documentation

This section lists the classes and functions of the d2l package in alphabetical order, together with the sections of the book where they are defined, so that you can find more detailed implementations and explanations. See also the source code in the GitHub repository.

23.8.1. Classes

class d2l.torch.AdditiveAttention(num_hiddens, dropout, **kwargs)[source]

Bases: Module

Additive attention.

Defined in Section 11.3.2.2

forward(queries, keys, values, valid_lens)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

class d2l.torch.AddNorm(norm_shape, dropout)[source]

Bases: Module

The residual connection followed by layer normalization.

Defined in Section 11.7.2

forward(X, Y)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

class d2l.torch.AttentionDecoder[source]

Bases: Decoder

The base attention-based decoder interface.

Defined in Section 11.4

property attention_weights
class d2l.torch.Classifier(plot_train_per_epoch=2, plot_valid_per_epoch=1)[source]

Bases: Module

The base class of classification models.

Defined in Section 4.3

accuracy(Y_hat, Y, averaged=True)[source]

Compute the number of correct predictions.

Defined in Section 4.3
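The idea behind accuracy can be sketched in plain Python, with nested lists standing in for tensors. This is a hypothetical stand-in, not the d2l implementation (the real method operates on tensors, reshapes Y_hat, and honors the averaged flag); it only illustrates the argmax-and-compare recipe:

```python
def accuracy(y_hat, y):
    """Fraction of rows in y_hat whose argmax equals the label in y."""
    # Predicted class of each row: the index of its largest score.
    preds = [max(range(len(row)), key=row.__getitem__) for row in y_hat]
    correct = sum(int(p == t) for p, t in zip(preds, y))
    return correct / len(y)
```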

layer_summary(X_shape)[source]

Defined in Section 7.6

loss(Y_hat, Y, averaged=True)[source]

Defined in Section 4.5

validation_step(batch)[source]
class d2l.torch.DataModule(root='../data', num_workers=4)[source]

Bases: HyperParameters

The base class of data.

Defined in Section 3.2.2

get_dataloader(train)[source]
get_tensorloader(tensors, train, indices=slice(0, None, None))[source]

Defined in Section 3.3

train_dataloader()[source]
val_dataloader()[source]
class d2l.torch.Decoder[source]

Bases: Module

The base decoder interface for the encoder–decoder architecture.

Defined in Section 10.6

forward(X, state)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

init_state(enc_all_outputs, *args)[source]
class d2l.torch.DotProductAttention(dropout)[source]

Bases: Module

Scaled dot product attention.

Defined in Section 11.3.2.2

forward(queries, keys, values, valid_lens=None)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.
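The scoring rule behind scaled dot-product attention, softmax(QKᵀ/√d)V, can be sketched in plain Python. This is a hypothetical stand-in for the tensor version (the real class also applies dropout and optional masking via valid_lens):

```python
import math

def scaled_dot_product_attention(queries, keys, values):
    """Scaled dot-product attention over lists of d-dimensional vectors,
    without masking or dropout."""
    d = len(queries[0])
    out = []
    for q in queries:
        # Attention scores: dot products scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Numerically stable softmax over the scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Output: weighted sum of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```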

class d2l.torch.Encoder[source]

Bases: Module

The base encoder interface for the encoder–decoder architecture.

Defined in Section 10.6

forward(X, *args)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

class d2l.torch.EncoderDecoder(encoder, decoder)[source]

Bases: Classifier

The base class for the encoder–decoder architecture.

Defined in Section 10.6

forward(enc_X, dec_X, *args)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

predict_step(batch, device, num_steps, save_attention_weights=False)[source]

Defined in Section 10.7.6

class d2l.torch.FashionMNIST(batch_size=64, resize=(28, 28))[source]

Bases: DataModule

The Fashion-MNIST dataset.

Defined in Section 4.2

get_dataloader(train)[source]

Defined in Section 4.2

text_labels(indices)[source]

Return text labels.

Defined in Section 4.2

visualize(batch, nrows=1, ncols=8, labels=[])[source]

Defined in Section 4.2

class d2l.torch.GRU(num_inputs, num_hiddens, num_layers, dropout=0)[source]

Bases: RNN

The multilayer GRU model.

Defined in Section 10.3

class d2l.torch.HyperParameters[source]

Bases: object

The base class of hyperparameters.

save_hyperparameters(ignore=[])[source]

Save function arguments into class attributes.

Defined in Section 23.7

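One way to implement this behavior is with the standard inspect module, reading the caller's frame (here, the subclass's __init__). The sketch below, with a hypothetical Model subclass, illustrates the idea; see Section 23.7 for the version actually used in d2l:

```python
import inspect

class HyperParameters:
    """Sketch: copy the caller's constructor arguments into attributes."""
    def save_hyperparameters(self, ignore=[]):
        # The caller's frame is the __init__ that invoked this method.
        frame = inspect.currentframe().f_back
        _, _, _, local_vars = inspect.getargvalues(frame)
        self.hparams = {k: v for k, v in local_vars.items()
                        if k not in set(ignore + ['self'])
                        and not k.startswith('_')}
        for k, v in self.hparams.items():
            setattr(self, k, v)

class Model(HyperParameters):  # hypothetical subclass for illustration
    def __init__(self, lr, batch_size):
        self.save_hyperparameters()
```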

class d2l.torch.LeNet(lr=0.1, num_classes=10)[source]

Bases: Classifier

The LeNet-5 model.

Defined in Section 7.6

class d2l.torch.LinearRegression(lr)[source]

Bases: Module

The linear regression model implemented with high-level APIs.

Defined in Section 3.5

configure_optimizers()[source]

Defined in Section 3.5

forward(X)[source]

Defined in Section 3.5

get_w_b()[source]

Defined in Section 3.5

loss(y_hat, y)[source]

Defined in Section 3.5

class d2l.torch.LinearRegressionScratch(num_inputs, lr, sigma=0.01)[source]

Bases: Module

The linear regression model implemented from scratch.

Defined in Section 3.4

configure_optimizers()[source]

Defined in Section 3.4

forward(X)[source]

Defined in Section 3.4

loss(y_hat, y)[source]

Defined in Section 3.4

class d2l.torch.Module(plot_train_per_epoch=2, plot_valid_per_epoch=1)[source]

Bases: Module, HyperParameters

The base class of models.

Defined in Section 3.2

apply_init(inputs, init=None)[source]

Defined in Section 6.4

configure_optimizers()[source]

Defined in Section 4.3

forward(X)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

loss(y_hat, y)[source]
plot(key, value, train)[source]

Plot a point in animation.

training_step(batch)[source]
validation_step(batch)[source]
class d2l.torch.MTFraEng(batch_size, num_steps=9, num_train=512, num_val=128)[source]

Bases: DataModule

The English-French dataset.

Defined in Section 10.5

build(src_sentences, tgt_sentences)[source]

Defined in Section 10.5.3

get_dataloader(train)[source]

Defined in Section 10.5.3

class d2l.torch.MultiHeadAttention(num_hiddens, num_heads, dropout, bias=False, **kwargs)[source]

Bases: Module

Multi-head attention.

Defined in Section 11.5

forward(queries, keys, values, valid_lens)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

transpose_output(X)[source]

Reverse the operation of transpose_qkv.

Defined in Section 11.5

transpose_qkv(X)[source]

Transposition for parallel computation of multiple attention heads.

Defined in Section 11.5

class d2l.torch.PositionalEncoding(num_hiddens, dropout, max_len=1000)[source]

Bases: Module

Positional encoding.

Defined in Section 11.6

forward(X)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.
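The class is based on the fixed sinusoidal scheme of Section 11.6: even columns hold sin(i / 10000^(2j/d)), odd columns the matching cosine. A plain-Python sketch of the encoding matrix (lists instead of tensors, no dropout, hypothetical helper name):

```python
import math

def positional_encoding(max_len, num_hiddens):
    """P[i][j]   = sin(i / 10000^(j/num_hiddens)) for even j,
       P[i][j+1] = cos of the same angle."""
    P = [[0.0] * num_hiddens for _ in range(max_len)]
    for i in range(max_len):
        for j in range(0, num_hiddens, 2):
            angle = i / (10000 ** (j / num_hiddens))
            P[i][j] = math.sin(angle)
            if j + 1 < num_hiddens:
                P[i][j + 1] = math.cos(angle)
    return P
```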

class d2l.torch.PositionWiseFFN(ffn_num_hiddens, ffn_num_outputs)[source]

Bases: Module

The positionwise feed-forward network.

Defined in Section 11.7

forward(X)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

class d2l.torch.ProgressBoard(xlabel=None, ylabel=None, xlim=None, ylim=None, xscale='linear', yscale='linear', ls=['-', '--', '-.', ':'], colors=['C0', 'C1', 'C2', 'C3'], fig=None, axes=None, figsize=(3.5, 2.5), display=True)[source]

Bases: HyperParameters

The board that plots data points in animation.

Defined in Section 3.2

draw(x, y, label, every_n=1)[source]

Defined in Section 23.7

class d2l.torch.Residual(num_channels, use_1x1conv=False, strides=1)[source]

Bases: Module

The Residual block of ResNet models.

Defined in Section 8.6

forward(X)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

class d2l.torch.ResNeXtBlock(num_channels, groups, bot_mul, use_1x1conv=False, strides=1)[source]

Bases: Module

The ResNeXt block.

Defined in Section 8.6.2

forward(X)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

class d2l.torch.RNN(num_inputs, num_hiddens)[source]

Bases: Module

The RNN model implemented with high-level APIs.

Defined in Section 9.6

forward(inputs, H=None)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

class d2l.torch.RNNLM(rnn, vocab_size, lr=0.01)[source]

Bases: RNNLMScratch

The RNN-based language model implemented with high-level APIs.

Defined in Section 9.6

init_params()[source]
output_layer(hiddens)[source]

Defined in Section 9.5

class d2l.torch.RNNLMScratch(rnn, vocab_size, lr=0.01)[source]

Bases: Classifier

The RNN-based language model implemented from scratch.

Defined in Section 9.5

forward(X, state=None)[source]

Defined in Section 9.5

init_params()[source]
one_hot(X)[source]

Defined in Section 9.5

output_layer(rnn_outputs)[source]

Defined in Section 9.5

predict(prefix, num_preds, vocab, device=None)[source]

Defined in Section 9.5

training_step(batch)[source]
validation_step(batch)[source]
class d2l.torch.RNNScratch(num_inputs, num_hiddens, sigma=0.01)[source]

Bases: Module

The RNN model implemented from scratch.

Defined in Section 9.5

forward(inputs, state=None)[source]

Defined in Section 9.5

class d2l.torch.Seq2Seq(encoder, decoder, tgt_pad, lr)[source]

Bases: EncoderDecoder

The RNN encoder–decoder for sequence-to-sequence learning.

Defined in Section 10.7.3

configure_optimizers()[source]

Defined in Section 4.3

validation_step(batch)[source]
class d2l.torch.Seq2SeqEncoder(vocab_size, embed_size, num_hiddens, num_layers, dropout=0)[source]

Bases: Encoder

The RNN encoder for sequence-to-sequence learning.

Defined in Section 10.7

forward(X, *args)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

class d2l.torch.SGD(params, lr)[source]

Bases: HyperParameters

Minibatch stochastic gradient descent.

Defined in Section 3.4

step()[source]
zero_grad()[source]
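The update rule is plain gradient descent: each parameter moves against its gradient, scaled by the learning rate. A minimal sketch with dicts standing in for tensors (hypothetical; the real class updates torch parameters in place using their .grad fields):

```python
class SGD:
    """Minibatch SGD over parameters stored as {'value': ..., 'grad': ...}."""
    def __init__(self, params, lr):
        self.params, self.lr = params, lr

    def step(self):
        for p in self.params:
            p['value'] -= self.lr * p['grad']  # w <- w - lr * grad

    def zero_grad(self):
        for p in self.params:
            p['grad'] = 0.0  # reset gradients before the next backward pass
```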
class d2l.torch.SoftmaxRegression(num_outputs, lr)[source]

Bases: Classifier

The softmax regression model.

Defined in Section 4.5

forward(X)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

class d2l.torch.SyntheticRegressionData(w, b, noise=0.01, num_train=1000, num_val=1000, batch_size=32)[source]

Bases: DataModule

Synthetic data for linear regression.

Defined in Section 3.3

get_dataloader(train)[source]

Defined in Section 3.3

class d2l.torch.TimeMachine(batch_size, num_steps, num_train=10000, num_val=5000)[source]

Bases: DataModule

The Time Machine dataset.

Defined in Section 9.2

build(raw_text, vocab=None)[source]

Defined in Section 9.2

get_dataloader(train)[source]

Defined in Section 9.3.3

class d2l.torch.Trainer(max_epochs, num_gpus=0, gradient_clip_val=0)[source]

Bases: HyperParameters

The base class for training models with data.

Defined in Section 3.2.2

clip_gradients(grad_clip_val, model)[source]

Defined in Section 9.5

fit(model, data)[source]
fit_epoch()[source]

Defined in Section 3.4

prepare_batch(batch)[source]

Defined in Section 6.7

prepare_data(data)[source]
prepare_model(model)[source]

Defined in Section 6.7

class d2l.torch.TransformerEncoder(vocab_size, num_hiddens, ffn_num_hiddens, num_heads, num_blks, dropout, use_bias=False)[source]

Bases: Encoder

The Transformer encoder.

Defined in Section 11.7.4

forward(X, valid_lens)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

class d2l.torch.TransformerEncoderBlock(num_hiddens, ffn_num_hiddens, num_heads, dropout, use_bias=False)[source]

Bases: Module

The Transformer encoder block.

Defined in Section 11.7.2

forward(X, valid_lens)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this function, since the former takes care of running the registered hooks while the latter silently ignores them.

class d2l.torch.Vocab(tokens=[], min_freq=0, reserved_tokens=[])[source]

Bases: object

Vocabulary for text.

to_tokens(indices)[source]
property unk
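The core of a text vocabulary is a bidirectional token/index mapping with an <unk> token for out-of-vocabulary words. A minimal self-contained sketch of that idea (not the full d2l class, which also handles nested token lists and frequency sorting):

```python
import collections

class Vocab:
    """Minimal vocabulary: tokens <-> indices, with <unk> at index 0."""
    def __init__(self, tokens=[], min_freq=0, reserved_tokens=[]):
        counter = collections.Counter(tokens)
        # Index 0 is <unk>; then reserved tokens; then frequent tokens.
        self.idx_to_token = ['<unk>'] + reserved_tokens + sorted(
            tok for tok, freq in counter.items()
            if freq >= min_freq and tok not in reserved_tokens)
        self.token_to_idx = {tok: i for i, tok in enumerate(self.idx_to_token)}

    def __len__(self):
        return len(self.idx_to_token)

    def __getitem__(self, token):
        # Unknown tokens map to the <unk> index.
        return self.token_to_idx.get(token, self.unk)

    def to_tokens(self, indices):
        return [self.idx_to_token[i] for i in indices]

    @property
    def unk(self):
        return 0
```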

23.8.2. Functions

d2l.torch.add_to_class(Class)[source]

Register functions as methods in a created class.

Defined in Section 3.2
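The mechanism can be sketched as a decorator factory that attaches the decorated function to an existing class via setattr, so methods can be defined after the class itself (as the book does throughout). The class A and method do below are hypothetical examples:

```python
def add_to_class(Class):
    """Register the decorated function as a method of Class."""
    def wrapper(obj):
        setattr(Class, obj.__name__, obj)  # attach to the existing class
    return wrapper

class A:  # hypothetical class for illustration
    def __init__(self):
        self.b = 1

@add_to_class(A)
def do(self):  # becomes A.do, callable on any A instance
    return self.b + 1
```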

d2l.torch.bleu(pred_seq, label_seq, k)[source]

Compute the BLEU score.

Defined in Section 10.7.6
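The definition in Section 10.7.6 combines a brevity penalty, exp(min(0, 1 − len_label/len_pred)), with n-gram precisions weighted as p_n^(1/2^n). A plain-Python sketch following that formula, on whitespace-tokenized strings:

```python
import collections
import math

def bleu(pred_seq, label_seq, k):
    """BLEU score with n-grams up to length k."""
    pred_tokens, label_tokens = pred_seq.split(' '), label_seq.split(' ')
    len_pred, len_label = len(pred_tokens), len(label_tokens)
    # Brevity penalty: punish predictions shorter than the label.
    score = math.exp(min(0, 1 - len_label / len_pred))
    for n in range(1, min(k, len_pred) + 1):
        num_matches, label_subs = 0, collections.defaultdict(int)
        for i in range(len_label - n + 1):
            label_subs[' '.join(label_tokens[i:i + n])] += 1
        for i in range(len_pred - n + 1):
            # Clipped matching: each label n-gram can be used only once.
            if label_subs[' '.join(pred_tokens[i:i + n])] > 0:
                num_matches += 1
                label_subs[' '.join(pred_tokens[i:i + n])] -= 1
        # n-gram precision, with exponent 1 / 2^n.
        score *= (num_matches / (len_pred - n + 1)) ** (0.5 ** n)
    return score
```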

d2l.torch.check_len(a, n)[source]

Check the length of a list.

Defined in Section 9.5

d2l.torch.check_shape(a, shape)[source]

Check the shape of a tensor.

Defined in Section 9.5

d2l.torch.corr2d(X, K)[source]

Compute 2D cross-correlation.

Defined in Section 7.2
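Cross-correlation slides the kernel K over the input X and, at each position, sums the elementwise products. A plain-Python sketch on nested lists (the real function operates on tensors):

```python
def corr2d(X, K):
    """2D cross-correlation of input X with kernel K (nested lists)."""
    h, w = len(K), len(K[0])
    out_h, out_w = len(X) - h + 1, len(X[0]) - w + 1
    Y = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Sum of elementwise products over the kernel window.
            Y[i][j] = sum(X[i + a][j + b] * K[a][b]
                          for a in range(h) for b in range(w))
    return Y
```

With the example input and kernel from Section 7.2 this reproduces the book's result: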

d2l.torch.cpu()[source]

Get the CPU device.

Defined in Section 6.7

d2l.torch.gpu(i=0)[source]

Get a GPU device.

Defined in Section 6.7

d2l.torch.init_cnn(module)[source]

Initialize weights for CNNs.

Defined in Section 7.6

d2l.torch.init_seq2seq(module)[source]

Initialize weights for sequence-to-sequence learning.

Defined in Section 10.7

d2l.torch.masked_softmax(X, valid_lens)[source]

Perform the softmax operation by masking elements on the last axis.

Defined in Section 11.3
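The effect is that positions at or beyond each row's valid length receive zero attention weight, while the remaining scores are renormalized. A plain-Python sketch on lists of score rows (the real function masks tensors by filling invalid positions with a large negative value before softmax):

```python
import math

def masked_softmax(X, valid_lens):
    """Softmax over each row of X, keeping only the first valid_lens[i]
    entries; masked-out positions get weight 0."""
    out = []
    for row, n in zip(X, valid_lens):
        kept = row[:n]
        # Numerically stable softmax over the valid prefix.
        m = max(kept)
        exps = [math.exp(s - m) for s in kept]
        total = sum(exps)
        out.append([e / total for e in exps] + [0.0] * (len(row) - n))
    return out
```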

d2l.torch.num_gpus()[source]

Get the number of available GPUs.

Defined in Section 6.7

d2l.torch.plot(X, Y=None, xlabel=None, ylabel=None, legend=[], xlim=None, ylim=None, xscale='linear', yscale='linear', fmts=('-', 'm--', 'g-.', 'r:'), figsize=(3.5, 2.5), axes=None)[source]

Plot data points.

Defined in Section 2.4

d2l.torch.set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend)[source]

Set the axes for matplotlib.

Defined in Section 2.4

d2l.torch.set_figsize(figsize=(3.5, 2.5))[source]

Set the figure size for matplotlib.

Defined in Section 2.4

d2l.torch.show_heatmaps(matrices, xlabel, ylabel, titles=None, figsize=(2.5, 2.5), cmap='Reds')[source]

Show heatmaps of matrices.

Defined in Section 11.1

d2l.torch.show_list_len_pair_hist(legend, xlabel, ylabel, xlist, ylist)[source]

Plot the histogram for list length pairs.

Defined in Section 10.5

d2l.torch.try_all_gpus()[source]

Return all available GPUs, or [cpu(),] if no GPU exists.

Defined in Section 6.7

d2l.torch.try_gpu(i=0)[source]

Return gpu(i) if it exists, otherwise return cpu().

Defined in Section 6.7

d2l.torch.use_svg_display()[source]

Use the SVG format to display a plot in Jupyter.

Defined in Section 2.4