
Part 2 lesson 9, 01 matmul & 02 fully-connected | fastai 2019 course v3

Apr 23, 2021

Quick code notes (for review): notebooks 01_matmul and 02_fully_connected at https://github.com/fastai/course-v3/tree/master/nbs/dl2

  1. matrix multiplication function (see the matmul sketch after this list)
    1. classic matmul (3 for loops)
    2. elementwise operations (2 for loops)
    3. broadcasting (1 for loop)
    4. torch.einsum (0 for loops)
  2. forward and backward pass functions
    • gradients of the mse, relu, and linear functions
  3. Layers as classes (see the layer sketch after this list)
    • Relu, Linear, MSE -> Model
      • __call__ & backward
  4. Module.forward()
    • Module -> Relu, Linear, MSE
    • Model __call__ & backward
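
For review, a sketch of the four matmul variants. This follows the notebook's progression from memory, so details may differ slightly from the course code:

    import torch

    def matmul_loops(a, b):
        # 1. classic matmul: three nested for loops
        ar, ac = a.shape
        br, bc = b.shape
        assert ac == br
        c = torch.zeros(ar, bc)
        for i in range(ar):
            for j in range(bc):
                for k in range(ac):
                    c[i, j] += a[i, k] * b[k, j]
        return c

    def matmul_elementwise(a, b):
        # 2. the innermost loop becomes an elementwise product plus a sum
        ar, ac = a.shape
        c = torch.zeros(ar, b.shape[1])
        for i in range(ar):
            for j in range(b.shape[1]):
                c[i, j] = (a[i, :] * b[:, j]).sum()
        return c

    def matmul_broadcast(a, b):
        # 3. broadcast one row of a against all of b: one loop left
        ar, ac = a.shape
        c = torch.zeros(ar, b.shape[1])
        for i in range(ar):
            c[i] = (a[i].unsqueeze(-1) * b).sum(dim=0)
        return c

    def matmul_einsum(a, b):
        # 4. einsum expresses the whole contraction: zero Python loops
        return torch.einsum('ik,kj->ij', a, b)

    a, b = torch.randn(5, 3), torch.randn(3, 4)
    assert torch.allclose(matmul_loops(a, b), a @ b, atol=1e-5)
    assert torch.allclose(matmul_einsum(a, b), a @ b, atol=1e-5)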
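
And a sketch of the layers-as-classes step: each layer caches its input and output in __call__ and writes gradients into .g attributes in backward. This is condensed from memory of the 02 notebook, not copied verbatim:

    class Relu():
        def __call__(self, inp):
            self.inp, self.out = inp, inp.clamp_min(0.)
            return self.out
        def backward(self):
            # relu gradient: 1 where the input was positive, else 0
            self.inp.g = (self.inp > 0).float() * self.out.g

    class Lin():
        def __init__(self, w, b):
            self.w, self.b = w, b
        def __call__(self, inp):
            self.inp, self.out = inp, inp @ self.w + self.b
            return self.out
        def backward(self):
            self.inp.g = self.out.g @ self.w.t()
            self.w.g = self.inp.t() @ self.out.g
            self.b.g = self.out.g.sum(0)

    class Mse():
        def __call__(self, inp, targ):
            self.inp, self.targ = inp, targ
            self.out = (inp.squeeze(-1) - targ).pow(2).mean()
            return self.out
        def backward(self):
            n = self.targ.shape[0]
            self.inp.g = 2. * (self.inp.squeeze(-1) - self.targ).unsqueeze(-1) / n

    class Model():
        def __init__(self, w1, b1, w2, b2):
            self.layers = [Lin(w1, b1), Relu(), Lin(w2, b2)]
            self.loss = Mse()
        def __call__(self, x, targ):
            for l in self.layers:
                x = l(x)
            return self.loss(x, targ)
        def backward(self):
            self.loss.backward()
            for l in reversed(self.layers):
                l.backward()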

Question: what is the difference between layers as classes and the Module.forward() refactor?
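
My short answer: the layer classes above each repeat the same bookkeeping (cache the input and output in __call__, compute gradients in backward). The Module.forward() step factors that plumbing into a base class, so each layer only defines forward and bwd. A sketch, again from memory:

    class Module():
        def __call__(self, *args):
            self.args = args
            self.out = self.forward(*args)
            return self.out
        def backward(self):
            self.bwd(self.out, *self.args)

    class Relu(Module):
        def forward(self, inp):
            return inp.clamp_min(0.)
        def bwd(self, out, inp):
            inp.g = (inp > 0).float() * out.g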

  1. nn.Linear and nn.Module
    • nn.Module -> Model
    • nn.Linear, nn.ReLU
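
A minimal sketch of the nn.Module version. I use nn.Sequential here so the submodules' parameters get registered; the notebook arrives at this in its own steps:

    from torch import nn

    class Model(nn.Module):
        def __init__(self, n_in, nh, n_out):
            super().__init__()
            # nn.Sequential registers the children, so model.parameters() sees them
            self.layers = nn.Sequential(nn.Linear(n_in, nh), nn.ReLU(), nn.Linear(nh, n_out))
        def forward(self, x):
            return self.layers(x)

    model = Model(784, 50, 1)  # MNIST-sized input, single output for the MSE setup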

Todo: kaiming_normal is deprecated
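
The fix is the in-place variant with a trailing underscore, torch.nn.init.kaiming_normal_, which replaced the deprecated kaiming_normal:

    import torch
    from torch import nn

    w1 = torch.zeros(784, 50)
    nn.init.kaiming_normal_(w1)  # in-place; the old kaiming_normal now warns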


  1. function get_data
    • returns the train/valid x, y as torch tensors
  2. class Dataset (see the data sketch after this list)
    • initialized with x, y
    • implements __len__ and __getitem__
  3. function get_dls
    • validation batch size is twice the training batch size (no gradients are stored while evaluating, so larger batches fit in memory)
    • why shuffle the training data? so consecutive batches are decorrelated and updates don't follow a fixed sample order
  4. class DataBunch (see the training sketch after this list)
    • wraps train_dl and valid_dl; exposes train_ds / valid_ds as properties (each DataLoader's dataset)
  5. function get_model
    • returns the model and its optimizer
    • uses the torch.nn library
  6. class Learner
    • stores model, opt, loss function, and data
  7. function create_learner
    • returns a Learner instance
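
A sketch of the data pieces above, close to the course code but reconstructed from memory (here I use torch's own DataLoader):

    import torch
    from torch.utils.data import DataLoader

    class Dataset():
        # minimal map-style dataset: pairs up x and y
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __len__(self):
            return len(self.x)
        def __getitem__(self, i):
            return self.x[i], self.y[i]

    def get_dls(train_ds, valid_ds, bs, **kwargs):
        # shuffle only the training set; validation gets 2x the batch size
        # because no gradients are stored while evaluating
        return (DataLoader(train_ds, batch_size=bs, shuffle=True, **kwargs),
                DataLoader(valid_ds, batch_size=bs * 2, **kwargs))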
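
And the training pieces, under the same caveat (the lr and nh defaults are illustrative):

    from torch import nn, optim

    class DataBunch():
        # bundles the two DataLoaders; c is the number of target classes
        def __init__(self, train_dl, valid_dl, c=None):
            self.train_dl, self.valid_dl, self.c = train_dl, valid_dl, c
        @property
        def train_ds(self):
            return self.train_dl.dataset
        @property
        def valid_ds(self):
            return self.valid_dl.dataset

    def get_model(data, lr=0.5, nh=50):
        # infer the input width from the data, build a small net, pair it with SGD
        m = data.train_ds.x.shape[1]
        model = nn.Sequential(nn.Linear(m, nh), nn.ReLU(), nn.Linear(nh, data.c))
        return model, optim.SGD(model.parameters(), lr=lr)

    class Learner():
        # just stores everything a training loop needs in one place
        def __init__(self, model, opt, loss_func, data):
            self.model, self.opt, self.loss_func, self.data = model, opt, loss_func, data

    def create_learner(model_func, loss_func, data):
        return Learner(*model_func(data), loss_func, data)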