Pytorch
Basic
Lambda Function
An anonymous function that is not meant to be stored or reused.
Lambda Functions with Practical Examples in Python
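A minimal sketch of the idea: a lambda is a single-expression function, often passed inline (e.g. as a sort key) instead of being named with `def`.

```python
# A lambda is an anonymous, single-expression function.
square = lambda x: x ** 2
print(square(4))  # 16

# Typical use: a short inline key function for sorting.
pairs = [(1, 'b'), (2, 'a')]
pairs.sort(key=lambda p: p[1])  # sort by the second element
print(pairs)  # [(2, 'a'), (1, 'b')]
```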
Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
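The line above only picks a device; tensors and models must then be moved to it explicitly. A small sketch (variable names are illustrative):

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Tensors (and models) must be moved to the device before use.
x = torch.ones(3)
x = x.to(device)
print(x.device)  # cpu or cuda:0
```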
Tensor
Tensor and NumPy Transfer
np_array = np.array(data)
x_np = torch.from_numpy(np_array)
t = torch.ones(5)
n = t.numpy()
Tensors on the CPU and NumPy arrays can share their underlying memory, so changing one changes the other.
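A quick demonstration of the shared memory, building on the `t` / `n` lines above: an in-place change to the tensor shows up in the NumPy array.

```python
import torch

t = torch.ones(5)   # CPU tensor
n = t.numpy()       # NumPy view sharing the same memory

t.add_(1)           # in-place add on the tensor...
print(n)            # ...is visible in the array: [2. 2. 2. 2. 2.]
```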
Automatic Differentiation
There are reasons you might want to disable gradient tracking:
- To mark some parameters in your neural network as frozen. This is a very common scenario when finetuning a pretrained network.
- To speed up computation when you are only doing a forward pass, because computations on tensors that do not track gradients are more efficient.
with torch.no_grad():
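A minimal sketch of the `no_grad` context: operations inside the block build no autograd graph, so their results do not require gradients.

```python
import torch

x = torch.ones(3, requires_grad=True)

with torch.no_grad():
    y = x * 2   # no graph is built inside this block

print(y.requires_grad)  # False
```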
Optimizing Model Parameters
optimizer.zero_grad()
loss.backward()
optimizer.step()
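The three calls above in context, assuming a toy setup (model, loss, and data are hypothetical stand-ins): clear old gradients, backpropagate the loss, then update the parameters.

```python
import torch
import torch.nn as nn

# Toy setup: a linear model fit to random data.
model = nn.Linear(4, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

X = torch.randn(8, 4)
y = torch.randn(8, 1)

loss = loss_fn(model(X), y)
optimizer.zero_grad()   # clear gradients from the previous step
loss.backward()         # compute gradients of loss w.r.t. parameters
optimizer.step()        # update parameters using those gradients
```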
Coding Order
- Device Configuration (GPU)
- Hyper Parameters
- Data Preprocessing and Data Loader
- Model Declaration
- Loss and Optimizer
- Train and Test
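The coding order above can be sketched end to end. This is a hedged toy example (fitting y = 2x with one linear layer; all names and hyperparameters are illustrative), not a full training script:

```python
import torch
import torch.nn as nn

# 1. Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# 2. Hyperparameters
lr, num_epochs = 0.1, 20

# 3. Data (stand-in for real preprocessing + DataLoader)
X = torch.linspace(0, 1, 16).unsqueeze(1).to(device)
y = 2 * X

# 4. Model declaration
model = nn.Linear(1, 1).to(device)

# 5. Loss and optimizer
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=lr)

# 6. Train
model.train()
for _ in range(num_epochs):
    loss = loss_fn(model(X), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# 7. Test / inference
model.eval()
with torch.no_grad():
    pred = model(torch.tensor([[3.0]], device=device))
```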
Train & Test
model.train()
model.eval()
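A short sketch of what these two calls toggle: layers such as Dropout and BatchNorm behave differently in training and evaluation mode, and `model.training` records the current mode.

```python
import torch.nn as nn

# Dropout is active in train mode, disabled in eval mode.
model = nn.Sequential(nn.Linear(2, 2), nn.Dropout(p=0.5))

model.train()            # enable dropout / batch-norm updates
print(model.training)    # True

model.eval()             # switch layers to inference behavior
print(model.training)    # False
```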