Recently I've been experimenting with SGD in training, so I'm leaving a note here.
Overfitting keeps turning out to be a hard problem, especially on small datasets.
import torch
import torch.nn as nn
import torch.optim as optim

# Define a simple neural network model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(10, 50)
        self.fc2 = nn.Linear(50, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Create a model instance
model = Net()

# Define training data and labels
input_data = torch.randn(32, 10)
labels = torch.randn(32, 2)

# Define the loss function
criterion = nn.MSELoss()

# Define the optimizer; weight_decay adds the L2 weight-decay term
optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=0.001)

# num_epochs was not defined in the original snippet; picking a small value
num_epochs = 100

# Training loop
for epoch in range(num_epochs):
    # Forward pass
    outputs = model(input_data)
    loss = criterion(outputs, labels)

    # Backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Print the loss
    print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')
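A side note on what `weight_decay` actually does: for plain SGD (no momentum), setting `weight_decay=λ` is mathematically the same as adding a `0.5 * λ * ‖θ‖²` penalty to the loss yourself, since both add `λ·θ` to the gradient. A minimal sketch to convince ourselves (the layer sizes and values below are just made up for the check, not part of the original example):

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

# Two identical linear layers starting from the same weights
a = nn.Linear(4, 1)
b = nn.Linear(4, 1)
b.load_state_dict(a.state_dict())

x = torch.randn(8, 4)
y = torch.randn(8, 1)
wd = 0.01

# A: built-in weight_decay on the optimizer
opt_a = optim.SGD(a.parameters(), lr=0.1, weight_decay=wd)
loss_a = nn.functional.mse_loss(a(x), y)
opt_a.zero_grad()
loss_a.backward()
opt_a.step()

# B: no weight_decay, but add the L2 penalty to the loss by hand
opt_b = optim.SGD(b.parameters(), lr=0.1)
penalty = sum((p ** 2).sum() for p in b.parameters())
loss_b = nn.functional.mse_loss(b(x), y) + 0.5 * wd * penalty
opt_b.zero_grad()
loss_b.backward()
opt_b.step()

# After one step the two sets of parameters should match
for pa, pb in zip(a.parameters(), b.parameters()):
    print(torch.allclose(pa, pb, atol=1e-6))
```

Note this equivalence holds for vanilla SGD; with momentum or adaptive optimizers like Adam, `weight_decay` behaves differently from an explicit L2 penalty (that difference is what AdamW addresses).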
Thanks, ChatGPT.
Comments
Suggestions and sharing are welcome! Hoping to exchange ideas. Thanks!