Gradient Descent · Loss Landscape
[Interactive visualization: live readouts for step, loss, |grad|, and position (x, y); optimizer toggle (SGD / Momentum) and a Reset button; legend: current position, global minimum, path history.]
The optimizer follows the negative gradient of the loss surface, stepping downhill toward a minimum. Adjust the learning rate to see how step size affects convergence.
[Learning Rate slider: 0.035]
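The update rule behind the visualization can be sketched in a few lines. This is a minimal illustration, not the page's actual source: it assumes a simple elongated quadratic bowl as the loss surface, and implements plain SGD plus the classic momentum (heavy-ball) variant, with the slider's default learning rate of 0.035.

```python
import numpy as np

def loss(p):
    # Assumed toy loss surface: an elongated quadratic bowl
    # with its global minimum at the origin.
    x, y = p
    return 0.5 * (x**2 + 10.0 * y**2)

def grad(p):
    # Analytic gradient of the bowl above.
    x, y = p
    return np.array([x, 10.0 * y])

def descend(p0, lr=0.035, steps=100, momentum=0.0):
    # Step along the negative gradient. With momentum > 0 the update
    # accumulates a velocity term, which smooths the path downhill.
    p = np.array(p0, dtype=float)
    v = np.zeros_like(p)
    path = [p.copy()]
    for _ in range(steps):
        v = momentum * v - lr * grad(p)
        p = p + v
        path.append(p.copy())
    return p, path

p_sgd, path_sgd = descend([3.0, 2.0], lr=0.035, momentum=0.0)
p_mom, path_mom = descend([3.0, 2.0], lr=0.035, momentum=0.9)
```

Comparing `path_sgd` and `path_mom` reproduces what the demo shows: both trajectories head toward the global minimum, and changing `lr` changes how large each step is and whether the iterates converge smoothly, oscillate, or diverge.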