Optim Setup
NonArchimedeanMachineLearning.Loss — Type
Loss{F1,F2}

A batch-oriented loss function structure for optimization.

Wraps both an evaluation function and a gradient function. Both functions should be closures that capture any necessary data (for example, training data) and operate on batches.
Fields
eval::F1: Function with signature (params) -> values, where params is a collection of parameter polydiscs and values is the corresponding collection of loss values
grad::F2: Function with signature (tangents) -> values, where tangents is a collection of tangent vectors and values is the corresponding collection of directional derivatives
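For illustration, here is a minimal sketch of constructing a Loss over toy data. It assumes the default two-argument constructor Loss(eval_fn, grad_fn) and uses plain Float64 stand-ins where real code would use p-adic polydiscs and tangent vectors:

# Stand-in training data (real code would use p-adic values).
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

# Batch evaluation: one squared-error loss per candidate parameter.
eval_fn = params -> [sum((y - w * x)^2 for (x, y) in zip(xs, ys)) for w in params]

# Batch directional derivatives: one value per tangent direction t.
# How tangents encode their base point is an assumption of this sketch;
# here they are scalar directions at a fixed base point w0.
w0 = 0.5
grad_fn = tangents -> [sum(-2 * (y - w0 * x) * x * t for (x, y) in zip(xs, ys)) for t in tangents]

loss = Loss(eval_fn, grad_fn)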
NonArchimedeanMachineLearning.OptimSetup — Type
OptimSetup{S,T,N,U,V,L,O}

Complete optimization setup containing loss, parameters, optimizer, and state.
Mutable structure that captures everything needed for optimization. The loss function should have data baked in as a closure.
Fields
loss::L: Loss function (closure over data) with a batch evaluation method and a batch directional-derivative method
param::ValuationPolydisc{S,T,N}: Current parameter values (mutable during optimization)
optimiser::O: Optimizer function with signature (loss, param, state, context) -> (new_param, new_state, converged); see the sketch after the type parameters below
state::U: Optimization state (e.g., previous steps, momentum, etc.)
context::V: Optimizer settings (e.g., learning rate, degree, etc.)
converged::Bool: Whether the optimizer has converged
Type Parameters
S: Coefficient type (typically p-adic numbers)
T: Radius/valuation type
N: Dimension of parameter space
U: State type
V: Context type
L: Concrete loss type
O: Concrete optimizer callable type
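To make the optimiser contract concrete, here is a hypothetical no-op optimiser matching the documented signature. The body is a placeholder, not the package's actual algorithm:

# Hypothetical optimiser callable: receives the loss, the current parameter
# polydisc, the state, and the context, and must return a triple
# (new_param, new_state, converged).
function noop_optimiser(loss, param, state, context)
    new_param = param     # a real optimiser would refine the polydisc here
    new_state = state     # e.g., record the step just taken
    converged = true      # signal that no further refinement is possible
    return new_param, new_state, converged
end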
NonArchimedeanMachineLearning.eval_loss — Method
eval_loss(optim::OptimSetup)

Evaluate the loss function at the current parameter values.
Arguments
optim::OptimSetup: The optimization setup
Returns
Scalar value of the loss at the current parameters
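As a usage sketch (optim is an already-constructed setup; variable names are illustrative), the loss can be logged before and after a run:

before = eval_loss(optim)
optimize!(optim, 100)
after = eval_loss(optim)
println("loss: $before -> $after")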
NonArchimedeanMachineLearning.has_converged — Method
has_converged(optim::OptimSetup) -> Bool

Check whether the optimization has converged.
Convergence is detected when the optimizer can no longer refine parameters, typically because the polydisc radius has reached the precision of the p-adic field.
Arguments
optim::OptimSetup: The optimization setup
Returns
true if the optimization has converged, false otherwise.
NonArchimedeanMachineLearning.optimize! — Method
optimize!(optim::OptimSetup, max_steps::Int; verbose::Bool=false) -> Int

Run optimization until convergence or max_steps, whichever comes first.
Arguments
optim::OptimSetup: The optimization setup
max_steps::Int: Maximum number of steps to take
verbose::Bool=false: If true, print the loss at each step
Returns
The number of steps taken. Check has_converged(optim) to distinguish early convergence from hitting max_steps.
Example
optim = greedy_descent_init(param, loss, 1, (false, 1))
steps = optimize!(optim, 100; verbose=true)
if has_converged(optim)
    println("Converged after $steps steps")
else
    println("Reached max steps ($steps)")
end

NonArchimedeanMachineLearning.step! — Method
step!(optim_setup::OptimSetup)

Perform one optimization step.
Calls the optimizer function to compute new parameters, state, and convergence status, then updates the optimization setup accordingly.
Arguments
optim_setup::OptimSetup: The optimization setup
Notes
Mutates the optimization setup by updating both parameters and state.
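For example, a manual driving loop built on step! and has_converged, equivalent in spirit to optimize! (a sketch; optimize! is the documented entry point and should be preferred):

# Step until convergence or a step budget is exhausted.
function run_steps!(optim, max_steps)
    steps = 0
    while !has_converged(optim) && steps < max_steps
        step!(optim)    # one refinement of parameters and state
        steps += 1
    end
    return steps
end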
NonArchimedeanMachineLearning.update_param! — Method
update_param!(optim::OptimSetup{S,T,N,U,V,L,O}, param::ValuationPolydisc{S,T,N}) where {S,T,N,U,V,L,O}

Update the parameter values in the optimization setup.
Arguments
optim::OptimSetup{S,T,N,U,V,L,O}: The optimization setup
param::ValuationPolydisc{S,T,N}: New parameter values
Notes
Mutates the optimization setup in place.
NonArchimedeanMachineLearning.update_state! — Method
update_state!(optim::OptimSetup{S,T,N,U,V,L,O}, state::U) where {S,T,N,U,V,L,O}

Update the optimizer state in the optimization setup.
Arguments
optim::OptimSetup{S,T,N,U,V,L,O}: The optimization setup
state::U: New state value
Notes
Mutates the optimization setup in place.
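As a usage sketch, the two setters together allow warm-restarting a setup from a fresh starting polydisc with reset optimizer state. Here new_start and fresh_state are hypothetical values of the appropriate types:

update_param!(optim, new_start)    # hypothetical ValuationPolydisc{S,T,N}
update_state!(optim, fresh_state)  # hypothetical state of type U
optimize!(optim, 100)              # resume optimization from the new start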