Random Descent
NonArchimedeanMachineLearning.random_descent — Method
random_descent(loss::Loss, param::ValuationPolydisc{S,T,N}, state::Int, settings::Tuple{Bool,Int}) where {S,T,N}

Perform one step of random descent (baseline optimizer).
BASELINE ONLY: Randomly selects a child without evaluating the loss. Used to demonstrate that structured optimization algorithms outperform random exploration.
Arguments
- loss::Loss: The loss function structure (not used in selection)
- param::ValuationPolydisc{S,T,N}: Current parameter values
- state::Int: Unused state parameter (for compatibility)
- settings::Tuple{Bool,Int}: (strict, degree), where strict enables single-coordinate descent
Returns
Tuple{ValuationPolydisc{S,T,N}, Int, Bool}: Randomly selected child, updated state, and convergence status
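A minimal sketch of driving random_descent by hand, assuming param::ValuationPolydisc and loss::Loss have already been constructed for your problem; the wrapper name run_random_descent and the max_steps cap are illustrative, not part of the package:

# Hedged sketch: take random steps until the method reports convergence.
# Only random_descent itself comes from the package; the rest is glue code.
function run_random_descent(param, loss; max_steps = 100)
    state = 1
    for _ in 1:max_steps
        param, state, converged = random_descent(loss, param, state, (false, 1))
        converged && break  # stop as soon as convergence is signalled
    end
    return param
end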
NonArchimedeanMachineLearning.random_descent_init — Method
random_descent_init(param::ValuationPolydisc{S,T,N}, loss::Loss, state::Int, settings::Tuple{Bool,Int}) where {S,T,N}

Initialize an optimization setup for random descent.
BASELINE ONLY: This optimizer serves as a point of comparison to demonstrate the effectiveness of structured optimization algorithms.
Arguments
- param::ValuationPolydisc{S,T,N}: Initial parameter values
- loss::Loss: The loss function structure
- state::Int: Starting state (typically 1)
- settings::Tuple{Bool,Int}: (strict, degree), controlling descent behavior
Returns
OptimSetup: Configured optimization setup for random descent
Example
# Create baseline optimizer for comparison
random_optim = random_descent_init(param, loss, 1, (false, 1))
greedy_optim = greedy_descent_init(param, loss, 1, (false, 1))
# Compare performance
for i in 1:20
    step!(random_optim)
    step!(greedy_optim)
end
println("Random: ", eval_loss(random_optim))
println("Greedy: ", eval_loss(greedy_optim)) # Should be much better