Random Descent

NonArchimedeanMachineLearning.random_descent – Method
random_descent(loss::Loss, param::ValuationPolydisc{S,T,N}, state::Int, settings::Tuple{Bool,Int}) where {S,T,N}

Perform one step of random descent (baseline optimizer).

BASELINE ONLY: Randomly selects a child without evaluating the loss. Used to demonstrate that structured optimization algorithms outperform random exploration.

Arguments

  • loss::Loss: The loss function structure (not used in selection)
  • param::ValuationPolydisc{S,T,N}: Current parameter values
  • state::Int: Unused state parameter (for compatibility)
  • settings::Tuple{Bool,Int}: (strict, degree) where strict enables single-coordinate descent

Returns

Tuple{ValuationPolydisc{S,T,N}, Int, Bool}: Randomly selected child, updated state, and convergence status

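For baseline runs, the step function can also be driven by hand rather than through an optimization setup. The sketch below is a minimal illustration, assuming `loss::Loss` and `param::ValuationPolydisc` have already been constructed; `run_random_descent` and its `maxiter` keyword are hypothetical names for this example, not part of the package API.

# Minimal sketch: drive random_descent step by step.
# Assumes `loss` and `param` are already constructed.
function run_random_descent(loss, param; maxiter = 100, settings = (false, 1))
    state = 1  # starting state, as in random_descent_init
    for _ in 1:maxiter
        # random_descent returns the randomly selected child,
        # the updated state, and a convergence flag
        param, state, converged = random_descent(loss, param, state, settings)
        converged && break  # stop once the convergence flag is set
    end
    return param
end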
NonArchimedeanMachineLearning.random_descent_init – Method
random_descent_init(param::ValuationPolydisc{S,T,N}, loss::Loss, state::Int, settings::Tuple{Bool,Int}) where {S,T,N}

Initialize an optimization setup for random descent.

BASELINE ONLY: This optimizer exists for comparison, to demonstrate the effectiveness of structured optimization algorithms over random exploration.

Arguments

  • param::ValuationPolydisc{S,T,N}: Initial parameter values
  • loss::Loss: The loss function structure
  • state::Int: Starting state (typically 1)
  • settings::Tuple{Bool,Int}: (strict, degree) controlling descent behavior

Returns

OptimSetup: Configured optimization setup for random descent

Example

# Create baseline and structured optimizers for comparison
# (assumes `param::ValuationPolydisc` and `loss::Loss` are already constructed)
random_optim = random_descent_init(param, loss, 1, (false, 1))
greedy_optim = greedy_descent_init(param, loss, 1, (false, 1))

# Compare performance
for i in 1:20
    step!(random_optim)
    step!(greedy_optim)
end

println("Random: ", eval_loss(random_optim))
println("Greedy: ", eval_loss(greedy_optim))  # Should be much better
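Because random_descent picks children without consulting the loss, the random trajectory serves as a control: a structured optimizer such as the greedy variant above should reach a noticeably lower loss over the same number of steps.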