Least Squares

NonArchimedeanMachineLearning.make_ordinary_least_squares_loss — Method
make_ordinary_least_squares_loss(data::Vector{Tuple{Vector{S}, Vector{T}}})::Loss where {S, T}

Create an ordinary least squares loss function for linear regression.

Given training data {(x₁, y₁), ..., (xₙ, yₙ)}, constructs a loss function L(A, b) = Σᵢ ||Axᵢ + b - yᵢ||² where the parameters are matrix A and vector b.

Arguments

  • data::Vector{Tuple{Vector{S}, Vector{T}}}: Training data as (input_vector, output_vector) pairs

Returns

Loss: Loss structure with closures for evaluation and gradient computation

Parameter Ordering

For m-dimensional outputs and n-dimensional inputs, the parameters are ordered as:

  • Indices 1 to m*n: Matrix A entries in row-major order [A₁₁, A₁₂, ..., A₁ₙ, A₂₁, ..., Aₘₙ]
  • Indices m*n+1 to m*n+m: Vector b entries [b₁, ..., bₘ]
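This row-major ordering can be sketched in plain Julia. The helper names `pack_params` and `unpack_params` are illustrative only, not part of the package API; note that Julia's `vec` is column-major, so a transpose is needed first.

```julia
# Illustrative helpers (not part of the package API) showing the
# row-major parameter ordering [A₁₁, ..., A₁ₙ, A₂₁, ..., Aₘₙ, b₁, ..., bₘ].
function pack_params(A::Matrix, b::Vector)
    # vec() flattens column-major in Julia, so transpose first
    # to obtain the row-major order used here.
    return vcat(vec(permutedims(A)), b)
end

function unpack_params(p::Vector, m::Int, n::Int)
    # Undo the row-major flattening: reshape to n×m, then transpose back.
    A = permutedims(reshape(p[1:m*n], n, m))
    b = p[m*n+1:end]
    return A, b
end
```

For example, a 2×3 matrix `A` and 2-vector `b` pack into an 8-element parameter vector, and `unpack_params(p, 2, 3)` recovers them.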

Notes

The loss function is constructed symbolically using LinearPolynomial and PolydiscFunction operations.
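For reference, the loss being constructed can be written out numerically in a few lines of plain Julia. This is only a sketch of the mathematical definition L(A, b) = Σᵢ ||Axᵢ + b − yᵢ||²; the package instead builds it symbolically via LinearPolynomial and PolydiscFunction, and the name `ols_loss` is illustrative.

```julia
# Numeric sketch of L(A, b) = Σᵢ ‖A*xᵢ + b - yᵢ‖² over the training data
# (illustrative only; the package constructs this symbolically).
function ols_loss(A::Matrix, b::Vector, data)
    return sum(sum(abs2, A * x + b - y) for (x, y) in data)
end
```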

NonArchimedeanMachineLearning.solve_linear_system — Method
solve_linear_system(A::Matrix{S}, b::Vector{S}, y::Vector{S})::Loss where S

Create a least squares loss function for solving a linear system.

Given matrix A, vectors b and y, constructs a loss function L(x) = ||Ax + b - y||² where x is the parameter to optimize.

Arguments

  • A::Matrix{S}: Coefficient matrix (m × n)
  • b::Vector{S}: Offset vector (m-dimensional)
  • y::Vector{S}: Target vector (m-dimensional)

Returns

Loss: Loss structure with closures for evaluation and gradient computation

Notes

The parameter x is n-dimensional, where n is the number of columns of A. The loss measures the squared Euclidean norm of the residual (Ax + b - y).
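The loss and its gradient have simple closed forms: L(x) = ||Ax + b − y||² and ∇L(x) = 2Aᵀ(Ax + b − y). A plain-Julia sketch (the names `linsys_loss` and `linsys_grad` are illustrative; the package returns the corresponding closures inside a Loss structure):

```julia
# Sketch of L(x) = ‖A*x + b - y‖² and its gradient ∇L(x) = 2*A'*(A*x + b - y)
# (illustrative only; not the package's symbolic construction).
linsys_loss(A, b, y, x) = sum(abs2, A * x + b - y)
linsys_grad(A, b, y, x) = 2 * A' * (A * x + b - y)
```

A gradient of zero at some x means Ax + b = y is solved in the least squares sense.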
