In linear regression analysis, a least-squares approach can be used to find the best b-values in the regression equation. The least-squared-error criterion leads to a set of normal equations, one per unknown b-value, that can be solved directly. That approach does not work for logistic regression. Instead, suppose a set of b-values is chosen and we ask, using statistical principles, how likely the observed data are under the logistic model those values define. Picking the b-values that make the observed data most likely is known as a maximum likelihood solution. In the case of linear regression, under the assumption that errors follow a normal probability distribution, the maximum likelihood solution turns out to be mathematically equivalent to the least-squares solution. Here, however, maximum likelihood must be used directly.
B′X = b₀ + b₁x₁ + ... + bᵥxᵥ.
The log-likelihood equation is not linear in the unknown b-value parameters, so it must be solved with the non-linear optimization techniques described in the Computational Technique section.
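As a concrete illustration of the direct maximum likelihood approach, the sketch below minimizes the negative log-likelihood of a logistic model with a general-purpose non-linear optimizer. The data set, the choice of `scipy.optimize.minimize` with BFGS, and all variable names are assumptions for illustration, not the text's own computational procedure.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(b, X, y):
    # Linear predictor B'X = b0 + b1*x1 + ... + bv*xv
    # (the intercept b0 is handled by a column of ones in X).
    z = X @ b
    # Logistic log-likelihood: sum_i [ y_i*z_i - log(1 + e^{z_i}) ].
    # np.logaddexp(0, z) computes log(1 + e^z) in a numerically stable way.
    return np.sum(np.logaddexp(0.0, z)) - y @ z

# Small synthetic data set (illustrative values only).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
true_b = np.array([-0.5, 1.0, 2.0])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_b))).astype(float)

# Non-linear optimization: no closed-form normal equations exist here,
# so the b-values are found by iterative search.
res = minimize(neg_log_likelihood, x0=np.zeros(3), args=(X, y), method="BFGS")
print(res.x)  # estimated b-values
```

Because the negative log-likelihood of the logistic model is convex in the b-values, a quasi-Newton method such as BFGS converges to the global maximum likelihood estimate from any starting point.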