
Bradley-Terry Model

In many machine learning and decision-making systems, what we encounter is not a directly measurable quality score but a large number of preference judgments in the form of pairwise comparisons, that is, decisions about which of two options is better. Although such pairwise comparison data is simple in form, it implicitly contains rich structural information. Starting from a probabilistic perspective, this article gradually explains how the Bradley–Terry model transforms these preference comparisons into a learnable representation of latent utilities.
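The idea can be sketched in a few lines: each item gets a latent score, the probability that item i beats item j is a logistic function of the score difference, and the scores are fit by maximizing the log-likelihood of the observed comparisons. The toy data and plain gradient-ascent fit below are illustrative, not from the article.

```python
import math

def bt_prob(s_i, s_j):
    # Bradley-Terry: P(i beats j) = exp(s_i) / (exp(s_i) + exp(s_j)),
    # i.e. a logistic function of the score difference.
    return 1.0 / (1.0 + math.exp(s_j - s_i))

def fit_bt(n, comparisons, lr=0.1, steps=500):
    # comparisons: list of (winner, loser) index pairs.
    # Plain gradient ascent on the log-likelihood (a minimal sketch).
    s = [0.0] * n
    for _ in range(steps):
        grad = [0.0] * n
        for w, l in comparisons:
            p = bt_prob(s[w], s[l])   # model's win prob for the observed winner
            grad[w] += 1.0 - p        # d/ds_w of log P(w beats l)
            grad[l] -= 1.0 - p        # d/ds_l of log P(w beats l)
        s = [x + lr * g for x, g in zip(s, grad)]
        # scores are identifiable only up to a shift, so anchor the mean at zero
        m = sum(s) / n
        s = [x - m for x in s]
    return s

# toy comparisons: item 0 beats 1 three times, 1 beats 2 twice, 0 beats 2 once
data = [(0, 1)] * 3 + [(1, 2)] * 2 + [(0, 2)]
scores = fit_bt(3, data)
```

With this data the fitted scores recover the intuitive ranking 0 > 1 > 2, and `bt_prob` then gives a calibrated win probability for any pair.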
Read More

Entropy

In probabilistic modeling and machine learning, entropy is a fundamental concept for quantifying uncertainty. It not only describes the inherent randomness of data, but also implicitly captures the minimum information cost required in prediction and modeling. Many learning objectives that may appear different on the surface, such as maximizing log-likelihood or designing loss functions, can in fact be traced back and understood through the lens of entropy.
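The link between entropy and log-likelihood can be made concrete: entropy is the expected negative log-probability under the true distribution, and cross-entropy (the quantity maximized log-likelihood minimizes) can never fall below it. The distributions below are a toy example, not from the article.

```python
import math

def entropy(p):
    # Shannon entropy in bits: H(p) = -sum_i p_i * log2(p_i)
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    # Expected code length when data follows p but the model is q;
    # equals H(p) + KL(p || q), so it is always >= H(p).
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]      # "true" data distribution (toy)
q = [1/3, 1/3, 1/3]        # an imperfect model (toy)

h  = entropy(p)            # minimum information cost for data ~ p
ce = cross_entropy(p, q)   # the model pays the extra cost KL(p || q)
```

Here `h` is exactly 1.5 bits, while `ce` is larger; the gap is the KL divergence, which maximum-likelihood training drives toward zero.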
Read More

Confusion Matrix

The confusion matrix is a tool for measuring model performance, allowing data scientists to analyze and optimize their models. It is therefore essential to learn when studying machine learning. This article will also introduce accuracy, recall, precision, and the F1 score.
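For a binary classifier, all four metrics mentioned above follow directly from the four cells of the confusion matrix. A minimal sketch, with toy labels that are illustrative rather than from the article:

```python
def confusion_counts(y_true, y_pred):
    # Four cells of the binary confusion matrix:
    # (true positives, false positives, false negatives, true negatives)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

# toy ground truth and predictions
y_true = [1, 1, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]

tp, fp, fn, tn = confusion_counts(y_true, y_pred)
accuracy  = (tp + tn) / (tp + fp + fn + tn)   # fraction of correct predictions
precision = tp / (tp + fp)                    # of predicted positives, how many are real
recall    = tp / (tp + fn)                    # of real positives, how many were found
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean
```

Counting the cells first makes it easy to see why accuracy alone can mislead on imbalanced data: precision and recall use different denominators, and F1 balances the two.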
Read More