Cross-entropy loss, or log loss, measures the performance of a classification model whose output is a probability value between 0 and 1. Cross-entropy loss increases as the predicted probability diverges from the actual label. So predicting a probability of 0.012 when the actual observation label is 1 would be bad and result in a high loss value. A perfect model would have a log loss of 0.
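As a minimal sketch of this idea (using NumPy; the helper name `binary_cross_entropy` is our own, not a standard API), the binary case averages `-[y*log(p) + (1-y)*log(1-p)]` over the samples:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-15):
    """Binary cross-entropy (log loss) averaged over samples."""
    # Clip predictions away from 0 and 1 so log() is never taken of 0.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# A confident wrong prediction (0.012 for a true label of 1) gives a high loss,
# while a confident correct prediction gives a loss near 0.
print(binary_cross_entropy(np.array([1]), np.array([0.012])))  # ~4.42
print(binary_cross_entropy(np.array([1]), np.array([0.99])))   # ~0.01
```

The clipping step mirrors what most libraries do internally, since a prediction of exactly 0 or 1 would otherwise make the loss infinite.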