Diverse and Generative ML Benchmark (DIGEN)

A modern machine learning benchmark consisting of 40 datasets in tabular numeric format, specially designed to differentiate the performance of leading machine learning (ML) methods, together with a package for reproducible benchmarking that simplifies performance comparisons. DIGEN provides comprehensive information on each dataset, including:

- ground truth: the mathematical formula describing how the target was generated;
- results of exploratory analysis, including feature correlations and a histogram showing how the binary endpoint was derived;
- multiple statistics, including AUROC, AUPRC, and F1 scores;
- Receiver Operating Characteristic (ROC) and Precision-Recall (PRC) charts for tuned ML methods;
- a boxplot of the projected performance of the leading methods after hyperparameter tuning (100 runs of each method, each started with a different random seed).

Apart from providing a collection of datasets and tuned ML methods, DIGEN offers tools to easily tune and optimize the parameters of any novel ML method, to visualize its performance against the leading methods, and to reproduce results.
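The tune-then-compare workflow described above can be sketched with scikit-learn on synthetic data. This is a minimal illustration of the idea, not the DIGEN API: the grid, classifier, and dataset here are stand-ins for the per-method search spaces and tabular datasets that DIGEN ships with.

```python
# Sketch of a tuned-method comparison: tune a classifier's hyperparameters
# by cross-validated grid search, then score it with AUROC on a held-out
# split. Uses scikit-learn and synthetic data only -- not the DIGEN package.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic binary-classification dataset in tabular numeric format.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Small hyperparameter grid (a stand-in for a real search space).
grid = {"n_estimators": [50, 100], "max_depth": [2, 3]}
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    grid,
    scoring="roc_auc",
    cv=3,
)
search.fit(X_tr, y_tr)

# Evaluate the tuned model with AUROC, one of the metrics DIGEN reports.
auc = roc_auc_score(y_te, search.predict_proba(X_te)[:, 1])
print(f"best params: {search.best_params_}, test AUROC: {auc:.3f}")
```

DIGEN repeats this kind of search 100 times per method with different random seeds to produce its performance boxplots.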


View Resource

Penn Machine Learning Benchmarks (PMLB)

This repository contains the code and data for a large, curated set of benchmark datasets for evaluating and comparing supervised machine learning algorithms. The datasets cover a broad range of applications and include binary- and multi-class classification problems as well as regression problems, with combinations of categorical, ordinal, and continuous features.

View Resource