qsprpred.models.assessment package
Subpackages
- qsprpred.models.assessment.metrics package
Submodules
qsprpred.models.assessment.classification module
qsprpred.models.assessment.methods module
This module holds assessment methods for QSPRModels.
- class qsprpred.models.assessment.methods.CrossValAssessor(scoring: str | Callable[[Iterable, Iterable], float], split: DataSplit | None = None, monitor: AssessorMonitor | None = None, use_proba: bool = True, mode: EarlyStoppingMode | None = None, round: int = 5, split_multitask_scores: bool = False)[source]
Bases:
ModelAssessor
Perform cross validation on a model.
- Variables:
useProba (bool) – use predictProba instead of predict for classification
monitor (AssessorMonitor) – monitor to use for assessment, if None, a BaseMonitor is used
mode (EarlyStoppingMode) – mode to use for early stopping
split (DataSplit) – split to use for cross validation (default: KFold, n_splits=5)
round (int) – number of decimal places to round predictions to (default: 5)
splitMultitaskScores (bool) – whether to split the scores per task for multitask models
Initialize the evaluation method class.
- Parameters:
scoring (str | Callable[[Iterable, Iterable], float]) – metric name or scoring function to use
monitor (AssessorMonitor) – monitor to track the evaluation
use_proba (bool) – use probabilities for classification models
mode (EarlyStoppingMode) – early stopping mode for fitting
split_multitask_scores (bool) – whether to split the scores per task for multitask models
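The `scoring` argument accepts either a metric name or any callable matching `Callable[[Iterable, Iterable], float]`. As an illustration of that contract, here is a minimal root-mean-square-error callable in plain Python (no qsprpred dependency; the `CrossValAssessor(scoring=rmse)` usage shown in the comment is a hypothetical sketch, not verified against the library):

```python
import math
from typing import Iterable


def rmse(y_true: Iterable, y_pred: Iterable) -> float:
    """Root-mean-square error, matching Callable[[Iterable, Iterable], float]."""
    true, pred = list(y_true), list(y_pred)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(true, pred)) / len(true))


# A callable like this could then be passed as the scoring argument, e.g.:
#   assessor = CrossValAssessor(scoring=rmse)  # hypothetical usage
score = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # sqrt(4/3)
```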
- class qsprpred.models.assessment.methods.ModelAssessor(scoring: str | Callable[[Iterable, Iterable], float], monitor: AssessorMonitor | None = None, use_proba: bool = True, mode: EarlyStoppingMode | None = None, split_multitask_scores: bool = False)[source]
Bases:
ABC
Base class for assessment methods.
- Variables:
scoreFunc (Metric) – scoring function to use; it should match the output of the evaluation method (e.g. if the evaluation method returns class probabilities, the scoring function should support class probabilities)
monitor (AssessorMonitor) – monitor to use for assessment, if None, a BaseMonitor is used
useProba (bool) – use probabilities for classification models
mode (EarlyStoppingMode) – early stopping mode for fitting
splitMultitaskScores (bool) – whether to split the scores per task for multitask models
Initialize the evaluation method class.
- Parameters:
scoring (str | Callable[[Iterable, Iterable], float]) – metric name or scoring function to use
monitor (AssessorMonitor) – monitor to track the evaluation
use_proba (bool) – use probabilities for classification models
mode (EarlyStoppingMode) – early stopping mode for fitting
split_multitask_scores (bool) – whether to split the scores per task for multitask models
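When `use_proba` is true, the evaluation passes class probabilities rather than hard labels to the scoring function, so the callable must accept per-sample probability rows. A minimal sketch of such a probability-aware scorer, in plain Python and independent of qsprpred (the function name and the binary, two-column probability layout are illustrative assumptions):

```python
import math
from typing import Iterable, Sequence


def binary_log_loss(
    y_true: Iterable[int], y_proba: Iterable[Sequence[float]]
) -> float:
    """Mean negative log-likelihood over [p(class 0), p(class 1)] rows."""
    losses = [
        -math.log(probs[label])  # probability assigned to the true class
        for label, probs in zip(y_true, y_proba)
    ]
    return sum(losses) / len(losses)
```

A label-based metric such as accuracy would fail on this input, which is why the scoring function must match the evaluation output, as noted for scoreFunc above.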
- class qsprpred.models.assessment.methods.TestSetAssessor(scoring: str | Callable[[Iterable, Iterable], float], monitor: AssessorMonitor | None = None, use_proba: bool = True, mode: EarlyStoppingMode | None = None, round: int = 5, split_multitask_scores: bool = False)[source]
Bases:
ModelAssessor
Assess a model on a test set.
- Variables:
useProba (bool) – use predictProba instead of predict for classification
monitor (AssessorMonitor) – monitor to use for assessment, if None, a BaseMonitor is used
mode (EarlyStoppingMode) – mode to use for early stopping
round (int) – number of decimal places to round predictions to (default: 5)
splitMultitaskScores (bool) – whether to split the scores per task for multitask models
Initialize the evaluation method class.
- Parameters:
scoring (str | Callable[[Iterable, Iterable], float]) – metric name or scoring function to use
monitor (AssessorMonitor) – monitor to track the evaluation
use_proba (bool) – use probabilities for classification models
mode (EarlyStoppingMode) – early stopping mode for fitting
split_multitask_scores (bool) – whether to split the scores per task for multitask models
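To make the `split_multitask_scores` option concrete: for a multitask model, predictions form one column per task, and splitting the scores means applying the metric to each column separately instead of once over everything. A small plain-Python sketch of that idea (helper names are illustrative, not qsprpred internals):

```python
from typing import Callable, Iterable, Sequence


def mae(y_true: Iterable[float], y_pred: Iterable[float]) -> float:
    """Mean absolute error over paired values."""
    true, pred = list(y_true), list(y_pred)
    return sum(abs(t - p) for t, p in zip(true, pred)) / len(true)


def per_task_scores(
    y_true: Sequence[Sequence[float]],
    y_pred: Sequence[Sequence[float]],
    score: Callable[[Iterable, Iterable], float],
) -> list[float]:
    """Score each task (column) separately, one value per task."""
    n_tasks = len(y_true[0])
    return [
        score([row[i] for row in y_true], [row[i] for row in y_pred])
        for i in range(n_tasks)
    ]


# Two samples, two tasks: task 0 is predicted perfectly, task 1 is not.
scores = per_task_scores([[1.0, 0.0], [3.0, 2.0]], [[1.0, 1.0], [3.0, 4.0]], mae)
```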