qsprpred.extra.gpu.models package
Submodules
qsprpred.extra.gpu.models.base module
- class qsprpred.extra.gpu.models.base.QSPRModelGPU(base_dir: str, alg: Type | None = None, name: str | None = None, parameters: dict | None = None, autoload=True, random_state: int | None = None)[source]
Initialize a QSPR model instance.
If the model is loaded from file, the data set is not required. Note that the data set is required for fitting and optimization.
- Parameters:
base_dir (str) – base directory of the model, the model files are stored in a subdirectory
{baseDir}/{outDir}/
alg (Type) – estimator class
name (str) – name of the model
parameters (dict) – dictionary of algorithm specific parameters
autoload (bool) – if True, the estimator is loaded from the serialized file if it exists, otherwise a new instance of alg is created
random_state (int) – Random state to use for shuffling and other random operations.
- checkData(ds: QSPRDataset, exception: bool = True) bool
Check if the model has a data set.
- Parameters:
ds (QSPRDataset) – data set to check
exception (bool) – if true, an exception is raised if no data is set
- Returns:
True if data is set, False otherwise (if exception is False)
- Return type:
bool
- property classPath: str
Return the fully qualified class path of the model.
- Returns:
class path of the model
- Return type:
str
- cleanFiles()
Clean up the model files.
Removes the model directory and all its contents.
- convertToNumpy(X: DataFrame | ndarray | QSPRDataset, y: DataFrame | ndarray | QSPRDataset | None = None) tuple[numpy.ndarray, numpy.ndarray] | ndarray
Convert the given data matrix and target matrix to np.ndarray format.
- Parameters:
X (pd.DataFrame, np.ndarray, QSPRDataset) – data matrix
y (pd.DataFrame, np.ndarray, QSPRDataset) – target matrix
- Returns:
data matrix and/or target matrix in np.ndarray format
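For illustration, a minimal sketch of the two call patterns; model here stands for any concrete model instance and is an assumption of this example:

import numpy as np
import pandas as pd

X = pd.DataFrame({"f1": [0.1, 0.2], "f2": [1.0, 2.0]})
y = pd.DataFrame({"target": [0.5, 0.7]})
X_np = model.convertToNumpy(X)           # data matrix only -> np.ndarray
X_np, y_np = model.convertToNumpy(X, y)  # both matrices -> tuple of np.ndarray
assert isinstance(X_np, np.ndarray) and isinstance(y_np, np.ndarray)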
- createPredictionDatasetFromMols(mols: list[str | rdkit.Chem.rdchem.Mol], smiles_standardizer: str | Callable[[str], str] = 'chembl', n_jobs: int = 1, fill_value: float = nan) tuple[qsprpred.data.tables.qspr.QSPRDataset, numpy.ndarray]
Create a QSPRDataset instance from a list of SMILES strings.
- Parameters:
mols (list[str | Mol]) – list of SMILES strings or RDKit molecules
smiles_standardizer (str | Callable) – either chembl, old, or a partial function that reads and standardizes smiles
n_jobs (int) – number of jobs to use for parallel processing
fill_value (float) – value to use for missing values in the feature matrix
- Returns:
a tuple containing the QSPRDataset instance and a boolean mask indicating which molecules failed to be processed
- Return type:
tuple[QSPRDataset, np.ndarray]
- abstract fit(X: DataFrame | ndarray, y: DataFrame | ndarray, estimator: Any = None, mode: EarlyStoppingMode = EarlyStoppingMode.NOT_RECORDING, monitor: FitMonitor = None, **kwargs) Any | tuple[Any, int] | None
Fit the model to the given data matrix or QSPRDataset.
Note: convertToNumpy can be called here to convert the input data to np.ndarray format.
Note: if no estimator is given, the estimator instance of the model is used.
Note: if a model supports early stopping, the fit function should have the early_stopping decorator and the mode argument should be used to set the early stopping mode. If the model does not support early stopping, the mode argument is ignored.
- Parameters:
X (pd.DataFrame, np.ndarray) – data matrix to fit
y (pd.DataFrame, np.ndarray) – target matrix to fit
estimator (Any) – estimator instance to use for fitting
mode (EarlyStoppingMode) – early stopping mode
monitor (FitMonitor) – monitor for the fitting process, if None, the base monitor is used
kwargs – additional arguments to pass to the fit method of the estimator
- Returns:
fitted estimator instance and, in case of early stopping, the number of iterations after which the model stopped training
- Return type:
Any | tuple[Any, int]
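As an illustration, a minimal sketch of how a concrete subclass might implement this abstract method around a scikit-learn-style estimator; the subclass name and the estimator's fit signature are assumptions, not part of this API:

from qsprpred.extra.gpu.models.base import QSPRModelGPU
from qsprpred.models import EarlyStoppingMode  # import path assumed

class MyGPUModel(QSPRModelGPU):  # hypothetical concrete subclass
    def fit(self, X, y, estimator=None, mode=EarlyStoppingMode.NOT_RECORDING,
            monitor=None, **kwargs):
        # Fall back to the model's own estimator if none is given (see note above).
        estimator = estimator if estimator is not None else self.estimator
        # Convert any supported input type to plain numpy arrays (see note above).
        X, y = self.convertToNumpy(X, y)
        estimator.fit(X, y, **kwargs)  # assumed sklearn-style fit
        return estimator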
- fitDataset(ds: QSPRDataset, monitor=None, mode=EarlyStoppingMode.OPTIMAL, save_model=True, save_data=False, **kwargs) str
Train model on the whole attached data set.
**IMPORTANT**: for models that support early stopping, CrossValAssessor should be run first, so that the average number of epochs from the cross-validation with early stopping can be used for fitting the model (see the sketch below this entry).
- Parameters:
ds (QSPRDataset) – data set to fit this model on
monitor (FitMonitor) – monitor for the fitting process, if None, the base monitor is used
mode (EarlyStoppingMode) – early stopping mode for models that support early stopping, by default fit the ‘optimal’ number of epochs previously stopped at in model assessment on train or test set, to avoid the use of extra data for a validation set.
save_model (bool) – save the model to file
save_data (bool) – save the supplied dataset to file
kwargs – additional arguments to pass to fit
- Returns:
path to the saved model, if save_model is True
- Return type:
str
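The sketch referenced above, for a model that supports early stopping; the CrossValAssessor import path and its scoring argument are assumptions:

from qsprpred.models import CrossValAssessor  # import path assumed

# Assess first, so the optimal number of epochs is recorded per fold ...
CrossValAssessor(scoring="r2")(model, dataset)
# ... then refit on the whole data set for the averaged optimal number of epochs.
meta_path = model.fitDataset(dataset)  # mode defaults to EarlyStoppingMode.OPTIMAL
print(meta_path)  # path to the saved model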
- getParameters(new_parameters) dict | None
Get the model parameters combined with the given parameters.
If both the model and the given parameters contain the same key, the value from the given parameters is used.
- static handleInvalidsInPredictions(mols: list[str], predictions: ndarray | list[numpy.ndarray], failed_mask: ndarray) ndarray
Replace invalid predictions with None.
- Parameters:
mols (list[str]) – molecules for which the predictions were made
predictions (np.ndarray) – predictions made by the model
failed_mask (np.ndarray) – boolean mask of failed predictions
- Returns:
predictions with invalids replaced by None
- Return type:
np.ndarray
- initFromDataset(data: QSPRDataset | None)
- initRandomState(random_state)
Set random state if applicable. Defaults to the random state of the dataset if no random state is provided.
- Parameters:
random_state (int) – Random state to use for shuffling and other random operations.
- property isMultiTask: bool
Return if model is a multitask model, taken from the data set or deserialized from file if the model is loaded without data.
- Returns:
True if model is a multitask model
- Return type:
bool
- abstract loadEstimator(params: dict | None = None) object
Initialize estimator instance with the given parameters.
If params is None, the default parameters will be used.
- abstract loadEstimatorFromFile(params: dict | None = None) object
Load estimator instance from file and apply the given parameters.
- classmethod loadParamsGrid(fname: str, optim_type: str, model_types: str) ndarray
Load parameter grids for bayes or grid search parameter optimization from json file.
- Parameters:
fname (str) – file name of the json file containing the parameter grids
optim_type (str) – optimization type ('grid' or 'bayes')
model_types (str) – model type(s) to load the parameter grid for
- Returns:
array with three columns containing the model type, the optimization type (grid or bayes) and the parameter grid
- Return type:
np.ndarray
- property optimalEpochs: int | None
Return the optimal number of epochs for early stopping.
- Returns:
optimal number of epochs
- Return type:
int | None
- property outDir: str
Return the output directory of the model; the model files are stored in this directory ({baseDir}/{name}).
- Returns:
output directory of the model
- Return type:
str
- property outPrefix: str
Return output prefix of the model files.
The model files are stored with this prefix (i.e. {outPrefix}_meta.json).
- Returns:
output prefix of the model files
- Return type:
str
- abstract predict(X: DataFrame | ndarray | QSPRDataset, estimator: Any = None) ndarray
Make predictions for the given data matrix or QSPRDataset.
Note: convertToNumpy can be called here to convert the input data to np.ndarray format.
Note: if no estimator is given, the estimator instance of the model is used.
- Parameters:
X (pd.DataFrame, np.ndarray, QSPRDataset) – data matrix to predict
estimator (Any) – estimator instance to use for the predictions
- Returns:
2D array containing the predictions, where each row corresponds to a sample in the data and each column to a target property
- Return type:
np.ndarray
- predictDataset(dataset: QSPRDataset, use_probas: bool = False) ndarray | list[numpy.ndarray]
Make predictions for the given dataset.
- Parameters:
dataset – a QSPRDataset instance
use_probas – use probabilities if this is a classification model
- Returns:
an array of predictions or a list of arrays of predictions (for classification models with use_probas=True)
- Return type:
np.ndarray | list[np.ndarray]
- predictMols(mols: List[str | Mol], use_probas: bool = False, smiles_standardizer: str | callable = 'chembl', n_jobs: int = 1, fill_value: float = nan, use_applicability_domain: bool = False) ndarray | list[numpy.ndarray]
Make predictions for the given molecules.
- Parameters:
mols (List[str | Mol]) – list of SMILES strings
use_probas (bool) – use probabilities for classification models
smiles_standardizer – either chembl, old, or a partial function that reads and standardizes smiles.
n_jobs – Number of jobs to use for parallel processing.
fill_value – Value to use for missing values in the feature matrix.
use_applicability_domain – Use the applicability domain to indicate whether each molecule falls within the applicability domain of the model.
- Returns:
an array of predictions or a list of arrays of predictions (for classification models with use_probas=True); if use_applicability_domain is True, also a boolean mask (np.ndarray[bool]) indicating which molecules fall within the applicability domain of the model
- Return type:
np.ndarray | list[np.ndarray]
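A hedged usage sketch, assuming model is a trained instance; note how invalid molecules are handled:

smiles = ["CCO", "c1ccccc1O", "this-is-not-a-smiles"]
preds = model.predictMols(smiles, use_probas=False)
# Entries for molecules that failed standardization come back as None
# (see handleInvalidsInPredictions above).
for smi, pred in zip(smiles, preds):
    print(smi, pred)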
- abstract predictProba(X: DataFrame | ndarray | QSPRDataset, estimator: Any = None) list[numpy.ndarray]
Make predictions for the given data matrix or QSPRDataset, but use probabilities for classification models. Does not work with regression models.
Note: convertToNumpy can be called here to convert the input data to np.ndarray format.
Note: if no estimator is given, the estimator instance of the model is used.
- Parameters:
X (pd.DataFrame, np.ndarray, QSPRDataset) – data matrix to predict
estimator (Any) – estimator instance to use for the predictions
- Returns:
a list of 2D arrays containing the probabilities for each class, where each array corresponds to a target property, each row to a sample in the data and each column to a class
- Return type:
list[np.ndarray]
- save(save_estimator=False)
Save model to file.
- Parameters:
save_estimator (bool) – Explicitly save the estimator to file, if True. Note that some models may save the estimator by default even if this argument is False.
- Returns:
absolute path to the metafile of the saved model
str: absolute path to the saved estimator, if save_estimator is True
- Return type:
str
- abstract saveEstimator() str
Save the underlying estimator to file.
- Returns:
absolute path to the saved estimator
- Return type:
path (str)
- setParams(params: dict | None, reset_estimator: bool = True)
Set model parameters. The estimator is also updated with the new parameters if reset_estimator is True.
- abstract property supportsEarlyStopping: bool
Return if the model supports early stopping.
- Returns:
True if the model supports early stopping
- Return type:
bool
- property task: ModelTasks
Return the task of the model, taken from the data set or deserialized from file if the model is loaded without data.
- Returns:
task of the model
- Return type:
ModelTasks
qsprpred.extra.gpu.models.base_torch module
- class qsprpred.extra.gpu.models.base_torch.QSPRModelPyTorchGPU(base_dir: str, alg: Type | None = None, name: str | None = None, parameters: dict | None = None, autoload=True, random_state: int | None = None)[source]
Bases: QSPRModelGPU, ABC
Initialize a QSPR model instance.
If the model is loaded from file, the data set is not required. Note that the data set is required for fitting and optimization.
- Parameters:
base_dir (str) – base directory of the model, the model files are stored in a subdirectory
{baseDir}/{outDir}/
alg (Type) – estimator class
name (str) – name of the model
parameters (dict) – dictionary of algorithm specific parameters
autoload (bool) – if True, the estimator is loaded from the serialized file if it exists, otherwise a new instance of alg is created
random_state (int) – Random state to use for shuffling and other random operations.
- checkData(ds: QSPRDataset, exception: bool = True) bool
Check if the model has a data set.
- Parameters:
ds (QSPRDataset) – data set to check
exception (bool) – if true, an exception is raised if no data is set
- Returns:
True if data is set, False otherwise (if exception is False)
- Return type:
bool
- property classPath: str
Return the fully qualified class path of the model.
- Returns:
class path of the model
- Return type:
str
- cleanFiles()
Clean up the model files.
Removes the model directory and all its contents.
- convertToNumpy(X: DataFrame | ndarray | QSPRDataset, y: DataFrame | ndarray | QSPRDataset | None = None) tuple[numpy.ndarray, numpy.ndarray] | ndarray
Convert the given data matrix and target matrix to np.ndarray format.
- Parameters:
X (pd.DataFrame, np.ndarray, QSPRDataset) – data matrix
y (pd.DataFrame, np.ndarray, QSPRDataset) – target matrix
- Returns:
data matrix and/or target matrix in np.ndarray format
- createPredictionDatasetFromMols(mols: list[str | rdkit.Chem.rdchem.Mol], smiles_standardizer: str | Callable[[str], str] = 'chembl', n_jobs: int = 1, fill_value: float = nan) tuple[qsprpred.data.tables.qspr.QSPRDataset, numpy.ndarray]
Create a QSPRDataset instance from a list of SMILES strings.
- Parameters:
mols (list[str | Mol]) – list of SMILES strings or RDKit molecules
smiles_standardizer (str | Callable) – either chembl, old, or a partial function that reads and standardizes smiles
n_jobs (int) – number of jobs to use for parallel processing
fill_value (float) – value to use for missing values in the feature matrix
- Returns:
a tuple containing the QSPRDataset instance and a boolean mask indicating which molecules failed to be processed
- Return type:
tuple[QSPRDataset, np.ndarray]
- abstract fit(X: DataFrame | ndarray, y: DataFrame | ndarray, estimator: Any = None, mode: EarlyStoppingMode = EarlyStoppingMode.NOT_RECORDING, monitor: FitMonitor = None, **kwargs) Any | tuple[Any, int] | None
Fit the model to the given data matrix or QSPRDataset.
Note: convertToNumpy can be called here to convert the input data to np.ndarray format.
Note: if no estimator is given, the estimator instance of the model is used.
Note: if a model supports early stopping, the fit function should have the early_stopping decorator and the mode argument should be used to set the early stopping mode. If the model does not support early stopping, the mode argument is ignored.
- Parameters:
X (pd.DataFrame, np.ndarray) – data matrix to fit
y (pd.DataFrame, np.ndarray) – target matrix to fit
estimator (Any) – estimator instance to use for fitting
mode (EarlyStoppingMode) – early stopping mode
monitor (FitMonitor) – monitor for the fitting process, if None, the base monitor is used
kwargs – additional arguments to pass to the fit method of the estimator
- Returns:
fitted estimator instance and, in case of early stopping, the number of iterations after which the model stopped training
- Return type:
Any | tuple[Any, int]
- fitDataset(ds: QSPRDataset, monitor=None, mode=EarlyStoppingMode.OPTIMAL, save_model=True, save_data=False, **kwargs) str
Train model on the whole attached data set.
**IMPORTANT**: for models that support early stopping, CrossValAssessor should be run first, so that the average number of epochs from the cross-validation with early stopping can be used for fitting the model.
- Parameters:
ds (QSPRDataset) – data set to fit this model on
monitor (FitMonitor) – monitor for the fitting process, if None, the base monitor is used
mode (EarlyStoppingMode) – early stopping mode for models that support early stopping, by default fit the ‘optimal’ number of epochs previously stopped at in model assessment on train or test set, to avoid the use of extra data for a validation set.
save_model (bool) – save the model to file
save_data (bool) – save the supplied dataset to file
kwargs – additional arguments to pass to fit
- Returns:
path to the saved model, if save_model is True
- Return type:
str
- getParameters(new_parameters) dict | None
Get the model parameters combined with the given parameters.
If both the model and the given parameters contain the same key, the value from the given parameters is used.
- static handleInvalidsInPredictions(mols: list[str], predictions: ndarray | list[numpy.ndarray], failed_mask: ndarray) ndarray
Replace invalid predictions with None.
- Parameters:
mols (list[str]) – molecules for which the predictions were made
predictions (np.ndarray) – predictions made by the model
failed_mask (np.ndarray) – boolean mask of failed predictions
- Returns:
predictions with invalids replaced by None
- Return type:
np.ndarray
- initFromDataset(data: QSPRDataset | None)
- initRandomState(random_state)
Set random state if applicable. Defaults to the random state of the dataset if no random state is provided.
- Parameters:
random_state (int) – Random state to use for shuffling and other random operations.
- property isMultiTask: bool
Return if model is a multitask model, taken from the data set or deserialized from file if the model is loaded without data.
- Returns:
True if model is a multitask model
- Return type:
bool
- abstract loadEstimator(params: dict | None = None) object
Initialize estimator instance with the given parameters.
If params is None, the default parameters will be used.
- abstract loadEstimatorFromFile(params: dict | None = None) object
Load estimator instance from file and apply the given parameters.
- classmethod loadParamsGrid(fname: str, optim_type: str, model_types: str) ndarray
Load parameter grids for bayes or grid search parameter optimization from json file.
- Parameters:
fname (str) – file name of the json file containing the parameter grids
optim_type (str) – optimization type ('grid' or 'bayes')
model_types (str) – model type(s) to load the parameter grid for
- Returns:
array with three columns containing the model type, the optimization type (grid or bayes) and the parameter grid
- Return type:
np.ndarray
- property optimalEpochs: int | None
Return the optimal number of epochs for early stopping.
- Returns:
optimal number of epochs
- Return type:
int | None
- property outDir: str
Return the output directory of the model; the model files are stored in this directory ({baseDir}/{name}).
- Returns:
output directory of the model
- Return type:
str
- property outPrefix: str
Return output prefix of the model files.
The model files are stored with this prefix (i.e. {outPrefix}_meta.json).
- Returns:
output prefix of the model files
- Return type:
str
- abstract predict(X: DataFrame | ndarray | QSPRDataset, estimator: Any = None) ndarray
Make predictions for the given data matrix or QSPRDataset.
Note: convertToNumpy can be called here to convert the input data to np.ndarray format.
Note: if no estimator is given, the estimator instance of the model is used.
- Parameters:
X (pd.DataFrame, np.ndarray, QSPRDataset) – data matrix to predict
estimator (Any) – estimator instance to use for the predictions
- Returns:
2D array containing the predictions, where each row corresponds to a sample in the data and each column to a target property
- Return type:
np.ndarray
- predictDataset(dataset: QSPRDataset, use_probas: bool = False) ndarray | list[numpy.ndarray]
Make predictions for the given dataset.
- Parameters:
dataset – a QSPRDataset instance
use_probas – use probabilities if this is a classification model
- Returns:
an array of predictions or a list of arrays of predictions (for classification models with use_probas=True)
- Return type:
np.ndarray | list[np.ndarray]
- predictMols(mols: List[str | Mol], use_probas: bool = False, smiles_standardizer: str | callable = 'chembl', n_jobs: int = 1, fill_value: float = nan, use_applicability_domain: bool = False) ndarray | list[numpy.ndarray]
Make predictions for the given molecules.
- Parameters:
mols (List[str | Mol]) – list of SMILES strings
use_probas (bool) – use probabilities for classification models
smiles_standardizer – either chembl, old, or a partial function that reads and standardizes smiles.
n_jobs – Number of jobs to use for parallel processing.
fill_value – Value to use for missing values in the feature matrix.
use_applicability_domain – Use the applicability domain to indicate whether each molecule falls within the applicability domain of the model.
- Returns:
an array of predictions or a list of arrays of predictions (for classification models with use_probas=True); if use_applicability_domain is True, also a boolean mask (np.ndarray[bool]) indicating which molecules fall within the applicability domain of the model
- Return type:
np.ndarray | list[np.ndarray]
- abstract predictProba(X: DataFrame | ndarray | QSPRDataset, estimator: Any = None) list[numpy.ndarray]
Make predictions for the given data matrix or QSPRDataset, but use probabilities for classification models. Does not work with regression models.
Note: convertToNumpy can be called here to convert the input data to np.ndarray format.
Note: if no estimator is given, the estimator instance of the model is used.
- Parameters:
X (pd.DataFrame, np.ndarray, QSPRDataset) – data matrix to predict
estimator (Any) – estimator instance to use for the predictions
- Returns:
a list of 2D arrays containing the probabilities for each class, where each array corresponds to a target property, each row to a sample in the data and each column to a class
- Return type:
list[np.ndarray]
- save(save_estimator=False)
Save model to file.
- Parameters:
save_estimator (bool) – Explicitly save the estimator to file, if True. Note that some models may save the estimator by default even if this argument is False.
- Returns:
absolute path to the metafile of the saved model
str: absolute path to the saved estimator, if save_estimator is True
- Return type:
str
- abstract saveEstimator() str
Save the underlying estimator to file.
- Returns:
absolute path to the saved estimator
- Return type:
path (str)
- setParams(params: dict | None, reset_estimator: bool = True)
Set model parameters. The estimator is also updated with the new parameters if reset_estimator is True.
- abstract property supportsEarlyStopping: bool
Return if the model supports early stopping.
- Returns:
True if the model supports early stopping
- Return type:
bool
- property task: ModelTasks
Return the task of the model, taken from the data set or deserialized from file if the model is loaded without data.
- Returns:
task of the model
- Return type:
ModelTasks
qsprpred.extra.gpu.models.chemprop module
QSPRpred wrapper for Chemprop models.
- class qsprpred.extra.gpu.models.chemprop.ChempropModel(base_dir: str, name: str | None = None, parameters: dict | None = None, autoload=True, random_state: int | None = None, quiet_logger: bool = True)[source]
Bases:
QSPRModelPyTorchGPU
QSPRpred implementation of Chemprop model.
- Variables:
name (str) – name of the model
alg (Type) – estimator class
parameters (dict) – dictionary of algorithm specific parameters
estimator (Any) – the underlying estimator instance of the type specified in QSPRModel.alg, if QSPRModel.fit or optimization was performed
featureCalculators (MoleculeDescriptorsCalculator) – feature calculator instance taken from the data set or deserialized from file if the model is loaded without data
featureStandardizer (SKLearnStandardizer) – feature standardizer instance taken from the data set or deserialized from file if the model is loaded without data
baseDir (str) – base directory of the model, the model files are stored in a subdirectory
{baseDir}/{outDir}/
Initialize a Chemprop instance.
If the model is loaded from file, the data set is not required. Note that the data set is required for fitting and optimization.
- Parameters:
base_dir (str) – base directory of the model, the model files are stored in a subdirectory
{baseDir}/{outDir}/
name (str) – name of the model
parameters (dict) – dictionary of algorithm specific parameters
autoload (bool) – if True, the estimator is loaded from the serialized file if it exists, otherwise a new instance of alg is created
random_state (int) – Random state to use for shuffling and other random operations.
quiet_logger (bool) – if True, the chemprop logger is set to quiet mode (no debug messages)
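A minimal construction sketch; the epochs key is taken from Chemprop's TrainArgs and is an assumption for illustration:

from qsprpred.extra.gpu.models.chemprop import ChempropModel

model = ChempropModel(
    base_dir="models",          # files end up under {base_dir}/{name}
    name="ChempropExample",
    parameters={"epochs": 30},  # assumed Chemprop TrainArgs key
    random_state=42,
    quiet_logger=True,          # silence chemprop debug messages
)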
- checkArgs(args: TrainArgs | dict)[source]
Check if the given arguments are valid.
- Parameters:
args (chemprop.args.TrainArgs, dict) – arguments to check
- checkData(ds: QSPRDataset, exception: bool = True) bool
Check if the model has a data set.
- Parameters:
ds (QSPRDataset) – data set to check
exception (bool) – if true, an exception is raised if no data is set
- Returns:
True if data is set, False otherwise (if exception is False)
- Return type:
bool
- property classPath: str
Return the fully qualified class path of the model.
- Returns:
class path of the model
- Return type:
str
- cleanFiles()[source]
Clean up the model files.
Removes the model directory and all its contents. Handles closing the chemprop logger as well.
- convertToMoleculeDataset(X: DataFrame | ndarray | QSPRDataset, y: DataFrame | ndarray | QSPRDataset | None = None) tuple[numpy.ndarray, numpy.ndarray] | ndarray [source]
Convert the given data matrix and target matrix to chemprop Molecule Dataset.
- Parameters:
X (pd.DataFrame, np.ndarray, QSPRDataset) – data matrix
y (pd.DataFrame, np.ndarray, QSPRDataset) – target matrix
- Returns:
data matrix and/or target matrix converted to a chemprop MoleculeDataset
- convertToNumpy(X: DataFrame | ndarray | QSPRDataset, y: DataFrame | ndarray | QSPRDataset | None = None) tuple[numpy.ndarray, numpy.ndarray] | ndarray
Convert the given data matrix and target matrix to np.ndarray format.
- Parameters:
X (pd.DataFrame, np.ndarray, QSPRDataset) – data matrix
y (pd.DataFrame, np.ndarray, QSPRDataset) – target matrix
- Returns:
data matrix and/or target matrix in np.ndarray format
- createPredictionDatasetFromMols(mols: list[str | rdkit.Chem.rdchem.Mol], smiles_standardizer: str | Callable[[str], str] = 'chembl', n_jobs: int = 1, fill_value: float = nan) tuple[qsprpred.data.tables.qspr.QSPRDataset, numpy.ndarray]
Create a QSPRDataset instance from a list of SMILES strings.
- Parameters:
mols (list[str | Mol]) – list of SMILES strings or RDKit molecules
smiles_standardizer (str | Callable) – either chembl, old, or a partial function that reads and standardizes smiles
n_jobs (int) – number of jobs to use for parallel processing
fill_value (float) – value to use for missing values in the feature matrix
- Returns:
a tuple containing the QSPRDataset instance and a boolean mask indicating which molecules failed to be processed
- Return type:
tuple[QSPRDataset, np.ndarray]
- fit(X: DataFrame | ndarray | QSPRDataset, y: DataFrame | ndarray | QSPRDataset, estimator: Any | None = None, mode: EarlyStoppingMode | None = None, split: DataSplit = None, monitor: FitMonitor = None, **kwargs) Any
Wrapper for fit method of models that support early stopping.
- Parameters:
X (pd.DataFrame, np.ndarray, QSPRDataset) – data matrix to fit
y (pd.DataFrame, np.ndarray, QSPRDataset) – target matrix to fit
estimator (Any) – estimator instance to use for fitting
mode (EarlyStoppingMode) – early stopping mode
split (DataSplit) – data split to use for early stopping, if None, a ShuffleSplit with 10% validation set size is used
monitor (FitMonitor) – monitor to use for fitting, if None, a BaseMonitor is used
kwargs (dict) – additional keyword arguments for the estimator’s fit method
- Returns:
fitted estimator instance
- Return type:
Any
- fitDataset(ds: QSPRDataset, monitor=None, mode=EarlyStoppingMode.OPTIMAL, save_model=True, save_data=False, **kwargs) str
Train model on the whole attached data set.
**IMPORTANT**: for models that support early stopping, CrossValAssessor should be run first, so that the average number of epochs from the cross-validation with early stopping can be used for fitting the model.
- Parameters:
ds (QSPRDataset) – data set to fit this model on
monitor (FitMonitor) – monitor for the fitting process, if None, the base monitor is used
mode (EarlyStoppingMode) – early stopping mode for models that support early stopping, by default fit the ‘optimal’ number of epochs previously stopped at in model assessment on train or test set, to avoid the use of extra data for a validation set.
save_model (bool) – save the model to file
save_data (bool) – save the supplied dataset to file
kwargs – additional arguments to pass to fit
- Returns:
path to the saved model, if save_model is True
- Return type:
str
- classmethod fromFile(filename: str) ChempropModel [source]
Initialize a new instance from a JSON file.
- static getAvailableParameters()[source]
Return a dictionary of available parameters for the algorithm.
Definitions and default values can be found on the Chemprop GitHub (https://github.com/chemprop/chemprop/blob/master/chemprop/args.py).
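For instance, to inspect the tunable options and their defaults (a sketch; the exact layout of the returned dictionary is an assumption):

from qsprpred.extra.gpu.models.chemprop import ChempropModel

available = ChempropModel.getAvailableParameters()
for name, default in sorted(available.items()):
    print(f"{name}: {default!r}")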
- getParameters(new_parameters) dict | None
Get the model parameters combined with the given parameters.
If both the model and the given parameters contain the same key, the value from the given parameters is used.
- static handleInvalidsInPredictions(mols: list[str], predictions: ndarray | list[numpy.ndarray], failed_mask: ndarray) ndarray
Replace invalid predictions with None.
- Parameters:
mols (list[str]) – molecules for which the predictions were made
predictions (np.ndarray) – predictions made by the model
failed_mask (np.ndarray) – boolean mask of failed predictions
- Returns:
predictions with invalids replaced by None
- Return type:
np.ndarray
- initFromDataset(data: QSPRDataset | None)
- initRandomState(random_state)
Set random state if applicable. Defaults to the random state of the dataset if no random state is provided.
- Parameters:
random_state (int) – Random state to use for shuffling and other random operations.
- property isMultiTask: bool
Return if model is a multitask model, taken from the data set or deserialized from file if the model is loaded without data.
- Returns:
True if model is a multitask model
- Return type:
bool
- loadEstimator(params: dict | None = None) object [source]
Initialize estimator instance with the given parameters.
If params is None, the default parameters will be used.
- loadEstimatorFromFile(params: dict | None = None, fallback_load=True) object [source]
Load estimator instance from file and apply the given parameters.
- classmethod loadParamsGrid(fname: str, optim_type: str, model_types: str) ndarray
Load parameter grids for bayes or grid search parameter optimization from json file.
- Parameters:
fname (str) – file name of the json file containing the parameter grids
optim_type (str) – optimization type ('grid' or 'bayes')
model_types (str) – model type(s) to load the parameter grid for
- Returns:
array with three columns containing the model type, the optimization type (grid or bayes) and the parameter grid
- Return type:
np.ndarray
- property optimalEpochs: int | None
Return the optimal number of epochs for early stopping.
- Returns:
optimal number of epochs
- Return type:
int | None
- property outDir: str
Return the output directory of the model; the model files are stored in this directory ({baseDir}/{name}).
- Returns:
output directory of the model
- Return type:
str
- property outPrefix: str
Return output prefix of the model files.
The model files are stored with this prefix (i.e. {outPrefix}_meta.json).
- Returns:
output prefix of the model files
- Return type:
str
- predict(X: DataFrame | ndarray | QSPRDataset, estimator: ChempropMoleculeModel | None = None) ndarray [source]
Make predictions for the given data matrix or QSPRDataset.
- Parameters:
X (pd.DataFrame, np.ndarray, QSPRDataset) – data matrix to predict
estimator (MoleculeModel) – estimator instance to use for the predictions
- Returns:
2D array containing the predictions, where each row corresponds to a sample in the data and each column to a target property
- Return type:
np.ndarray
- predictDataset(dataset: QSPRDataset, use_probas: bool = False) ndarray | list[numpy.ndarray]
Make predictions for the given dataset.
- Parameters:
dataset – a QSPRDataset instance
use_probas – use probabilities if this is a classification model
- Returns:
an array of predictions or a list of arrays of predictions (for classification models with use_probas=True)
- Return type:
np.ndarray | list[np.ndarray]
- predictMols(mols: List[str | Mol], use_probas: bool = False, smiles_standardizer: str | callable = 'chembl', n_jobs: int = 1, fill_value: float = nan, use_applicability_domain: bool = False) ndarray | list[numpy.ndarray]
Make predictions for the given molecules.
- Parameters:
mols (List[str | Mol]) – list of SMILES strings
use_probas (bool) – use probabilities for classification models
smiles_standardizer – either chembl, old, or a partial function that reads and standardizes smiles.
n_jobs – Number of jobs to use for parallel processing.
fill_value – Value to use for missing values in the feature matrix.
use_applicability_domain – Use the applicability domain to indicate whether each molecule falls within the applicability domain of the model.
- Returns:
an array of predictions or a list of arrays of predictions (for classification models with use_probas=True); if use_applicability_domain is True, also a boolean mask (np.ndarray[bool]) indicating which molecules fall within the applicability domain of the model
- Return type:
np.ndarray | list[np.ndarray]
- predictProba(X: DataFrame | ndarray | QSPRDataset, estimator: ChempropMoleculeModel | None = None) list[numpy.ndarray] [source]
Make predictions for the given data matrix or QSPRDataset, but use probabilities for classification models. In case of regression models, this method is equivalent to predict.
- Parameters:
X (pd.DataFrame, np.ndarray, QSPRDataset) – data matrix to predict
estimator (MoleculeModel, None) – estimator instance to use for the predictions
- Returns:
a list of 2D arrays containing the probabilities for each class, where each array corresponds to a target property, each row to a sample in the data and each column to a class
- Return type:
list[np.ndarray]
- save(save_estimator=False)
Save model to file.
- Parameters:
save_estimator (bool) – Explicitly save the estimator to file, if True. Note that some models may save the estimator by default even if this argument is False.
- Returns:
absolute path to the metafile of the saved model
str: absolute path to the saved estimator, if save_estimator is True
- Return type:
str
- saveEstimator() str [source]
Save the underlying estimator to file.
- Returns:
path to the saved estimator
- Return type:
path (str)
- setParams(params: dict | None, reset_estimator: bool = True)
Set model parameters. The estimator is also updated with the new parameters if reset_estimator is True.
- supportsEarlyStopping() bool [source]
Return if the model supports early stopping.
- Returns:
True if the model supports early stopping
- Return type:
bool
- property task: ModelTasks
Return the task of the model, taken from the data set or deserialized from file if the model is loaded without data.
- Returns:
task of the model
- Return type:
ModelTasks
- class qsprpred.extra.gpu.models.chemprop.ChempropMoleculeModel(args: TrainArgs, scaler: StandardScaler | None = None)[source]
Bases:
MoleculeModel
Wrapper for chemprop.models.MoleculeModel.
- Variables:
args (chemprop.args.TrainArgs) – arguments for training the model
scaler (chemprop.data.scaler.StandardScaler) – scaler for scaling the targets
Initialize a MoleculeModel instance.
- Parameters:
args (chemprop.args.TrainArgs) – arguments for training the model
scaler (chemprop.data.scaler.StandardScaler) – scaler for scaling the targets
- T_destination = ~T_destination
- add_module(name: str, module: Module | None) None
Add a child module to the current module.
The module can be accessed as an attribute using the given name.
- Parameters:
name (str) – name of the child module. The child module can be accessed from this module using the given name
module (Module) – child module to be added to the module.
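For example:
Example:
>>> import torch.nn as nn
>>> block = nn.Module()
>>> block.add_module("proj", nn.Linear(8, 4))
>>> block.proj
Linear(in_features=8, out_features=4, bias=True)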
- apply(fn: Callable[[Module], None]) T
Apply fn recursively to every submodule (as returned by .children()) as well as self.
Typical use includes initializing the parameters of a model (see also torch.nn.init).
- Parameters:
fn (Module -> None) – function to be applied to each submodule
- Returns:
self
- Return type:
Module
Example:
>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
- bfloat16() T
Casts all floating point parameters and buffers to bfloat16 datatype.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- buffers(recurse: bool = True) Iterator[Tensor]
Return an iterator over module buffers.
- Parameters:
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.
- Yields:
torch.Tensor – module buffer
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
- classmethod cast(obj: MoleculeModel) ChempropMoleculeModel [source]
Cast a chemprop.models.MoleculeModel instance to a MoleculeModel instance.
- Parameters:
obj (chemprop.models.MoleculeModel) – instance to cast
- Returns:
casted MoleculeModel instance
- Return type:
MoleculeModel
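A hedged sketch; raw_model stands for an already-constructed chemprop.models.MoleculeModel and is an assumption of this example:

wrapped = ChempropMoleculeModel.cast(raw_model)  # carries the wrapper's args/scaler attributes
print(type(wrapped).__name__)  # ChempropMoleculeModel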
- children() Iterator[Module]
Return an iterator over immediate children modules.
- Yields:
Module – a child module
- compile(*args, **kwargs)
Compile this Module's forward using torch.compile().
This Module's __call__ method is compiled and all arguments are passed as-is to torch.compile().
See torch.compile() for details on the arguments for this function.
- cpu() T
Move all model parameters and buffers to the CPU.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- create_encoder(args: TrainArgs) None
Creates the message passing encoder for the model.
- Parameters:
args – A
TrainArgs
object containing model arguments.
- create_ffn(args: TrainArgs) None
Creates the feed-forward layers for the model.
- Parameters:
args – A
TrainArgs
object containing model arguments.
- cuda(device: int | device | None = None) T
Move all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized.
Note
This method modifies the module in-place.
- Parameters:
device (int, optional) – if specified, all parameters will be copied to that device
- Returns:
self
- Return type:
Module
- double() T
Casts all floating point parameters and buffers to double datatype.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- eval() T
Set the module in evaluation mode.
This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
This is equivalent to self.train(False).
See the PyTorch documentation on locally disabling gradient computation for a comparison between .eval() and several similar mechanisms that may be confused with it.
- Returns:
self
- Return type:
Module
- extra_repr() str
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- fingerprint(batch: List[List[str]] | List[List[Mol]] | List[List[Tuple[Mol, Mol]]] | List[BatchMolGraph], features_batch: List[ndarray] = None, atom_descriptors_batch: List[ndarray] = None, atom_features_batch: List[ndarray] = None, bond_descriptors_batch: List[ndarray] = None, bond_features_batch: List[ndarray] = None, fingerprint_type: str = 'MPN') Tensor
Encodes the latent representations of the input molecules from intermediate stages of the model.
- Parameters:
batch – A list of list of SMILES, a list of list of RDKit molecules, or a list of BatchMolGraph. The outer list or BatchMolGraph is of length num_molecules (number of datapoints in batch), the inner list is of length number_of_molecules (number of molecules per datapoint).
features_batch – A list of numpy arrays containing additional features.
atom_descriptors_batch – A list of numpy arrays containing additional atom descriptors.
atom_features_batch – A list of numpy arrays containing additional atom features.
bond_descriptors_batch – A list of numpy arrays containing additional bond descriptors.
bond_features_batch – A list of numpy arrays containing additional bond features.
fingerprint_type – The choice of which type of latent representation to return as the molecular fingerprint. Currently supported: MPN for the output of the MPNN portion of the model, or last_FFN for the input to the final readout layer.
- Returns:
The latent fingerprint vectors.
- float() T
Casts all floating point parameters and buffers to float datatype.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- forward(batch: List[List[str]] | List[List[Mol]] | List[List[Tuple[Mol, Mol]]] | List[BatchMolGraph], features_batch: List[ndarray] = None, atom_descriptors_batch: List[ndarray] = None, atom_features_batch: List[ndarray] = None, bond_descriptors_batch: List[ndarray] = None, bond_features_batch: List[ndarray] = None, constraints_batch: List[Tensor] = None, bond_types_batch: List[Tensor] = None) Tensor
Runs the MoleculeModel on input.
- Parameters:
batch – A list of list of SMILES, a list of list of RDKit molecules, or a list of BatchMolGraph. The outer list or BatchMolGraph is of length num_molecules (number of datapoints in batch), the inner list is of length number_of_molecules (number of molecules per datapoint).
features_batch – A list of numpy arrays containing additional features.
atom_descriptors_batch – A list of numpy arrays containing additional atom descriptors.
atom_features_batch – A list of numpy arrays containing additional atom features.
bond_descriptors_batch – A list of numpy arrays containing additional bond descriptors.
bond_features_batch – A list of numpy arrays containing additional bond features.
constraints_batch – A list of PyTorch tensors which applies constraint on atomic/bond properties.
bond_types_batch – A list of PyTorch tensors storing bond types of each bond determined by RDKit molecules.
- Returns:
The output of the
MoleculeModel
, containing a list of property predictions.
- static getTrainArgs(args: dict | None, task: ModelTasks) TrainArgs [source]
Get a chemprop.args.TrainArgs instance from a dictionary.
- Parameters:
args (dict) – dictionary of arguments
task (ModelTasks) – task type
- Returns:
arguments for training the model
- Return type:
chemprop.args.TrainArgs
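A hedged sketch combining getTrainArgs with the wrapper constructor; the ModelTasks import path and the epochs key are assumptions:

from qsprpred.extra.gpu.models.chemprop import ChempropMoleculeModel
from qsprpred.tasks import ModelTasks  # import path assumed

args = ChempropMoleculeModel.getTrainArgs({"epochs": 30}, ModelTasks.REGRESSION)
net = ChempropMoleculeModel(args)  # scaler defaults to None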
- get_buffer(target: str) Tensor
Return the buffer given by
target
if it exists, otherwise throw an error.
See the docstring for get_submodule for a more detailed explanation of this method's functionality as well as how to correctly specify target.
- Parameters:
target – The fully-qualified string name of the buffer to look for. (See get_submodule for how to specify a fully-qualified string.)
- Returns:
The buffer referenced by
target
- Return type:
torch.Tensor
- Raises:
AttributeError – If the target string references an invalid path or resolves to something that is not a buffer
- get_extra_state() Any
Return any extra state to include in the module’s state_dict.
Implement this and a corresponding set_extra_state() for your module if you need to store extra state. This function is called when building the module's state_dict().
Note that extra state should be picklable to ensure working serialization of the state_dict. We only provide backwards compatibility guarantees for serializing Tensors; other objects may break backwards compatibility if their serialized pickled form changes.
- Returns:
Any extra state to store in the module’s state_dict
- Return type:
object
- get_parameter(target: str) Parameter
Return the parameter given by
target
if it exists, otherwise throw an error.
See the docstring for get_submodule for a more detailed explanation of this method's functionality as well as how to correctly specify target.
- Parameters:
target – The fully-qualified string name of the Parameter to look for. (See get_submodule for how to specify a fully-qualified string.)
- Returns:
The Parameter referenced by
target
- Return type:
torch.nn.Parameter
- Raises:
AttributeError – If the target string references an invalid path or resolves to something that is not an
nn.Parameter
- get_submodule(target: str) Module
Return the submodule given by
target
if it exists, otherwise throw an error.
For example, let's say you have an nn.Module A that looks like this:
A(
    (net_b): Module(
        (net_c): Module(
            (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
        )
        (linear): Linear(in_features=100, out_features=200, bias=True)
    )
)
(The diagram shows an nn.Module A. A has a nested submodule net_b, which itself has two submodules net_c and linear. net_c then has a submodule conv.)
To check whether or not we have the linear submodule, we would call get_submodule("net_b.linear"). To check whether we have the conv submodule, we would call get_submodule("net_b.net_c.conv").
The runtime of get_submodule is bounded by the degree of module nesting in target. A query against named_modules achieves the same result, but it is O(N) in the number of transitive modules. So, for a simple check to see if some submodule exists, get_submodule should always be used.
- Parameters:
target – The fully-qualified string name of the submodule to look for. (See above example for how to specify a fully-qualified string.)
- Returns:
The submodule referenced by
target
- Return type:
torch.nn.Module
- Raises:
AttributeError – If the target string references an invalid path or resolves to something that is not an
nn.Module
- half() T
Casts all floating point parameters and buffers to half datatype.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- ipu(device: int | device | None = None) T
Move all model parameters and buffers to the IPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on IPU while being optimized.
Note
This method modifies the module in-place.
- Parameters:
device (int, optional) – if specified, all parameters will be copied to that device
- Returns:
self
- Return type:
Module
- load_state_dict(state_dict: Mapping[str, Any], strict: bool = True, assign: bool = False)
Copy parameters and buffers from
state_dict
into this module and its descendants.
If strict is True, then the keys of state_dict must exactly match the keys returned by this module's state_dict() function.
Warning
If assign is True the optimizer must be created after the call to load_state_dict unless get_swap_module_params_on_conversion() is True.
- Parameters:
state_dict (dict) – a dict containing parameters and persistent buffers.
strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function. Default: True
assign (bool, optional) – When False, the properties of the tensors in the current module are preserved, while when True, the properties of the Tensors in the state dict are preserved. The only exception is the requires_grad field, for which the value from the module is preserved. Default: False
- Returns:
- missing_keys is a list of str containing any keys that are expected by this module but missing from the provided state_dict.
- unexpected_keys is a list of str containing the keys that are not expected by this module but present in the provided state_dict.
- Return type:
NamedTuple with missing_keys and unexpected_keys fields
Note
If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.
- modules() Iterator[Module]
Return an iterator over all modules in the network.
- Yields:
Module – a module in the network
Note
Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)
0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
- named_buffers(prefix: str = '', recurse: bool = True, remove_duplicate: bool = True) Iterator[Tuple[str, Tensor]]
Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- Parameters:
prefix (str) – prefix to prepend to all buffer names.
recurse (bool, optional) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Defaults to True.
remove_duplicate (bool, optional) – whether to remove the duplicated buffers in the result. Defaults to True.
- Yields:
(str, torch.Tensor) – Tuple containing the name and buffer
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, buf in self.named_buffers():
>>>     if name in ['running_var']:
>>>         print(buf.size())
- named_children() Iterator[Tuple[str, Module]]
Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- Yields:
(str, Module) – Tuple containing a name and child module
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
- named_modules(memo: Set[Module] | None = None, prefix: str = '', remove_duplicate: bool = True)
Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- Parameters:
memo – a memo to store the set of modules already added to the result
prefix – a prefix that will be added to the name of the module
remove_duplicate – whether to remove the duplicated module instances in the result or not
- Yields:
(str, Module) – Tuple of name and module
Note
Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
...     print(idx, '->', m)
0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
- named_parameters(prefix: str = '', recurse: bool = True, remove_duplicate: bool = True) Iterator[Tuple[str, Parameter]]
Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- Parameters:
prefix (str) – prefix to prepend to all parameter names.
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
remove_duplicate (bool, optional) – whether to remove the duplicated parameters in the result. Defaults to True.
- Yields:
(str, Parameter) – Tuple containing the name and parameter
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())
- parameters(recurse: bool = True) Iterator[Parameter]
Return an iterator over module parameters.
This is typically passed to an optimizer.
- Parameters:
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
- Yields:
Parameter – module parameter
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
- register_backward_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor]) RemovableHandle
Register a backward hook on the module.
This function is deprecated in favor of
register_full_backward_hook()
and the behavior of this function will change in future versions.
- Returns:
a handle that can be used to remove the added hook by calling
handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
- register_buffer(name: str, tensor: Tensor | None, persistent: bool = True) None
Add a buffer to the module.
This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm's running_mean is not a parameter, but is part of the module's state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module's state_dict.
Buffers can be accessed as attributes using given names.
- Parameters:
name (str) – name of the buffer. The buffer can be accessed from this module using the given name
tensor (Tensor or None) – buffer to be registered. If None, then operations that run on buffers, such as cuda, are ignored. If None, the buffer is not included in the module's state_dict.
persistent (bool) – whether the buffer is part of this module's state_dict.
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> self.register_buffer('running_mean', torch.zeros(num_features))
- register_forward_hook(hook: Callable[[T, Tuple[Any, ...], Any], Any | None] | Callable[[T, Tuple[Any, ...], Dict[str, Any], Any], Any | None], *, prepend: bool = False, with_kwargs: bool = False, always_call: bool = False) RemovableHandle
Register a forward hook on the module.
The hook will be called every time after forward() has computed an output.
If with_kwargs is False or not specified, the input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the forward. The hook can modify the output. It can modify the input inplace but it will not have effect on forward since this is called after forward() is called. The hook should have the following signature:
hook(module, args, output) -> None or modified output
If with_kwargs is True, the forward hook will be passed the kwargs given to the forward function and be expected to return the output possibly modified. The hook should have the following signature:
hook(module, args, kwargs, output) -> None or modified output
- Parameters:
hook (Callable) – The user defined hook to be registered.
prepend (bool) – If True, the provided hook will be fired before all existing forward hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing forward hooks on this torch.nn.modules.Module. Note that global forward hooks registered with register_module_forward_hook() will fire before all hooks registered by this method. Default: False
with_kwargs (bool) – If True, the hook will be passed the kwargs given to the forward function. Default: False
always_call (bool) – If True the hook will be run regardless of whether an exception is raised while calling the Module. Default: False
- Returns:
a handle that can be used to remove the added hook by calling
handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
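A short runnable example of registering and removing a forward hook:
Example:
>>> import torch
>>> import torch.nn as nn
>>> layer = nn.Linear(4, 2)
>>> def log_shape(module, args, output):
...     print(tuple(output.shape))  # fires after forward() has run
>>> handle = layer.register_forward_hook(log_shape)
>>> _ = layer(torch.randn(3, 4))
(3, 2)
>>> handle.remove()  # detach the hook when no longer needed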
- register_forward_pre_hook(hook: Callable[[T, Tuple[Any, ...]], Any | None] | Callable[[T, Tuple[Any, ...], Dict[str, Any]], Tuple[Any, Dict[str, Any]] | None], *, prepend: bool = False, with_kwargs: bool = False) RemovableHandle
Register a forward pre-hook on the module.
The hook will be called every time before forward() is invoked.
If with_kwargs is false or not specified, the input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the forward. The hook can modify the input. The user can either return a tuple or a single modified value in the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple). The hook should have the following signature:
hook(module, args) -> None or modified input
If with_kwargs is true, the forward pre-hook will be passed the kwargs given to the forward function. And if the hook modifies the input, both the args and kwargs should be returned. The hook should have the following signature:
hook(module, args, kwargs) -> None or a tuple of modified input and kwargs
- Parameters:
hook (Callable) – The user defined hook to be registered.
prepend (bool) – If true, the provided hook will be fired before all existing forward_pre hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing forward_pre hooks on this torch.nn.modules.Module. Note that global forward_pre hooks registered with register_module_forward_pre_hook() will fire before all hooks registered by this method. Default: False
with_kwargs (bool) – If true, the hook will be passed the kwargs given to the forward function. Default: False
- Returns:
a handle that can be used to remove the added hook by calling
handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
- register_full_backward_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor], prepend: bool = False) RemovableHandle
Register a backward hook on the module.
The hook will be called every time the gradients with respect to a module are computed, i.e. the hook will execute if and only if the gradients with respect to module outputs are computed. The hook should have the following signature:
hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module's forward function.
Warning
Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error.
- Parameters:
hook (Callable) – The user-defined hook to be registered.
prepend (bool) – If true, the provided hook will be fired before all existing backward hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing backward hooks on this torch.nn.modules.Module. Note that global backward hooks registered with register_module_full_backward_hook() will fire before all hooks registered by this method.
- Returns:
a handle that can be used to remove the added hook by calling
handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
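A typical use is monitoring gradient magnitudes during training. A minimal sketch (the layer and shapes are hypothetical):
>>> import torch
>>> import torch.nn as nn
>>> def log_grad_norm(module, grad_input, grad_output):
>>>     # grad_output holds gradients w.r.t. the module outputs; do not modify in place.
>>>     print("grad norm:", grad_output[0].norm().item())
>>> layer = nn.Linear(4, 2)
>>> handle = layer.register_full_backward_hook(log_grad_norm)
>>> layer(torch.randn(3, 4)).sum().backward()  # the hook fires during backward
>>> handle.remove()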
- register_full_backward_pre_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor], prepend: bool = False) RemovableHandle
Register a backward pre-hook on the module.
The hook will be called every time the gradients for the module are computed. The hook should have the following signature:
hook(module, grad_output) -> tuple[Tensor] or None
The grad_output is a tuple. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the output that will be used in place of grad_output in subsequent computations. Entries in grad_output will be None for all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module's forward function.
Warning
Modifying inputs inplace is not allowed when using backward hooks and will raise an error.
- Parameters:
hook (Callable) – The user-defined hook to be registered.
prepend (bool) – If true, the provided hook will be fired before all existing backward_pre hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing backward_pre hooks on this torch.nn.modules.Module. Note that global backward_pre hooks registered with register_module_full_backward_pre_hook() will fire before all hooks registered by this method.
- Returns:
a handle that can be used to remove the added hook by calling
handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
- register_load_state_dict_post_hook(hook)
Register a post hook to be run after module's load_state_dict is called.
- It should have the following signature:
hook(module, incompatible_keys) -> None
The module argument is the current module that this hook is registered on, and the incompatible_keys argument is a NamedTuple consisting of attributes missing_keys and unexpected_keys. missing_keys is a list of str containing the missing keys and unexpected_keys is a list of str containing the unexpected keys.
The given incompatible_keys can be modified inplace if needed.
Note that the checks performed when calling load_state_dict() with strict=True are affected by modifications the hook makes to missing_keys or unexpected_keys, as expected. Additions to either set of keys will result in an error being thrown when strict=True, and clearing out both missing and unexpected keys will avoid an error.
- Returns:
a handle that can be used to remove the added hook by calling
handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
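For instance, a post hook can silence known, harmless missing keys after loading a partial checkpoint. A minimal sketch (the key name is hypothetical):
>>> # xdoctest: +SKIP("undefined vars")
>>> def ignore_known_keys(module, incompatible_keys):
>>>     # Clearing entries in place avoids an error under strict=True.
>>>     if "fc3.bias" in incompatible_keys.missing_keys:
>>>         incompatible_keys.missing_keys.remove("fc3.bias")
>>> handle = model.register_load_state_dict_post_hook(ignore_known_keys)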
- register_module(name: str, module: Module | None) None
Alias for add_module().
- register_parameter(name: str, param: Parameter | None) None
Add a parameter to the module.
The parameter can be accessed as an attribute using given name.
- Parameters:
name (str) – name of the parameter. The parameter can be accessed from this module using the given name
param (Parameter or None) – parameter to be added to the module. If None, then operations that run on parameters, such as cuda, are ignored. If None, the parameter is not included in the module's state_dict.
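For example, a learnable scalar can be registered so that it shows up in parameters() and in the state_dict. A minimal sketch (the class and shapes are hypothetical):
>>> import torch
>>> import torch.nn as nn
>>> class ScaledLinear(nn.Module):
>>>     def __init__(self):
>>>         super().__init__()
>>>         self.linear = nn.Linear(4, 4)
>>>         # Registered parameters are picked up by optimizers automatically.
>>>         self.register_parameter("scale", nn.Parameter(torch.ones(1)))
>>>     def forward(self, x):
>>>         return self.scale * self.linear(x)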
- register_state_dict_pre_hook(hook)
Register a pre-hook for the state_dict() method.
These hooks will be called with arguments: self, prefix, and keep_vars before calling state_dict on self. The registered hooks can be used to perform pre-processing before the state_dict call is made.
- requires_grad_(requires_grad: bool = True) T
Change if autograd should record operations on parameters in this module.
This method sets the parameters' requires_grad attributes in-place.
This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training).
See the PyTorch documentation on locally disabling gradient computation for a comparison between .requires_grad_() and several similar mechanisms that may be confused with it.
- Parameters:
requires_grad (bool) – whether autograd should record operations on parameters in this module. Default: True.
- Returns:
self
- Return type:
Module
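For example, freezing a pretrained backbone while training a new head. A minimal sketch (the submodule names backbone and head are hypothetical):
>>> # xdoctest: +SKIP("undefined vars")
>>> model.backbone.requires_grad_(False)  # freeze: no gradients recorded
>>> model.head.requires_grad_(True)       # keep training the head
>>> optimizer = torch.optim.Adam(p for p in model.parameters() if p.requires_grad)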
- set_extra_state(state: Any) None
Set extra state contained in the loaded state_dict.
This function is called from load_state_dict() to handle any extra state found within the state_dict. Implement this function and a corresponding get_extra_state() for your module if you need to store extra state within its state_dict.
- Parameters:
state (dict) – Extra state from the state_dict
- share_memory() T
See torch.Tensor.share_memory_().
- state_dict(*args, destination=None, prefix='', keep_vars=False)
Return a dictionary containing references to the whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.
Note
The returned object is a shallow copy. It contains references to the module's parameters and buffers.
Warning
Currently state_dict() also accepts positional arguments for destination, prefix and keep_vars in order. However, this is being deprecated and keyword arguments will be enforced in future releases.
Warning
Please avoid the use of argument destination as it is not designed for end-users.
- Parameters:
destination (dict, optional) – If provided, the state of module will be updated into the dict and the same object is returned. Otherwise, an OrderedDict will be created and returned. Default: None.
prefix (str, optional) – a prefix added to parameter and buffer names to compose the keys in state_dict. Default: ''.
keep_vars (bool, optional) – by default the Tensors returned in the state dict are detached from autograd. If it's set to True, detaching will not be performed. Default: False.
- Returns:
a dictionary containing a whole state of the module
- Return type:
dict
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> module.state_dict().keys()
['bias', 'weight']
- to(*args, **kwargs)
Move and/or cast the parameters and buffers.
This can be called as
- to(device=None, dtype=None, non_blocking=False)
- to(dtype, non_blocking=False)
- to(tensor, non_blocking=False)
- to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.
See below for examples.
Note
This method modifies the module in-place.
- Parameters:
device (torch.device) – the desired device of the parameters and buffers in this module
dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module
tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument)
- Returns:
self
- Return type:
Module
Examples:
>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
- to_empty(*, device: int | str | device | None, recurse: bool = True) T
Move the parameters and buffers to the specified device without copying storage.
- Parameters:
device (torch.device) – The desired device of the parameters and buffers in this module.
recurse (bool) – Whether parameters and buffers of submodules should be recursively moved to the specified device.
- Returns:
self
- Return type:
Module
- train(mode: bool = True) T
Set the module in training mode.
This only has an effect on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
- Parameters:
mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.
- Returns:
self
- Return type:
Module
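For example, toggling modes changes the behavior of stochastic layers such as Dropout. A minimal sketch:
>>> import torch.nn as nn
>>> net = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
>>> net = net.train()  # dropout is active during training
>>> net = net.eval()   # dropout becomes a no-op for deterministic inference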
- type(dst_type: dtype | str) T
Casts all parameters and buffers to dst_type.
Note
This method modifies the module in-place.
- Parameters:
dst_type (type or string) – the desired type
- Returns:
self
- Return type:
Module
- xpu(device: int | device | None = None) T
Move all model parameters and buffers to the XPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized.
Note
This method modifies the module in-place.
- Parameters:
device (int, optional) – if specified, all parameters will be copied to that device
- Returns:
self
- Return type:
Module
qsprpred.extra.gpu.models.dnn module
Here the DNN model originally from DrugEx can be found.
At the moment this contains a class for fully-connected DNNs.
- class qsprpred.extra.gpu.models.dnn.DNNModel(base_dir: str, alg: ~typing.Type = <class 'qsprpred.extra.gpu.models.neural_network.STFullyConnected'>, name: str | None = None, parameters: dict | None = None, random_state: int | None = None, autoload: bool = True, gpus: list[int] = (0, ), patience: int = 50, tol: float = 0)[source]
Bases:
QSPRModelPyTorchGPU
This class holds the methods for training and fitting a Deep Neural Net QSPR model.
Here the model instance is created and parameters can be defined.
- Variables:
name (str) – name of the model
alg (estimator) – estimator instance or class
parameters (dict) – dictionary of algorithm specific parameters
estimator (object) – the underlying estimator instance; if fit or optimization is performed, this model instance gets updated accordingly
featureCalculators (MoleculeDescriptorsCalculator) – feature calculator instance taken from the data set or deserialized from file if the model is loaded without data
featureStandardizer (SKLearnStandardizer) – feature standardizer instance taken from the data set or deserialized from file if the model is loaded without data
baseDir (str) – base directory of the model, the model files are stored in a subdirectory
{baseDir}/{outDir}/
patience (int) – number of epochs to wait before early stop if no progress on validation set score
tol (float) – minimum absolute improvement of loss necessary to count as progress on best validation score
nClass (int) – number of classes
nDim (int) – number of features
Initialize a DNNModel model.
- Parameters:
base_dir (str) – base directory of the model, the model files are stored in a subdirectory
{baseDir}/{outDir}/
alg (Type, optional) – model class or instance. Defaults to STFullyConnected.
name (str, optional) – name of the model. Defaults to None.
parameters (dict, optional) – dictionary of algorithm specific parameters. Defaults to None.
autoload (bool, optional) – whether to load the model from file or not. Defaults to True.
device (torch.device, optional) – The cuda device. Defaults to DEFAULT_TORCH_DEVICE.
gpus (list[int], optional) – gpu number(s) to use for model fitting. Defaults to DEFAULT_TORCH_GPUS.
patience (int, optional) – number of epochs to wait before early stop if no progress on validation set score. Defaults to 50.
tol (float, optional) – minimum absolute improvement of loss necessary to count as progress on best validation score. Defaults to 0.
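For orientation, a minimal construction sketch; the directory, name, and hyperparameter values are hypothetical, and a prepared QSPRDataset is assumed:
>>> # xdoctest: +SKIP("requires a prepared QSPRDataset")
>>> from qsprpred.extra.gpu.models.dnn import DNNModel
>>> model = DNNModel(
...     base_dir="qspr/models",  # hypothetical output directory
...     name="MyDNNModel",
...     gpus=[0],
...     patience=50,             # stop after 50 epochs without improvement
...     tol=0.01,                # minimum loss improvement to count as progress
... )
>>> model.initFromDataset(dataset)  # attach a prepared QSPRDataset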
- checkData(ds: QSPRDataset, exception: bool = True) bool
Check if the model has a data set.
- Parameters:
ds (QSPRDataset) – data set to check
exception (bool) – if true, an exception is raised if no data is set
- Returns:
True if data is set, False otherwise (if exception is False)
- Return type:
bool
- property classPath: str
Return the fully qualified path of the model.
- Returns:
class path of the model
- Return type:
str
- cleanFiles()
Clean up the model files.
Removes the model directory and all its contents.
- convertToNumpy(X: DataFrame | ndarray | QSPRDataset, y: DataFrame | ndarray | QSPRDataset | None = None) tuple[numpy.ndarray, numpy.ndarray] | ndarray
Convert the given data matrix and target matrix to np.ndarray format.
- Parameters:
X (pd.DataFrame, np.ndarray, QSPRDataset) – data matrix
y (pd.DataFrame, np.ndarray, QSPRDataset) – target matrix
- Returns:
data matrix and/or target matrix in np.ndarray format
- createPredictionDatasetFromMols(mols: list[str | rdkit.Chem.rdchem.Mol], smiles_standardizer: str | Callable[[str], str] = 'chembl', n_jobs: int = 1, fill_value: float = nan) tuple[qsprpred.data.tables.qspr.QSPRDataset, numpy.ndarray]
Create a QSPRDataset instance from a list of SMILES strings.
- Parameters:
- Returns:
a tuple containing the QSPRDataset instance and a boolean mask indicating which molecules failed to be processed
- Return type:
tuple[QSPRDataset, np.ndarray]
- fit(X: DataFrame | ndarray | QSPRDataset, y: DataFrame | ndarray | QSPRDataset, estimator: Any | None = None, mode: EarlyStoppingMode | None = None, split: DataSplit = None, monitor: FitMonitor = None, **kwargs) Any
Wrapper for fit method of models that support early stopping.
- Parameters:
X (pd.DataFrame, np.ndarray, QSPRDataset) – data matrix to fit
y (pd.DataFrame, np.ndarray, QSPRDataset) – target matrix to fit
estimator (Any) – estimator instance to use for fitting
mode (EarlyStoppingMode) – early stopping mode
split (DataSplit) – data split to use for early stopping, if None, a ShuffleSplit with 10% validation set size is used
monitor (FitMonitor) – monitor to use for fitting, if None, a BaseMonitor is used
kwargs (dict) – additional keyword arguments for the estimator’s fit method
- Returns:
fitted estimator instance
- Return type:
Any
- fitDataset(ds: QSPRDataset, monitor=None, mode=EarlyStoppingMode.OPTIMAL, save_model=True, save_data=False, **kwargs) str
Train model on the whole attached data set.
** IMPORTANT ** For models that support early stopping, CrossValAssessor should be run first, so that the average number of epochs from the cross-validation with early stopping can be used for fitting the model.
- Parameters:
ds (QSPRDataset) – data set to fit this model on
monitor (FitMonitor) – monitor for the fitting process, if None, the base monitor is used
mode (EarlyStoppingMode) – early stopping mode for models that support early stopping, by default fit the ‘optimal’ number of epochs previously stopped at in model assessment on train or test set, to avoid the use of extra data for a validation set.
save_model (bool) – save the model to file
save_data (bool) – save the supplied dataset to file
kwargs – additional arguments to pass to fit
- Returns:
path to the saved model, if save_model is True
- Return type:
str
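A sketch of the recommended flow, assuming a prepared dataset and that CrossValAssessor is importable from qsprpred.models with a scoring argument (check the qsprpred API reference for the exact signature):
>>> # xdoctest: +SKIP("requires a prepared QSPRDataset")
>>> from qsprpred.models import CrossValAssessor
>>> CrossValAssessor(scoring="r2")(model, dataset)  # records optimal epochs per fold
>>> path = model.fitDataset(dataset)  # refits at the averaged optimal epoch count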
- getParameters(new_parameters) dict | None
Get the model parameters combined with the given parameters.
If both the model and the given parameters contain the same key, the value from the given parameters is used.
- static handleInvalidsInPredictions(mols: list[str], predictions: ndarray | list[numpy.ndarray], failed_mask: ndarray) ndarray
Replace invalid predictions with None.
- Parameters:
mols (list[str]) – molecules for which the predictions were made
predictions (np.ndarray) – predictions made by the model
failed_mask (np.ndarray) – boolean mask of failed predictions
- Returns:
predictions with invalids replaced by None
- Return type:
np.ndarray
- initFromDataset(data: QSPRDataset | None)[source]
- initRandomState(random_state)[source]
Set random state if applicable. Defaults to random state of dataset if no random state is provided by the constructor.
- Parameters:
random_state (int) – Random state to use for shuffling and other random operations.
- property isMultiTask: bool
Return if model is a multitask model, taken from the data set or deserialized from file if the model is loaded without data.
- Returns:
True if model is a multitask model
- Return type:
bool
- loadEstimator(params: dict | None = None) object [source]
Load model from file or initialize new model.
- loadEstimatorFromFile(params: dict | None = None, fallback_load: bool = True) object [source]
Load estimator from file.
- classmethod loadParamsGrid(fname: str, optim_type: str, model_types: str) ndarray
Load parameter grids for bayes or grid search parameter optimization from json file.
- Parameters:
- Returns:
array with three columns containing the model type, the optimization type (grid or bayes), and the parameter grid
- Return type:
np.ndarray
- property optimalEpochs: int | None
Return the optimal number of epochs for early stopping.
- Returns:
optimal number of epochs
- Return type:
int | None
- property outDir: str
Return output directory of the model; the model files are stored in this directory ({baseDir}/{name}).
- Returns:
output directory of the model
- Return type:
- property outPrefix: str
Return output prefix of the model files.
The model files are stored with this prefix (i.e. {outPrefix}_meta.json).
- Returns:
output prefix of the model files
- Return type:
- predict(X: DataFrame | ndarray | QSPRDataset, estimator: Any = None) ndarray [source]
See QSPRModel.predict.
- predictDataset(dataset: QSPRDataset, use_probas: bool = False) ndarray | list[numpy.ndarray]
Make predictions for the given dataset.
- Parameters:
dataset – a QSPRDataset instance
use_probas – use probabilities if this is a classification model
- Returns:
an array of predictions or a list of arrays of predictions (for classification models with use_probas=True)
- Return type:
np.ndarray | list[np.ndarray]
- predictMols(mols: List[str | Mol], use_probas: bool = False, smiles_standardizer: str | callable = 'chembl', n_jobs: int = 1, fill_value: float = nan, use_applicability_domain: bool = False) ndarray | list[numpy.ndarray]
Make predictions for the given molecules.
- Parameters:
mols (List[str | Mol]) – list of SMILES strings
use_probas (bool) – use probabilities for classification models
smiles_standardizer – either chembl, old, or a partial function that reads and standardizes smiles.
n_jobs – Number of jobs to use for parallel processing.
fill_value – Value to use for missing values in the feature matrix.
use_applicability_domain – Use applicability domain to return if a molecule is within the applicability domain of the model.
- Returns:
- an array of predictions or a list of arrays of predictions (for classification models with use_probas=True)
- np.ndarray[bool]: boolean mask indicating which molecules fall within the applicability domain of the model
- Return type:
np.ndarray | list[np.ndarray]
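For example, predictions for a few SMILES strings; a minimal sketch assuming a trained model (the molecules are illustrative):
>>> # xdoctest: +SKIP("requires a trained model")
>>> smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]
>>> preds = model.predictMols(smiles, use_probas=False)  # one prediction per molecule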
- predictProba(X: DataFrame | ndarray | QSPRDataset, estimator: Any = None) ndarray [source]
- save(save_estimator=False)
Save model to file.
- Parameters:
save_estimator (bool) – Explicitly save the estimator to file, if True. Note that some models may save the estimator by default even if this argument is False.
- Returns:
absolute path to the metafile of the saved model
str: absolute path to the saved estimator, if save_estimator is True
- Return type:
str
- saveEstimator() str [source]
Save the DNNModel model.
- Returns:
path to the saved model
- Return type:
str
- setParams(params: dict | None, reset_estimator: bool = True)
Set model parameters. The estimator is also updated with the new parameters if reset_estimator is True.
- property task: ModelTasks
Return the task of the model, taken from the data set or deserialized from file if the model is loaded without data.
- Returns:
task of the model
- Return type:
ModelTasks
qsprpred.extra.gpu.models.neural_network module
This module holds the base class for DNN models as well as fully connected NN subclass.
- class qsprpred.extra.gpu.models.neural_network.Base(device: str, gpus: list[int], n_epochs: int = 1000, lr: float = 0.0001, batch_size: int = 256, patience: int = 50, tol: float = 0)[source]
Bases:
Module
Base structure for all classification/regression DNN models.
Mainly, it provides the general methods for training and evaluating the model, and for predicting on given data.
- Variables:
n_epochs (int) – (maximum) number of epochs to train the model
lr (float) – learning rate
batch_size (int) – batch size
patience (int) – number of epochs to wait before early stop if no progress on validation set score; if patience = -1, always train to n_epochs
tol (float) – minimum absolute improvement of loss necessary to count as progress on best validation score
device (torch.device) – device to run the model on
gpus (list) – list of gpus to run the model on
Initialize the DNN model.
- Parameters:
device (str) – device to run the model on
gpus (list) – list of gpus to run the model on
n_epochs (int) – (maximum) number of epochs to train the model
lr (float) – learning rate
batch_size (int) – batch size
patience (int) – number of epochs to wait before early stop if no progress on validation set score; if patience = -1, always train to n_epochs
tol (float) – minimum absolute improvement of loss necessary to count as progress on best validation score
- T_destination = ~T_destination
- add_module(name: str, module: Module | None) None
Add a child module to the current module.
The module can be accessed as an attribute using the given name.
- Parameters:
name (str) – name of the child module. The child module can be accessed from this module using the given name
module (Module) – child module to be added to the module.
- apply(fn: Callable[[Module], None]) T
Apply fn recursively to every submodule (as returned by .children()) as well as self.
Typical use includes initializing the parameters of a model (see also torch.nn.init).
- Parameters:
fn (Module -> None) – function to be applied to each submodule
- Returns:
self
- Return type:
Module
Example:
>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
- bfloat16() T
Casts all floating point parameters and buffers to bfloat16 datatype.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- buffers(recurse: bool = True) Iterator[Tensor]
Return an iterator over module buffers.
- Parameters:
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.
- Yields:
torch.Tensor – module buffer
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
- children() Iterator[Module]
Return an iterator over immediate children modules.
- Yields:
Module – a child module
- compile(*args, **kwargs)
Compile this Module's forward using torch.compile().
This Module's __call__ method is compiled and all arguments are passed as-is to torch.compile().
See torch.compile() for details on the arguments for this function.
- cpu() T
Move all model parameters and buffers to the CPU.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- cuda(device: int | device | None = None) T
Move all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized.
Note
This method modifies the module in-place.
- Parameters:
device (int, optional) – if specified, all parameters will be copied to that device
- Returns:
self
- Return type:
Module
- double() T
Casts all floating point parameters and buffers to double datatype.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- eval() T
Set the module in evaluation mode.
This only has an effect on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
This is equivalent to self.train(False).
See the PyTorch documentation on locally disabling gradient computation for a comparison between .eval() and several similar mechanisms that may be confused with it.
- Returns:
self
- Return type:
Module
- evaluate(loader) float [source]
Evaluate the performance of the DNN model.
- Parameters:
loader (torch.utils.data.DataLoader) – data loader for the test set, including an m X n target FloatTensor and an l X n label FloatTensor (m is the No. of samples, n is the No. of features, l is the No. of classes or tasks)
- Returns:
the average loss value based on the calculation of the loss function with the given test set
- Return type:
loss (float)
- extra_repr() str
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- fit(X_train, y_train, X_valid=None, y_valid=None, monitor: FitMonitor | None = None) int [source]
Train the DNN model.
Training is similar to the scikit-learn or Keras style; the optimal values of the parameters are saved.
- Parameters:
X_train (np.ndarray or pd.DataFrame) – training data (m X n), m is the No. of samples, n is the No. of features
y_train (np.ndarray or pd.DataFrame) – training target (m X l), m is the No. of samples, l is the No. of classes or tasks
X_valid (np.ndarray or pd.DataFrame) – validation data (m X n), m is the No. of samples, n is the No. of features
y_valid (np.ndarray or pd.DataFrame) – validation target (m X l), m is the No. of samples, l is the No. of classes or tasks
monitor (FitMonitor) – monitor to use for training, if None, use base monitor
- Returns:
the epoch number when the optimal model is saved
- Return type:
int
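A minimal training sketch with random data, using the STFullyConnected subclass documented below; the shapes, hyperparameters, and device string are illustrative assumptions:
>>> # xdoctest: +SKIP("requires a GPU")
>>> import numpy as np
>>> from qsprpred.extra.gpu.models.neural_network import STFullyConnected
>>> X_train, y_train = np.random.rand(100, 20), np.random.rand(100, 1)
>>> X_valid, y_valid = np.random.rand(20, 20), np.random.rand(20, 1)
>>> net = STFullyConnected(n_dim=20, n_class=1, device="cuda", gpus=[0], is_reg=True)
>>> best_epoch = net.fit(X_train, y_train, X_valid, y_valid)  # epoch of the best model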
- float() T
Casts all floating point parameters and buffers to float datatype.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- forward(*input: Any) None
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- getDataLoader(X, y=None)[source]
Convert data to tensors and get generator over dataset with dataloader.
- Parameters:
X (numpy 2d array) – input dataset
y (numpy 1d column vector) – output data
- get_buffer(target: str) Tensor
Return the buffer given by target if it exists, otherwise throw an error.
See the docstring for get_submodule for a more detailed explanation of this method's functionality as well as how to correctly specify target.
- Parameters:
target – The fully-qualified string name of the buffer to look for. (See get_submodule for how to specify a fully-qualified string.)
- Returns:
The buffer referenced by
target
- Return type:
torch.Tensor
- Raises:
AttributeError – If the target string references an invalid path or resolves to something that is not a buffer
- get_extra_state() Any
Return any extra state to include in the module’s state_dict.
Implement this and a corresponding set_extra_state() for your module if you need to store extra state. This function is called when building the module's state_dict().
Note that extra state should be picklable to ensure working serialization of the state_dict. We only provide backwards compatibility guarantees for serializing Tensors; other objects may break backwards compatibility if their serialized pickled form changes.
- Returns:
Any extra state to store in the module’s state_dict
- Return type:
- get_parameter(target: str) Parameter
Return the parameter given by target if it exists, otherwise throw an error.
See the docstring for get_submodule for a more detailed explanation of this method's functionality as well as how to correctly specify target.
- Parameters:
target – The fully-qualified string name of the Parameter to look for. (See get_submodule for how to specify a fully-qualified string.)
- Returns:
The Parameter referenced by
target
- Return type:
torch.nn.Parameter
- Raises:
AttributeError – If the target string references an invalid path or resolves to something that is not an
nn.Parameter
- get_params(deep=True) dict [source]
Get parameters for this estimator.
Function copied from sklearn.base_estimator!
- get_submodule(target: str) Module
Return the submodule given by target if it exists, otherwise throw an error.
For example, let's say you have an nn.Module A that looks like this:
A(
    (net_b): Module(
        (net_c): Module(
            (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
        )
        (linear): Linear(in_features=100, out_features=200, bias=True)
    )
)
(The diagram shows an nn.Module A. A has a nested submodule net_b, which itself has two submodules net_c and linear. net_c then has a submodule conv.)
To check whether or not we have the linear submodule, we would call get_submodule("net_b.linear"). To check whether we have the conv submodule, we would call get_submodule("net_b.net_c.conv").
The runtime of get_submodule is bounded by the degree of module nesting in target. A query against named_modules achieves the same result, but it is O(N) in the number of transitive modules. So, for a simple check to see if some submodule exists, get_submodule should always be used.
- Parameters:
target – The fully-qualified string name of the submodule to look for. (See above example for how to specify a fully-qualified string.)
- Returns:
The submodule referenced by
target
- Return type:
torch.nn.Module
- Raises:
AttributeError – If the target string references an invalid path or resolves to something that is not an
nn.Module
- half() T
Casts all floating point parameters and buffers to half datatype.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- ipu(device: int | device | None = None) T
Move all model parameters and buffers to the IPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on IPU while being optimized.
Note
This method modifies the module in-place.
- Parameters:
device (int, optional) – if specified, all parameters will be copied to that device
- Returns:
self
- Return type:
Module
- load_state_dict(state_dict: Mapping[str, Any], strict: bool = True, assign: bool = False)
Copy parameters and buffers from state_dict into this module and its descendants.
If strict is True, then the keys of state_dict must exactly match the keys returned by this module's state_dict() function.
Warning
If assign is True the optimizer must be created after the call to load_state_dict unless get_swap_module_params_on_conversion() is True.
- Parameters:
state_dict (dict) – a dict containing parameters and persistent buffers.
strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function. Default: True
assign (bool, optional) – When False, the properties of the tensors in the current module are preserved, while when True, the properties of the Tensors in the state dict are preserved. The only exception is the requires_grad field. Default: False
- Returns:
- missing_keys is a list of str containing any keys that are expected by this module but missing from the provided state_dict.
- unexpected_keys is a list of str containing the keys that are not expected by this module but present in the provided state_dict.
- Return type:
NamedTuple with missing_keys and unexpected_keys fields
Note
If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.
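A round-trip sketch: saving a module's state and loading it into a fresh instance of the same architecture (the file name is hypothetical):
>>> import torch
>>> import torch.nn as nn
>>> net = nn.Linear(4, 2)
>>> torch.save(net.state_dict(), "net.pt")
>>> restored = nn.Linear(4, 2)
>>> restored.load_state_dict(torch.load("net.pt"))  # strict=True by default
<All keys matched successfully>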
- modules() Iterator[Module]
Return an iterator over all modules in the network.
- Yields:
Module – a module in the network
Note
Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)
0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
- named_buffers(prefix: str = '', recurse: bool = True, remove_duplicate: bool = True) Iterator[Tuple[str, Tensor]]
Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- Parameters:
prefix (str) – prefix to prepend to all buffer names.
recurse (bool, optional) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Defaults to True.
remove_duplicate (bool, optional) – whether to remove the duplicated buffers in the result. Defaults to True.
- Yields:
(str, torch.Tensor) – Tuple containing the name and buffer
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, buf in self.named_buffers():
>>>     if name in ['running_var']:
>>>         print(buf.size())
- named_children() Iterator[Tuple[str, Module]]
Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- Yields:
(str, Module) – Tuple containing a name and child module
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
- named_modules(memo: Set[Module] | None = None, prefix: str = '', remove_duplicate: bool = True)
Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- Parameters:
memo – a memo to store the set of modules already added to the result
prefix – a prefix that will be added to the name of the module
remove_duplicate – whether to remove the duplicated module instances in the result or not
- Yields:
(str, Module) – Tuple of name and module
Note
Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
...     print(idx, '->', m)
0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
- named_parameters(prefix: str = '', recurse: bool = True, remove_duplicate: bool = True) Iterator[Tuple[str, Parameter]]
Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- Parameters:
prefix (str) – prefix to prepend to all parameter names.
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
remove_duplicate (bool, optional) – whether to remove the duplicated parameters in the result. Defaults to True.
- Yields:
(str, Parameter) – Tuple containing the name and parameter
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())
- parameters(recurse: bool = True) Iterator[Parameter]
Return an iterator over module parameters.
This is typically passed to an optimizer.
- Parameters:
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
- Yields:
Parameter – module parameter
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
- predict(X_test) ndarray [source]
Predict the probability of each sample in the given dataset.
- Parameters:
X_test (ndarray) – m X n target array (m is the No. of samples, n is the No. of features)
- Returns:
probability of each sample in the given dataset; an m X l FloatTensor (m is the No. of samples, l is the No. of classes or tasks)
- Return type:
score (ndarray)
- register_backward_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor]) RemovableHandle
Register a backward hook on the module.
This function is deprecated in favor of register_full_backward_hook() and the behavior of this function will change in future versions.
- Returns:
a handle that can be used to remove the added hook by calling
handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
- register_buffer(name: str, tensor: Tensor | None, persistent: bool = True) None
Add a buffer to the module.
This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm's running_mean is not a parameter, but is part of the module's state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module's state_dict.
Buffers can be accessed as attributes using given names.
- Parameters:
name (str) – name of the buffer. The buffer can be accessed from this module using the given name
tensor (Tensor or None) – buffer to be registered. If None, then operations that run on buffers, such as cuda, are ignored. If None, the buffer is not included in the module's state_dict.
persistent (bool) – whether the buffer is part of this module's state_dict.
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> self.register_buffer('running_mean', torch.zeros(num_features))
- register_forward_hook(hook: Callable[[T, Tuple[Any, ...], Any], Any | None] | Callable[[T, Tuple[Any, ...], Dict[str, Any], Any], Any | None], *, prepend: bool = False, with_kwargs: bool = False, always_call: bool = False) RemovableHandle
Register a forward hook on the module.
The hook will be called every time after forward() has computed an output.
If with_kwargs is False or not specified, the input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the forward. The hook can modify the output. It can modify the input inplace but it will not have effect on forward since this is called after forward() is called. The hook should have the following signature: hook(module, args, output) -> None or modified output
If with_kwargs is True, the forward hook will be passed the kwargs given to the forward function and be expected to return the output possibly modified. The hook should have the following signature: hook(module, args, kwargs, output) -> None or modified output
- Parameters:
hook (Callable) – The user defined hook to be registered.
prepend (bool) – If True, the provided hook will be fired before all existing forward hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing forward hooks on this torch.nn.modules.Module. Note that global forward hooks registered with register_module_forward_hook() will fire before all hooks registered by this method. Default: False
with_kwargs (bool) – If True, the hook will be passed the kwargs given to the forward function. Default: False
always_call (bool) – If True the hook will be run regardless of whether an exception is raised while calling the Module. Default: False
- Returns:
a handle that can be used to remove the added hook by calling
handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
- register_forward_pre_hook(hook: Callable[[T, Tuple[Any, ...]], Any | None] | Callable[[T, Tuple[Any, ...], Dict[str, Any]], Tuple[Any, Dict[str, Any]] | None], *, prepend: bool = False, with_kwargs: bool = False) RemovableHandle
Register a forward pre-hook on the module.
The hook will be called every time before forward() is invoked.
If with_kwargs is false or not specified, the input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the forward. The hook can modify the input. The user can either return a tuple or a single modified value in the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple). The hook should have the following signature: hook(module, args) -> None or modified input
If with_kwargs is true, the forward pre-hook will be passed the kwargs given to the forward function, and if the hook modifies the input, both the args and kwargs should be returned. The hook should have the following signature: hook(module, args, kwargs) -> None or a tuple of modified input and kwargs
- Parameters:
hook (Callable) – The user defined hook to be registered.
prepend (bool) – If true, the provided hook will be fired before all existing forward_pre hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing forward_pre hooks on this torch.nn.modules.Module. Note that global forward_pre hooks registered with register_module_forward_pre_hook() will fire before all hooks registered by this method. Default: False
with_kwargs (bool) – If true, the hook will be passed the kwargs given to the forward function. Default: False
- Returns:
a handle that can be used to remove the added hook by calling
handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
- register_full_backward_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor], prepend: bool = False) RemovableHandle
Register a backward hook on the module.
The hook will be called every time the gradients with respect to a module are computed, i.e. the hook will execute if and only if the gradients with respect to module outputs are computed. The hook should have the following signature:
hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module's forward function.
Warning
Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error.
- Parameters:
hook (Callable) – The user-defined hook to be registered.
prepend (bool) – If true, the provided hook will be fired before all existing backward hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing backward hooks on this torch.nn.modules.Module. Note that global backward hooks registered with register_module_full_backward_hook() will fire before all hooks registered by this method.
- Returns:
a handle that can be used to remove the added hook by calling
handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
- register_full_backward_pre_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor], prepend: bool = False) RemovableHandle
Register a backward pre-hook on the module.
The hook will be called every time the gradients for the module are computed. The hook should have the following signature:
hook(module, grad_output) -> tuple[Tensor] or None
The grad_output is a tuple. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the output that will be used in place of grad_output in subsequent computations. Entries in grad_output will be None for all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module's forward function.
Warning
Modifying inputs inplace is not allowed when using backward hooks and will raise an error.
- Parameters:
hook (Callable) – The user-defined hook to be registered.
prepend (bool) – If true, the provided hook will be fired before all existing backward_pre hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing backward_pre hooks on this torch.nn.modules.Module. Note that global backward_pre hooks registered with register_module_full_backward_pre_hook() will fire before all hooks registered by this method.
- Returns:
a handle that can be used to remove the added hook by calling
handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
- register_load_state_dict_post_hook(hook)
Register a post hook to be run after module's load_state_dict is called.
- It should have the following signature:
hook(module, incompatible_keys) -> None
The module argument is the current module that this hook is registered on, and the incompatible_keys argument is a NamedTuple consisting of attributes missing_keys and unexpected_keys. missing_keys is a list of str containing the missing keys and unexpected_keys is a list of str containing the unexpected keys.
The given incompatible_keys can be modified inplace if needed.
Note that the checks performed when calling load_state_dict() with strict=True are affected by modifications the hook makes to missing_keys or unexpected_keys, as expected. Additions to either set of keys will result in an error being thrown when strict=True, and clearing out both missing and unexpected keys will avoid an error.
- Returns:
a handle that can be used to remove the added hook by calling
handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
- register_module(name: str, module: Module | None) None
Alias for add_module().
- register_parameter(name: str, param: Parameter | None) None
Add a parameter to the module.
The parameter can be accessed as an attribute using given name.
- Parameters:
name (str) – name of the parameter. The parameter can be accessed from this module using the given name
param (Parameter or None) – parameter to be added to the module. If None, then operations that run on parameters, such as cuda, are ignored. If None, the parameter is not included in the module's state_dict.
- register_state_dict_pre_hook(hook)
Register a pre-hook for the state_dict() method.
These hooks will be called with arguments: self, prefix, and keep_vars before calling state_dict on self. The registered hooks can be used to perform pre-processing before the state_dict call is made.
- requires_grad_(requires_grad: bool = True) T
Change if autograd should record operations on parameters in this module.
This method sets the parameters' requires_grad attributes in-place.
This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training).
See the PyTorch documentation on locally disabling gradient computation for a comparison between .requires_grad_() and several similar mechanisms that may be confused with it.
- Parameters:
requires_grad (bool) – whether autograd should record operations on parameters in this module. Default: True.
- Returns:
self
- Return type:
Module
- set_extra_state(state: Any) None
Set extra state contained in the loaded state_dict.
This function is called from load_state_dict() to handle any extra state found within the state_dict. Implement this function and a corresponding get_extra_state() for your module if you need to store extra state within its state_dict.
- Parameters:
state (dict) – Extra state from the state_dict
- set_params(**params) Base [source]
Set the parameters of this estimator.
Function copied from sklearn.base_estimator! The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Parameters:
**params – dict Estimator parameters.
- Returns:
estimator instance
- Return type:
self
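For example, hyperparameters can be updated in the sklearn style. A minimal sketch (the values and device string are illustrative assumptions):
>>> # xdoctest: +SKIP("requires a GPU")
>>> net = STFullyConnected(n_dim=20, n_class=1, device="cuda", gpus=[0])
>>> net = net.set_params(lr=1e-3, batch_size=128)  # returns the estimator itself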
- share_memory() T
See torch.Tensor.share_memory_().
- state_dict(*args, destination=None, prefix='', keep_vars=False)
Return a dictionary containing references to the whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.
Note
The returned object is a shallow copy. It contains references to the module's parameters and buffers.
Warning
Currently state_dict() also accepts positional arguments for destination, prefix and keep_vars in order. However, this is being deprecated and keyword arguments will be enforced in future releases.
Warning
Please avoid the use of argument destination as it is not designed for end-users.
- Parameters:
destination (dict, optional) – If provided, the state of module will be updated into the dict and the same object is returned. Otherwise, an OrderedDict will be created and returned. Default: None.
prefix (str, optional) – a prefix added to parameter and buffer names to compose the keys in state_dict. Default: ''.
keep_vars (bool, optional) – by default the Tensors returned in the state dict are detached from autograd. If it's set to True, detaching will not be performed. Default: False.
- Returns:
a dictionary containing a whole state of the module
- Return type:
dict
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> module.state_dict().keys()
['bias', 'weight']
- to(*args, **kwargs)
Move and/or cast the parameters and buffers.
This can be called as
- to(device=None, dtype=None, non_blocking=False)
- to(dtype, non_blocking=False)
- to(tensor, non_blocking=False)
- to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.
See below for examples.
Note
This method modifies the module in-place.
- Parameters:
device (torch.device) – the desired device of the parameters and buffers in this module
dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module
tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument)
- Returns:
self
- Return type:
Module
Examples:
>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
- to_empty(*, device: int | str | device | None, recurse: bool = True) T
Move the parameters and buffers to the specified device without copying storage.
- Parameters:
device (torch.device) – The desired device of the parameters and buffers in this module.
recurse (bool) – Whether parameters and buffers of submodules should be recursively moved to the specified device.
- Returns:
self
- Return type:
Module
- train(mode: bool = True) T
Set the module in training mode.
This only has an effect on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
- Parameters:
mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.
- Returns:
self
- Return type:
Module
- type(dst_type: dtype | str) T
Casts all parameters and buffers to dst_type.
Note
This method modifies the module in-place.
- Parameters:
dst_type (type or string) – the desired type
- Returns:
self
- Return type:
Module
- xpu(device: int | device | None = None) T
Move all model parameters and buffers to the XPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized.
Note
This method modifies the module in-place.
- Parameters:
device (int, optional) – if specified, all parameters will be copied to that device
- Returns:
self
- Return type:
Module
- class qsprpred.extra.gpu.models.neural_network.STFullyConnected(n_dim, n_class, device, gpus, n_epochs=100, lr=None, batch_size=256, patience=50, tol=0, is_reg=True, neurons_h1=256, neurons_hx=128, extra_layer=False, dropout_frac=0.25)[source]
Bases:
Base
Single task DNN classification/regression model.
It contains four fully connected layers between which are dropout layers for robustness.
- Variables:
n_dim (int) – the No. of columns (features) for input tensor
n_class (int) – the No. of columns (classes) for output tensor.
device (torch.device) – device to run the model on
gpus (list) – list of gpu ids to run the model on
n_epochs (int) – max number of epochs
lr (float) – neural net learning rate
batch_size (int) – batch size for training
patience (int) – early stopping patience
tol (float) – early stopping tolerance
is_reg (bool) – whether the model is for regression or classification
neurons_h1 (int) – No. of neurons in the first hidden layer
neurons_hx (int) – No. of neurons in the second hidden layer
extra_layer (bool) – whether to add an extra hidden layer
dropout_frac (float) – dropout fraction
criterion (torch.nn.Module) – the loss function
dropout (torch.nn.Module) – the dropout layer
fc0 (torch.nn.Module) – the first fully connected layer
fc1 (torch.nn.Module) – the second fully connected layer
fc2 (torch.nn.Module) – the third fully connected layer
fc3 (torch.nn.Module) – the fourth fully connected layer
activation (torch.nn.Module) – the activation function
Initialize the STFullyConnected model.
- Parameters:
n_dim (int) – the No. of columns (features) for input tensor
n_class (int) – the No. of columns (classes) for output tensor.
device (torch.device) – device to run the model on
gpus (list) – list of gpu ids to run the model on
n_epochs (int) – max number of epochs
lr (float) – neural net learning rate
batch_size (int) – batch size
patience (int) – number of epochs to wait before early stop if no progress on validation set score; if patience = -1, always train to n_epochs
tol (float) – minimum absolute improvement of loss necessary to count as progress on best validation score
is_reg (bool, optional) – Regression model (True) or Classification model (False)
neurons_h1 (int) – number of neurons in first hidden layer
neurons_hx (int) – number of neurons in other hidden layers
extra_layer (bool) – add third hidden layer
dropout_frac (float) – dropout fraction
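A minimal usage sketch (illustrative, not part of the original docstring; the toy data, device choice, and epoch count are placeholders, and a CUDA device may be required in practice):
import numpy as np
import torch
from qsprpred.extra.gpu.models.neural_network import STFullyConnected

# toy regression set: 100 samples, 10 features, 1 continuous target
X = np.random.rand(100, 10).astype(np.float32)
y = np.random.rand(100, 1).astype(np.float32)

model = STFullyConnected(
    n_dim=10,      # No. of input features
    n_class=1,     # single regression output
    device=torch.device("cuda:0" if torch.cuda.is_available() else "cpu"),
    gpus=[0],
    n_epochs=10,
    is_reg=True,
)
best_epoch = model.fit(X[:80], y[:80], X[80:], y[80:])  # epoch at which the best model was saved
preds = model.predict(X[80:])                           # m X l array of predictions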
- T_destination = ~T_destination
- add_module(name: str, module: Module | None) None
Add a child module to the current module.
The module can be accessed as an attribute using the given name.
- Parameters:
name (str) – name of the child module. The child module can be accessed from this module using the given name
module (Module) – child module to be added to the module.
- apply(fn: Callable[[Module], None]) T
Apply fn recursively to every submodule (as returned by .children()) as well as self.
Typical use includes initializing the parameters of a model (see also nn-init-doc).
- Parameters:
fn (Module -> None) – function to be applied to each submodule
- Returns:
self
- Return type:
Module
Example:
>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
- bfloat16() T
Casts all floating point parameters and buffers to bfloat16 datatype.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- buffers(recurse: bool = True) Iterator[Tensor]
Return an iterator over module buffers.
- Parameters:
recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.
- Yields:
torch.Tensor – module buffer
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
- children() Iterator[Module]
Return an iterator over immediate children modules.
- Yields:
Module – a child module
- compile(*args, **kwargs)
Compile this Module’s forward using torch.compile().
This Module’s __call__ method is compiled and all arguments are passed as-is to torch.compile().
See torch.compile() for details on the arguments for this function.
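A short hedged sketch (assuming a PyTorch build that provides nn.Module.compile(), i.e. PyTorch 2.x):
import torch
from torch import nn

model = nn.Linear(4, 4)
model.compile()                  # wraps this module's forward with torch.compile()
out = model(torch.randn(2, 4))   # the first call triggers compilation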
- cpu() T
Move all model parameters and buffers to the CPU.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- cuda(device: int | device | None = None) T
Move all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized.
Note
This method modifies the module in-place.
- Parameters:
device (int, optional) – if specified, all parameters will be copied to that device
- Returns:
self
- Return type:
Module
- double() T
Casts all floating point parameters and buffers to double datatype.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- eval() T
Set the module in evaluation mode.
This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
This is equivalent to self.train(False).
See locally-disable-grad-doc for a comparison between .eval() and several similar mechanisms that may be confused with it.
- Returns:
self
- Return type:
Module
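For illustration, a minimal sketch of how train()/eval() changes a module's behaviour, using Dropout:
import torch
from torch import nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 4)
drop.train()    # training mode: roughly half the entries are zeroed, survivors scaled by 2
print(drop(x))
drop.eval()     # evaluation mode: dropout acts as the identity
print(drop(x))  # tensor([[1., 1., 1., 1.]])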
- evaluate(loader) float
Evaluate the performance of the DNN model.
- Parameters:
loader (torch.utils.data.DataLoader) – data loader for the test set, including an m X n target FloatTensor and an l X n label FloatTensor (m is the No. of samples, n is the No. of features, l is the No. of classes or tasks)
- Returns:
the average loss value computed by the loss function on the given test set.
- Return type:
loss (float)
- extra_repr() str
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- fit(X_train, y_train, X_valid=None, y_valid=None, monitor: FitMonitor | None = None) int
Train the DNN model.
Training is similar to the scikit-learn or Keras style; the optimal parameter values are saved.
- Parameters:
X_train (np.ndarray or pd.DataFrame) – training data (m X n), m is the No. of samples, n is the No. of features
y_train (np.ndarray or pd.DataFrame) – training target (m X l), m is the No. of samples, l is the No. of classes or tasks
X_valid (np.ndarray or pd.DataFrame) – validation data (m X n), m is the No. of samples, n is the No. of features
y_valid (np.ndarray or pd.DataFrame) – validation target (m X l), m is the No. of samples, l is the No. of classes or tasks
monitor (FitMonitor) – monitor to use for training, if None, use base monitor
- Returns:
the epoch number when the optimal model is saved
- Return type:
int
- float() T
Casts all floating point parameters and buffers to float datatype.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- forward(X, is_train=False) Tensor [source]
Invoke the class directly as a function.
- Parameters:
X (FloatTensor) – m X n FloatTensor, m is the No. of samples, n is the No. of features.
is_train (bool, optional) – whether it is invoked during the training process (True) or just for prediction (False)
- Returns:
m X n FloatTensor, m is the No. of samples, n is the No. of classes
- Return type:
y (FloatTensor)
- getDataLoader(X, y=None)
Convert the data to tensors and return a generator over the dataset via a DataLoader.
- Parameters:
X (numpy 2d array) – input dataset
y (numpy 1d column vector) – output data
- get_buffer(target: str) Tensor
Return the buffer given by target if it exists, otherwise throw an error.
See the docstring for get_submodule for a more detailed explanation of this method’s functionality as well as how to correctly specify target.
- Parameters:
target – The fully-qualified string name of the buffer to look for. (See get_submodule for how to specify a fully-qualified string.)
- Returns:
The buffer referenced by target
- Return type:
torch.Tensor
- Raises:
AttributeError – If the target string references an invalid path or resolves to something that is not a buffer
- get_extra_state() Any
Return any extra state to include in the module’s state_dict.
Implement this and a corresponding set_extra_state() for your module if you need to store extra state. This function is called when building the module’s state_dict().
Note that extra state should be picklable to ensure working serialization of the state_dict. We only provide backwards compatibility guarantees for serializing Tensors; other objects may break backwards compatibility if their serialized pickled form changes.
- Returns:
Any extra state to store in the module’s state_dict
- Return type:
object
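A minimal sketch of the get_extra_state()/set_extra_state() pair (the Counter module is a made-up example):
from torch import nn

class Counter(nn.Module):
    # persists a plain Python attribute through state_dict round-trips
    def __init__(self):
        super().__init__()
        self.calls = 0

    def get_extra_state(self):
        return {"calls": self.calls}  # must be picklable

    def set_extra_state(self, state):
        self.calls = state["calls"]

m = Counter()
m.calls = 3
restored = Counter()
restored.load_state_dict(m.state_dict())  # round-trips the extra state
print(restored.calls)                     # 3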
- get_parameter(target: str) Parameter
Return the parameter given by target if it exists, otherwise throw an error.
See the docstring for get_submodule for a more detailed explanation of this method’s functionality as well as how to correctly specify target.
- Parameters:
target – The fully-qualified string name of the Parameter to look for. (See get_submodule for how to specify a fully-qualified string.)
- Returns:
The Parameter referenced by target
- Return type:
torch.nn.Parameter
- Raises:
AttributeError – If the target string references an invalid path or resolves to something that is not an nn.Parameter
- get_params(deep=True) dict
Get parameters for this estimator.
Function copied from sklearn.base_estimator!
- get_submodule(target: str) Module
Return the submodule given by target if it exists, otherwise throw an error.
For example, let’s say you have an nn.Module A that looks like this:
A(
    (net_b): Module(
        (net_c): Module(
            (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
        )
        (linear): Linear(in_features=100, out_features=200, bias=True)
    )
)
(The diagram shows an nn.Module A. A has a nested submodule net_b, which itself has two submodules net_c and linear. net_c then has a submodule conv.)
To check whether or not we have the linear submodule, we would call get_submodule("net_b.linear"). To check whether we have the conv submodule, we would call get_submodule("net_b.net_c.conv").
The runtime of get_submodule is bounded by the degree of module nesting in target. A query against named_modules achieves the same result, but it is O(N) in the number of transitive modules. So, for a simple check to see if some submodule exists, get_submodule should always be used.
- Parameters:
target – The fully-qualified string name of the submodule to look for. (See above example for how to specify a fully-qualified string.)
- Returns:
The submodule referenced by target
- Return type:
torch.nn.Module
- Raises:
AttributeError – If the target string references an invalid path or resolves to something that is not an nn.Module
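A runnable sketch of the lookup (an nn.Sequential registers its children under the names "0", "1", ...):
from torch import nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
first = model.get_submodule("0")
print(first)  # Linear(in_features=4, out_features=8, bias=True)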
- half() T
Casts all floating point parameters and buffers to half datatype.
Note
This method modifies the module in-place.
- Returns:
self
- Return type:
Module
- ipu(device: int | device | None = None) T
Move all model parameters and buffers to the IPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on IPU while being optimized.
Note
This method modifies the module in-place.
- Parameters:
device (int, optional) – if specified, all parameters will be copied to that device
- Returns:
self
- Return type:
Module
- load_state_dict(state_dict: Mapping[str, Any], strict: bool = True, assign: bool = False)
Copy parameters and buffers from state_dict into this module and its descendants.
If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function.
Warning
If assign is True the optimizer must be created after the call to load_state_dict unless get_swap_module_params_on_conversion() is True.
- Parameters:
state_dict (dict) – a dict containing parameters and persistent buffers.
strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True
assign (bool, optional) – When False, the properties of the tensors in the current module are preserved, while when True, the properties of the Tensors in the state dict are preserved. The only exception is the requires_grad field of Parameters, for which the value from the module is preserved. Default: False
- Returns:
missing_keys is a list of str containing any keys that are expected by this module but missing from the provided state_dict.
unexpected_keys is a list of str containing the keys that are not expected by this module but present in the provided state_dict.
- Return type:
NamedTuple with missing_keys and unexpected_keys fields
Note
If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.
- modules() Iterator[Module]
Return an iterator over all modules in the network.
- Yields:
Module – a module in the network
Note
Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)
0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
- named_buffers(prefix: str = '', recurse: bool = True, remove_duplicate: bool = True) Iterator[Tuple[str, Tensor]]
Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- Parameters:
prefix (str) – prefix to prepend to all buffer names.
recurse (bool, optional) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module. Defaults to True.
remove_duplicate (bool, optional) – whether to remove the duplicated buffers in the result. Defaults to True.
- Yields:
(str, torch.Tensor) – Tuple containing the name and buffer
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, buf in self.named_buffers():
>>>     if name in ['running_var']:
>>>         print(buf.size())
- named_children() Iterator[Tuple[str, Module]]
Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- Yields:
(str, Module) – Tuple containing a name and child module
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
- named_modules(memo: Set[Module] | None = None, prefix: str = '', remove_duplicate: bool = True)
Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- Parameters:
memo – a memo to store the set of modules already added to the result
prefix – a prefix that will be added to the name of the module
remove_duplicate – whether to remove the duplicated module instances in the result or not
- Yields:
(str, Module) – Tuple of name and module
Note
Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
...     print(idx, '->', m)
0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
- named_parameters(prefix: str = '', recurse: bool = True, remove_duplicate: bool = True) Iterator[Tuple[str, Parameter]]
Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- Parameters:
prefix (str) – prefix to prepend to all parameter names.
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
remove_duplicate (bool, optional) – whether to remove the duplicated parameters in the result. Defaults to True.
- Yields:
(str, Parameter) – Tuple containing the name and parameter
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())
- parameters(recurse: bool = True) Iterator[Parameter]
Return an iterator over module parameters.
This is typically passed to an optimizer.
- Parameters:
recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
- Yields:
Parameter – module parameter
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
- predict(X_test) ndarray
Predict the probability of each sample in the given dataset.
- Parameters:
X_test (ndarray) – m X n target array (m is the No. of samples, n is the No. of features)
- Returns:
probability of each sample in the given dataset, an m X l array (m is the No. of samples, l is the No. of classes or tasks)
- Return type:
score (ndarray)
- register_backward_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor]) RemovableHandle
Register a backward hook on the module.
This function is deprecated in favor of register_full_backward_hook() and the behavior of this function will change in future versions.
- Returns:
a handle that can be used to remove the added hook by calling handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
- register_buffer(name: str, tensor: Tensor | None, persistent: bool = True) None
Add a buffer to the module.
This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module’s state_dict.
Buffers can be accessed as attributes using given names.
- Parameters:
name (str) – name of the buffer. The buffer can be accessed from this module using the given name
tensor (Tensor or None) – buffer to be registered. If None, then operations that run on buffers, such as cuda, are ignored. If None, the buffer is not included in the module’s state_dict.
persistent (bool) – whether the buffer is part of this module’s state_dict.
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> self.register_buffer('running_mean', torch.zeros(num_features))
- register_forward_hook(hook: Callable[[T, Tuple[Any, ...], Any], Any | None] | Callable[[T, Tuple[Any, ...], Dict[str, Any], Any], Any | None], *, prepend: bool = False, with_kwargs: bool = False, always_call: bool = False) RemovableHandle
Register a forward hook on the module.
The hook will be called every time after forward() has computed an output.
If with_kwargs is False or not specified, the input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks and only to the forward. The hook can modify the output. It can modify the input inplace but it will not have effect on forward since this is called after forward() is called. The hook should have the following signature:
hook(module, args, output) -> None or modified output
If with_kwargs is True, the forward hook will be passed the kwargs given to the forward function and be expected to return the output possibly modified. The hook should have the following signature:
hook(module, args, kwargs, output) -> None or modified output
- Parameters:
hook (Callable) – The user defined hook to be registered.
prepend (bool) – If True, the provided hook will be fired before all existing forward hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing forward hooks on this torch.nn.modules.Module. Note that global forward hooks registered with register_module_forward_hook() will fire before all hooks registered by this method. Default: False
with_kwargs (bool) – If True, the hook will be passed the kwargs given to the forward function. Default: False
always_call (bool) – If True the hook will be run regardless of whether an exception is raised while calling the Module. Default: False
- Returns:
a handle that can be used to remove the added hook by calling handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
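For illustration, a hook matching the hook(module, args, output) signature described above:
import torch
from torch import nn

def shape_hook(module, args, output):
    print(type(module).__name__, "->", tuple(output.shape))

layer = nn.Linear(3, 5)
handle = layer.register_forward_hook(shape_hook)
layer(torch.randn(2, 3))  # prints: Linear -> (2, 5)
handle.remove()           # detach the hook once it is no longer needed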
- register_forward_pre_hook(hook: Callable[[T, Tuple[Any, ...]], Any | None] | Callable[[T, Tuple[Any, ...], Dict[str, Any]], Tuple[Any, Dict[str, Any]] | None], *, prepend: bool = False, with_kwargs: bool = False) RemovableHandle
Register a forward pre-hook on the module.
The hook will be called every time before forward() is invoked.
If with_kwargs is false or not specified, the input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks and only to the forward. The hook can modify the input. The user can either return a tuple or a single modified value in the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple). The hook should have the following signature:
hook(module, args) -> None or modified input
If with_kwargs is true, the forward pre-hook will be passed the kwargs given to the forward function. And if the hook modifies the input, both the args and kwargs should be returned. The hook should have the following signature:
hook(module, args, kwargs) -> None or a tuple of modified input and kwargs
- Parameters:
hook (Callable) – The user defined hook to be registered.
prepend (bool) – If true, the provided hook will be fired before all existing forward_pre hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing forward_pre hooks on this torch.nn.modules.Module. Note that global forward_pre hooks registered with register_module_forward_pre_hook() will fire before all hooks registered by this method. Default: False
with_kwargs (bool) – If true, the hook will be passed the kwargs given to the forward function. Default: False
- Returns:
a handle that can be used to remove the added hook by calling handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
- register_full_backward_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor], prepend: bool = False) RemovableHandle
Register a backward hook on the module.
The hook will be called every time the gradients with respect to a module are computed, i.e. the hook will execute if and only if the gradients with respect to module outputs are computed. The hook should have the following signature:
hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module’s forward function.
Warning
Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error.
- Parameters:
hook (Callable) – The user-defined hook to be registered.
prepend (bool) – If true, the provided hook will be fired before all existing backward hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing backward hooks on this torch.nn.modules.Module. Note that global backward hooks registered with register_module_full_backward_hook() will fire before all hooks registered by this method.
- Returns:
a handle that can be used to remove the added hook by calling handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
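A minimal sketch of the hook(module, grad_input, grad_output) signature in action:
import torch
from torch import nn

def grad_hook(module, grad_input, grad_output):
    # grad_output is a tuple of gradients w.r.t. the module outputs
    print("grad_output shape:", tuple(grad_output[0].shape))

layer = nn.Linear(3, 1)
handle = layer.register_full_backward_hook(grad_hook)
layer(torch.randn(2, 3)).sum().backward()  # prints: grad_output shape: (2, 1)
handle.remove()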
- register_full_backward_pre_hook(hook: Callable[[Module, Tuple[Tensor, ...] | Tensor], None | Tuple[Tensor, ...] | Tensor], prepend: bool = False) RemovableHandle
Register a backward pre-hook on the module.
The hook will be called every time the gradients for the module are computed. The hook should have the following signature:
hook(module, grad_output) -> tuple[Tensor] or None
The grad_output is a tuple. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the output that will be used in place of grad_output in subsequent computations. Entries in grad_output will be None for all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module’s forward function.
Warning
Modifying inputs inplace is not allowed when using backward hooks and will raise an error.
- Parameters:
hook (Callable) – The user-defined hook to be registered.
prepend (bool) – If true, the provided hook will be fired before all existing backward_pre hooks on this torch.nn.modules.Module. Otherwise, the provided hook will be fired after all existing backward_pre hooks on this torch.nn.modules.Module. Note that global backward_pre hooks registered with register_module_full_backward_pre_hook() will fire before all hooks registered by this method.
- Returns:
a handle that can be used to remove the added hook by calling handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
- register_load_state_dict_post_hook(hook)
Register a post hook to be run after the module’s load_state_dict is called.
It should have the following signature:
hook(module, incompatible_keys) -> None
The module argument is the current module that this hook is registered on, and the incompatible_keys argument is a NamedTuple consisting of attributes missing_keys and unexpected_keys. missing_keys is a list of str containing the missing keys and unexpected_keys is a list of str containing the unexpected keys.
The given incompatible_keys can be modified inplace if needed.
Note that the checks performed when calling load_state_dict() with strict=True are affected by modifications the hook makes to missing_keys or unexpected_keys, as expected. Additions to either set of keys will result in an error being thrown when strict=True, and clearing out both missing and unexpected keys will avoid an error.
- Returns:
a handle that can be used to remove the added hook by calling handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
- register_module(name: str, module: Module | None) None
Alias for add_module().
- register_parameter(name: str, param: Parameter | None) None
Add a parameter to the module.
The parameter can be accessed as an attribute using given name.
- Parameters:
name (str) – name of the parameter. The parameter can be accessed from this module using the given name
param (Parameter or None) – parameter to be added to the module. If None, then operations that run on parameters, such as cuda, are ignored. If None, the parameter is not included in the module’s state_dict.
- register_state_dict_pre_hook(hook)
Register a pre-hook for the state_dict() method.
These hooks will be called with arguments: self, prefix, and keep_vars before calling state_dict on self. The registered hooks can be used to perform pre-processing before the state_dict call is made.
- requires_grad_(requires_grad: bool = True) T
Change if autograd should record operations on parameters in this module.
This method sets the parameters’ requires_grad attributes in-place.
This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training).
See locally-disable-grad-doc for a comparison between .requires_grad_() and several similar mechanisms that may be confused with it.
- Parameters:
requires_grad (bool) – whether autograd should record operations on parameters in this module. Default: True.
- Returns:
self
- Return type:
Module
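For illustration, freezing a submodule for fine-tuning (the backbone/head split is hypothetical):
from torch import nn

backbone = nn.Linear(8, 8)
head = nn.Linear(8, 2)
backbone.requires_grad_(False)  # freeze the backbone; only the head will train
trainable = [p for p in backbone.parameters() if p.requires_grad]
print(len(trainable))           # 0 -- the backbone no longer records gradients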
- set_extra_state(state: Any) None
Set extra state contained in the loaded state_dict.
This function is called from load_state_dict() to handle any extra state found within the state_dict. Implement this function and a corresponding get_extra_state() for your module if you need to store extra state within its state_dict.
- Parameters:
state (dict) – Extra state from the state_dict
- set_params(**params) STFullyConnected [source]
Set parameters and re-initialize model.
- Parameters:
**params – parameters to be set
- Returns:
the model itself
- Return type:
self (STFullyConnected)
- share_memory() T
See torch.Tensor.share_memory_().
- state_dict(*args, destination=None, prefix='', keep_vars=False)
Return a dictionary containing references to the whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.
Note
The returned object is a shallow copy. It contains references to the module’s parameters and buffers.
Warning
Currently state_dict() also accepts positional arguments for destination, prefix and keep_vars in order. However, this is being deprecated and keyword arguments will be enforced in future releases.
Warning
Please avoid the use of argument destination as it is not designed for end-users.
- Parameters:
destination (dict, optional) – If provided, the state of the module will be updated into the dict and the same object is returned. Otherwise, an OrderedDict will be created and returned. Default: None.
prefix (str, optional) – a prefix added to parameter and buffer names to compose the keys in state_dict. Default: ''.
keep_vars (bool, optional) – by default the Tensors returned in the state dict are detached from autograd. If it’s set to True, detaching will not be performed. Default: False.
- Returns:
a dictionary containing a whole state of the module
- Return type:
dict
Example:
>>> # xdoctest: +SKIP("undefined vars")
>>> module.state_dict().keys()
['bias', 'weight']
- to(*args, **kwargs)
Move and/or cast the parameters and buffers.
This can be called as
- to(device=None, dtype=None, non_blocking=False)
- to(dtype, non_blocking=False)
- to(tensor, non_blocking=False)
- to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.
See below for examples.
Note
This method modifies the module in-place.
- Parameters:
device (torch.device) – the desired device of the parameters and buffers in this module
dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module
tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument)
- Returns:
self
- Return type:
Module
Examples:
>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
- to_empty(*, device: int | str | device | None, recurse: bool = True) T
Move the parameters and buffers to the specified device without copying storage.
- Parameters:
device (torch.device) – The desired device of the parameters and buffers in this module.
recurse (bool) – Whether parameters and buffers of submodules should be recursively moved to the specified device.
- Returns:
self
- Return type:
Module
- train(mode: bool = True) T
Set the module in training mode.
This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
- Parameters:
mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.
- Returns:
self
- Return type:
Module
- type(dst_type: dtype | str) T
Casts all parameters and buffers to dst_type.
Note
This method modifies the module in-place.
- Parameters:
dst_type (type or string) – the desired type
- Returns:
self
- Return type:
Module
- xpu(device: int | device | None = None) T
Move all model parameters and buffers to the XPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized.
Note
This method modifies the module in-place.
- Parameters:
device (int, optional) – if specified, all parameters will be copied to that device
- Returns:
self
- Return type:
Module
qsprpred.extra.gpu.models.tests module
- class qsprpred.extra.gpu.models.tests.BenchMarkTest(methodName='runTest')[source]
Bases:
BenchMarkTestCase
Test GPU models with benchmarks.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- classmethod addClassCleanup(function, /, *args, **kwargs)
Same as addCleanup, except the cleanup items are called even if setUpClass fails (unlike tearDownClass).
- addCleanup(function, /, *args, **kwargs)
Add a function, with arguments, to be called when the test is completed. Functions added are called on a LIFO basis and are called after tearDown on test failure or success.
Cleanup items are called even if setUp fails (unlike tearDown).
- addTypeEqualityFunc(typeobj, function)
Add a type specific assertEqual style function to compare a type.
This method is for use by TestCase subclasses that need to register their own type equality functions to provide nicer error messages.
- Parameters:
typeobj – The data type to call this function on when both values are of the same type in assertEqual().
function – The callable taking two arguments and an optional msg= argument that raises self.failureException with a useful error message when the two arguments are not equal.
- assertAlmostEqual(first, second, places=None, msg=None, delta=None)
Fail if the two objects are unequal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is more than the given delta.
Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).
If the two objects compare equal then they will automatically compare almost equal.
- assertCountEqual(first, second, msg=None)
Asserts that two iterables have the same elements, the same number of times, without regard to order.
self.assertEqual(Counter(list(first)), Counter(list(second)))
- Example:
[0, 1, 1] and [1, 0, 1] compare equal.
[0, 0, 1] and [0, 1] compare unequal.
- assertDictEqual(d1, d2, msg=None)
- assertEqual(first, second, msg=None)
Fail if the two objects are unequal as determined by the ‘==’ operator.
- assertFalse(expr, msg=None)
Check that the expression is false.
- assertGreater(a, b, msg=None)
Just like self.assertTrue(a > b), but with a nicer default message.
- assertGreaterEqual(a, b, msg=None)
Just like self.assertTrue(a >= b), but with a nicer default message.
- assertIn(member, container, msg=None)
Just like self.assertTrue(a in b), but with a nicer default message.
- assertIs(expr1, expr2, msg=None)
Just like self.assertTrue(a is b), but with a nicer default message.
- assertIsInstance(obj, cls, msg=None)
Same as self.assertTrue(isinstance(obj, cls)), with a nicer default message.
- assertIsNone(obj, msg=None)
Same as self.assertTrue(obj is None), with a nicer default message.
- assertIsNot(expr1, expr2, msg=None)
Just like self.assertTrue(a is not b), but with a nicer default message.
- assertIsNotNone(obj, msg=None)
Included for symmetry with assertIsNone.
- assertLess(a, b, msg=None)
Just like self.assertTrue(a < b), but with a nicer default message.
- assertLessEqual(a, b, msg=None)
Just like self.assertTrue(a <= b), but with a nicer default message.
- assertListEqual(list1, list2, msg=None)
A list-specific equality assertion.
- Parameters:
list1 – The first list to compare.
list2 – The second list to compare.
msg – Optional message to use on failure instead of a list of differences.
- assertLogs(logger=None, level=None)
Fail unless a log message of level level or higher is emitted on logger_name or its children. If omitted, level defaults to INFO and logger defaults to the root logger.
This method must be used as a context manager, and will yield a recording object with two attributes: output and records. At the end of the context manager, the output attribute will be a list of the matching formatted log messages and the records attribute will be a list of the corresponding LogRecord objects.
Example:
with self.assertLogs('foo', level='INFO') as cm:
    logging.getLogger('foo').info('first message')
    logging.getLogger('foo.bar').error('second message')
self.assertEqual(cm.output, ['INFO:foo:first message',
                             'ERROR:foo.bar:second message'])
- assertMultiLineEqual(first, second, msg=None)
Assert that two multi-line strings are equal.
- assertNoLogs(logger=None, level=None)
Fail unless no log messages of level level or higher are emitted on logger_name or its children.
This method must be used as a context manager.
- assertNotAlmostEqual(first, second, places=None, msg=None, delta=None)
Fail if the two objects are equal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is less than the given delta.
Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).
Objects that are equal automatically fail.
- assertNotEqual(first, second, msg=None)
Fail if the two objects are equal as determined by the ‘!=’ operator.
- assertNotIn(member, container, msg=None)
Just like self.assertTrue(a not in b), but with a nicer default message.
- assertNotIsInstance(obj, cls, msg=None)
Included for symmetry with assertIsInstance.
- assertNotRegex(text, unexpected_regex, msg=None)
Fail the test if the text matches the regular expression.
- assertRaises(expected_exception, *args, **kwargs)
Fail unless an exception of class expected_exception is raised by the callable when invoked with specified positional and keyword arguments. If a different type of exception is raised, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception.
If called with the callable and arguments omitted, will return a context object used like this:
with self.assertRaises(SomeException):
    do_something()
An optional keyword argument ‘msg’ can be provided when assertRaises is used as a context object.
The context manager keeps a reference to the exception as the ‘exception’ attribute. This allows you to inspect the exception after the assertion:
with self.assertRaises(SomeException) as cm:
    do_something()
the_exception = cm.exception
self.assertEqual(the_exception.error_code, 3)
- assertRaisesRegex(expected_exception, expected_regex, *args, **kwargs)
Asserts that the message in a raised exception matches a regex.
- Parameters:
expected_exception – Exception class expected to be raised.
expected_regex – Regex (re.Pattern object or string) expected to be found in error message.
args – Function to be called and extra positional args.
kwargs – Extra kwargs.
msg – Optional message used in case of failure. Can only be used when assertRaisesRegex is used as a context manager.
- assertRegex(text, expected_regex, msg=None)
Fail the test unless the text matches the regular expression.
- assertSequenceEqual(seq1, seq2, msg=None, seq_type=None)
An equality assertion for ordered sequences (like lists and tuples).
For the purposes of this function, a valid ordered sequence type is one which can be indexed, has a length, and has an equality operator.
- Parameters:
seq1 – The first sequence to compare.
seq2 – The second sequence to compare.
seq_type – The expected datatype of the sequences, or None if no datatype should be enforced.
msg – Optional message to use on failure instead of a list of differences.
- assertSetEqual(set1, set2, msg=None)
A set-specific equality assertion.
- Parameters:
set1 – The first set to compare.
set2 – The second set to compare.
msg – Optional message to use on failure instead of a list of differences.
assertSetEqual uses ducktyping to support different types of sets, and is optimized for sets specifically (parameters must support a difference method).
- assertTrue(expr, msg=None)
Check that the expression is true.
- assertTupleEqual(tuple1, tuple2, msg=None)
A tuple-specific equality assertion.
- Parameters:
tuple1 – The first tuple to compare.
tuple2 – The second tuple to compare.
msg – Optional message to use on failure instead of a list of differences.
- assertWarns(expected_warning, *args, **kwargs)
Fail unless a warning of class warnClass is triggered by the callable when invoked with specified positional and keyword arguments. If a different type of warning is triggered, it will not be handled: depending on the other warning filtering rules in effect, it might be silenced, printed out, or raised as an exception.
If called with the callable and arguments omitted, will return a context object used like this:
with self.assertWarns(SomeWarning):
    do_something()
An optional keyword argument ‘msg’ can be provided when assertWarns is used as a context object.
The context manager keeps a reference to the first matching warning as the ‘warning’ attribute; similarly, the ‘filename’ and ‘lineno’ attributes give you information about the line of Python code from which the warning was triggered. This allows you to inspect the warning after the assertion:
with self.assertWarns(SomeWarning) as cm:
    do_something()
the_warning = cm.warning
self.assertEqual(the_warning.some_attribute, 147)
- assertWarnsRegex(expected_warning, expected_regex, *args, **kwargs)
Asserts that the message in a triggered warning matches a regexp. Basic functioning is similar to assertWarns() with the addition that only warnings whose messages also match the regular expression are considered successful matches.
- Parameters:
expected_warning – Warning class expected to be triggered.
expected_regex – Regex (re.Pattern object or string) expected to be found in error message.
args – Function to be called and extra positional args.
kwargs – Extra kwargs.
msg – Optional message used in case of failure. Can only be used when assertWarnsRegex is used as a context manager.
- checkRunResults(results)
- checkSettings()
- clearGenerated()
Remove the directories that are used for testing.
- countTestCases()
- createLargeMultitaskDataSet(name='QSPRDataset_multi_test', target_props=[{'name': 'HBD', 'task': <TargetTasks.MULTICLASS: 'MULTICLASS'>, 'th': [-1, 1, 2, 100]}, {'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], preparation_settings=None, random_state=42)
Create a large dataset for testing purposes.
- Parameters:
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
preparation_settings (dict) – dictionary containing preparation settings
random_state (int) – random state to use for splitting and shuffling
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- createLargeTestDataSet(name='QSPRDataset_test_large', target_props=[{'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], preparation_settings=None, random_state=42, n_jobs=1, chunk_size=None)
Create a large dataset for testing purposes.
- Parameters:
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
random_state (int) – random state to use for splitting and shuffling
preparation_settings (dict) – dictionary containing preparation settings
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- createSmallTestDataSet(name='QSPRDataset_test_small', target_props=[{'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], preparation_settings=None, random_state=42)
Create a small dataset for testing purposes.
- Parameters:
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
random_state (int) – random state to use for splitting and shuffling
preparation_settings (dict) – dictionary containing preparation settings
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- createTestDataSetFromFrame(df, name='QSPRDataset_test', target_props=[{'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], random_state=None, prep=None, n_jobs=1, chunk_size=None)
Create a dataset for testing purposes from the given data frame.
- Parameters:
df (pd.DataFrame) – data frame containing the dataset
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
random_state (int) – random state to use for splitting and shuffling
prep (dict) – dictionary containing preparation settings
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- debug()
Run the test without collecting errors in a TestResult
- defaultTestResult()
- classmethod doClassCleanups()
Execute all class cleanup functions. Normally called for you after tearDownClass.
- doCleanups()
Execute all cleanup functions. Normally called for you after tearDown.
- classmethod enterClassContext(cm)
Same as enterContext, but class-wide.
- enterContext(cm)
Enters the supplied context manager.
If successful, also adds its __exit__ method as a cleanup function and returns the result of the __enter__ method.
- fail(msg=None)
Fail immediately, with the given message.
- failureException
alias of
AssertionError
- classmethod getAllDescriptors()
Return a list of (ideally) all available descriptor sets. For now they need to be added manually to the list below.
TODO: would be nice to create the list automatically by implementing a descriptor set registry that would hold all installed descriptor sets.
- getBigDF()
Get a large data frame for testing purposes.
- Returns:
a pandas.DataFrame containing the dataset
- Return type:
pd.DataFrame
- classmethod getDataPrepGrid()
Return a list of many possible combinations of descriptor calculators, splits, feature standardizers, feature filters and data filters. Again, this is not exhaustive, but should cover a lot of cases.
- Returns:
a generator that yields tuples of all possible combinations as stated above, each tuple is defined as: (descriptor_calculator, split, feature_standardizer, feature_filters, data_filters)
- Return type:
grid
- classmethod getDefaultCalculatorCombo()
Makes a list of default descriptor calculators that can be used in tests. It creates a calculator with only Morgan fingerprints and one with only RDKit descriptors, but also one with both, to test behaviour with multiple descriptor sets. Override this method if you want to test with other descriptor sets and calculator combinations.
- static getDefaultPrep()
Return a dictionary with default preparation settings.
- classmethod getPrepCombos()
Return a list of all possible preparation combinations as generated by getDataPrepGrid as well as their names. The generated list can be used to parameterize tests with the given named combinations.
- getSmallDF()
Get a small data frame for testing purposes.
- Returns:
a pandas.DataFrame containing the dataset
- Return type:
pd.DataFrame
- id()
- longMessage = True
- maxDiff = 640
- run(result=None)
- setUp()
Hook method for setting up the test fixture before exercising it.
- classmethod setUpClass()
Hook method for setting up class fixture before running tests in the class.
- setUpPaths()
Create the directories that are used for testing.
- shortDescription()
Returns a one-line description of the test, or None if no description has been provided.
The default implementation of this method returns the first line of the specified test method’s docstring.
- skipTest(reason)
Skip this test.
- subTest(msg=<object object>, **params)
Return a context manager that will return the enclosed block of code in a subtest identified by the optional message and keyword parameters. A failure in the subtest marks the test case as failed but resumes execution at the end of the enclosed block, allowing further test code to be executed.
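A standalone illustration of subTest (a generic unittest sketch, not tied to qsprpred):
import unittest

class EvenNumbersTest(unittest.TestCase):
    def test_even(self):
        for n in (0, 2, 4):
            with self.subTest(n=n):  # each n is reported as its own subtest
                self.assertEqual(n % 2, 0)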
- tearDown()
Remove all files and directories that are used for testing.
- classmethod tearDownClass()
Hook method for deconstructing the class fixture after running all tests in the class.
- validate_split(dataset)
Check if the split has the data it should have after splitting.
- class qsprpred.extra.gpu.models.tests.ChemPropTest(methodName='runTest')[source]
Bases:
ModelDataSetsPathMixIn, ModelCheckMixIn, TestCase
This class holds the tests for the DNNModel class.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- classmethod addClassCleanup(function, /, *args, **kwargs)
Same as addCleanup, except the cleanup items are called even if setUpClass fails (unlike tearDownClass).
- addCleanup(function, /, *args, **kwargs)
Add a function, with arguments, to be called when the test is completed. Functions added are called on a LIFO basis and are called after tearDown on test failure or success.
Cleanup items are called even if setUp fails (unlike tearDown).
- addTypeEqualityFunc(typeobj, function)
Add a type specific assertEqual style function to compare a type.
This method is for use by TestCase subclasses that need to register their own type equality functions to provide nicer error messages.
- Parameters:
typeobj – The data type to call this function on when both values are of the same type in assertEqual().
function – The callable taking two arguments and an optional msg= argument that raises self.failureException with a useful error message when the two arguments are not equal.
- assertAlmostEqual(first, second, places=None, msg=None, delta=None)
Fail if the two objects are unequal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is more than the given delta.
Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).
If the two objects compare equal then they will automatically compare almost equal.
- assertCountEqual(first, second, msg=None)
Asserts that two iterables have the same elements, the same number of times, without regard to order.
self.assertEqual(Counter(list(first)), Counter(list(second)))
- Example:
[0, 1, 1] and [1, 0, 1] compare equal.
[0, 0, 1] and [0, 1] compare unequal.
- assertDictEqual(d1, d2, msg=None)
- assertEqual(first, second, msg=None)
Fail if the two objects are unequal as determined by the ‘==’ operator.
- assertFalse(expr, msg=None)
Check that the expression is false.
- assertGreater(a, b, msg=None)
Just like self.assertTrue(a > b), but with a nicer default message.
- assertGreaterEqual(a, b, msg=None)
Just like self.assertTrue(a >= b), but with a nicer default message.
- assertIn(member, container, msg=None)
Just like self.assertTrue(a in b), but with a nicer default message.
- assertIs(expr1, expr2, msg=None)
Just like self.assertTrue(a is b), but with a nicer default message.
- assertIsInstance(obj, cls, msg=None)
Same as self.assertTrue(isinstance(obj, cls)), with a nicer default message.
- assertIsNone(obj, msg=None)
Same as self.assertTrue(obj is None), with a nicer default message.
- assertIsNot(expr1, expr2, msg=None)
Just like self.assertTrue(a is not b), but with a nicer default message.
- assertIsNotNone(obj, msg=None)
Included for symmetry with assertIsNone.
- assertLess(a, b, msg=None)
Just like self.assertTrue(a < b), but with a nicer default message.
- assertLessEqual(a, b, msg=None)
Just like self.assertTrue(a <= b), but with a nicer default message.
- assertListEqual(list1, list2, msg=None)
A list-specific equality assertion.
- Parameters:
list1 – The first list to compare.
list2 – The second list to compare.
msg – Optional message to use on failure instead of a list of differences.
- assertLogs(logger=None, level=None)
Fail unless a log message of level level or higher is emitted on logger_name or its children. If omitted, level defaults to INFO and logger defaults to the root logger.
This method must be used as a context manager, and will yield a recording object with two attributes: output and records. At the end of the context manager, the output attribute will be a list of the matching formatted log messages and the records attribute will be a list of the corresponding LogRecord objects.
Example:
with self.assertLogs('foo', level='INFO') as cm:
    logging.getLogger('foo').info('first message')
    logging.getLogger('foo.bar').error('second message')
self.assertEqual(cm.output, ['INFO:foo:first message',
                             'ERROR:foo.bar:second message'])
- assertMultiLineEqual(first, second, msg=None)
Assert that two multi-line strings are equal.
- assertNoLogs(logger=None, level=None)
Fail unless no log messages of level level or higher are emitted on logger_name or its children.
This method must be used as a context manager.
- assertNotAlmostEqual(first, second, places=None, msg=None, delta=None)
Fail if the two objects are equal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is less than the given delta.
Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).
Objects that are equal automatically fail.
- assertNotEqual(first, second, msg=None)
Fail if the two objects are equal as determined by the ‘!=’ operator.
- assertNotIn(member, container, msg=None)
Just like self.assertTrue(a not in b), but with a nicer default message.
- assertNotIsInstance(obj, cls, msg=None)
Included for symmetry with assertIsInstance.
- assertNotRegex(text, unexpected_regex, msg=None)
Fail the test if the text matches the regular expression.
- assertRaises(expected_exception, *args, **kwargs)
Fail unless an exception of class expected_exception is raised by the callable when invoked with specified positional and keyword arguments. If a different type of exception is raised, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception.
If called with the callable and arguments omitted, will return a context object used like this:
with self.assertRaises(SomeException):
    do_something()
An optional keyword argument ‘msg’ can be provided when assertRaises is used as a context object.
The context manager keeps a reference to the exception as the ‘exception’ attribute. This allows you to inspect the exception after the assertion:
with self.assertRaises(SomeException) as cm:
    do_something()
the_exception = cm.exception
self.assertEqual(the_exception.error_code, 3)
- assertRaisesRegex(expected_exception, expected_regex, *args, **kwargs)
Asserts that the message in a raised exception matches a regex.
- Parameters:
expected_exception – Exception class expected to be raised.
expected_regex – Regex (re.Pattern object or string) expected to be found in error message.
args – Function to be called and extra positional args.
kwargs – Extra kwargs.
msg – Optional message used in case of failure. Can only be used when assertRaisesRegex is used as a context manager.
- assertRegex(text, expected_regex, msg=None)
Fail the test unless the text matches the regular expression.
- assertSequenceEqual(seq1, seq2, msg=None, seq_type=None)
An equality assertion for ordered sequences (like lists and tuples).
For the purposes of this function, a valid ordered sequence type is one which can be indexed, has a length, and has an equality operator.
- Parameters:
seq1 – The first sequence to compare.
seq2 – The second sequence to compare.
seq_type – The expected datatype of the sequences, or None if no datatype should be enforced.
msg – Optional message to use on failure instead of a list of differences.
- assertSetEqual(set1, set2, msg=None)
A set-specific equality assertion.
- Parameters:
set1 – The first set to compare.
set2 – The second set to compare.
msg – Optional message to use on failure instead of a list of differences.
assertSetEqual uses ducktyping to support different types of sets, and is optimized for sets specifically (parameters must support a difference method).
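Two quick illustrations of that ducktyping (standard unittest behaviour, added here for clarity):
self.assertSetEqual({1, 2, 3}, {3, 2, 1})       # passes: element order is irrelevant
self.assertSetEqual(frozenset({1, 2}), {1, 2})  # passes: frozenset also supports difference()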
- assertTrue(expr, msg=None)
Check that the expression is true.
- assertTupleEqual(tuple1, tuple2, msg=None)
A tuple-specific equality assertion.
- Parameters:
tuple1 – The first tuple to compare.
tuple2 – The second tuple to compare.
msg – Optional message to use on failure instead of a list of differences.
- assertWarns(expected_warning, *args, **kwargs)
Fail unless a warning of class expected_warning is triggered by the callable when invoked with specified positional and keyword arguments. If a different type of warning is triggered, it will not be handled: depending on the other warning filtering rules in effect, it might be silenced, printed out, or raised as an exception.
If called with the callable and arguments omitted, will return a context object used like this:
with self.assertWarns(SomeWarning):
    do_something()
An optional keyword argument ‘msg’ can be provided when assertWarns is used as a context object.
The context manager keeps a reference to the first matching warning as the ‘warning’ attribute; similarly, the ‘filename’ and ‘lineno’ attributes give you information about the line of Python code from which the warning was triggered. This allows you to inspect the warning after the assertion:
with self.assertWarns(SomeWarning) as cm:
    do_something()
the_warning = cm.warning
self.assertEqual(the_warning.some_attribute, 147)
- assertWarnsRegex(expected_warning, expected_regex, *args, **kwargs)
Asserts that the message in a triggered warning matches a regex. Basic functioning is similar to assertWarns() with the addition that only warnings whose messages also match the regular expression are considered successful matches.
- Parameters:
expected_warning – Warning class expected to be triggered.
expected_regex – Regex (re.Pattern object or string) expected to be found in error message.
args – Function to be called and extra positional args.
kwargs – Extra kwargs.
msg – Optional message used in case of failure. Can only be used when assertWarnsRegex is used as a context manager.
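A minimal sketch of the context-manager form; legacy_function is a hypothetical callable assumed to emit a matching DeprecationWarning:
with self.assertWarnsRegex(DeprecationWarning, r'deprecated'):
    legacy_function()  # hypothetical: must trigger a DeprecationWarning whose message matches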
- checkOptimization(model: QSPRModel, ds: QSPRDataset, optimizer: HyperparameterOptimization)
- clearGenerated()
Remove the directories that are used for testing.
- countTestCases()
- createLargeMultitaskDataSet(name='QSPRDataset_multi_test', target_props=[{'name': 'HBD', 'task': <TargetTasks.MULTICLASS: 'MULTICLASS'>, 'th': [-1, 1, 2, 100]}, {'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], preparation_settings=None, random_state=42)
Create a large dataset for testing purposes.
- Parameters:
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
preparation_settings (dict) – dictionary containing preparation settings
random_state (int) – random state to use for splitting and shuffling
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
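A minimal usage sketch inside a test method of this suite (the name and settings are illustrative; getDefaultPrep is only one possible choice of preparation settings):
ds = self.createLargeMultitaskDataSet(
    name='my_multitask_test',
    preparation_settings=self.getDefaultPrep(),
    random_state=42,
)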
- createLargeTestDataSet(name='QSPRDataset_test_large', target_props=[{'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], preparation_settings=None, random_state=42, n_jobs=1, chunk_size=None)
Create a large dataset for testing purposes.
- Parameters:
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
random_state (int) – random state to use for splitting and shuffling
preparation_settings (dict) – dictionary containing preparation settings
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- createSmallTestDataSet(name='QSPRDataset_test_small', target_props=[{'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], preparation_settings=None, random_state=42)
Create a small dataset for testing purposes.
- Parameters:
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
random_state (int) – random state to use for splitting and shuffling
preparation_settings (dict) – dictionary containing preparation settings
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- createTestDataSetFromFrame(df, name='QSPRDataset_test', target_props=[{'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], random_state=None, prep=None, n_jobs=1, chunk_size=None)
Create a dataset for testing purposes from the given data frame.
- Parameters:
df (pd.DataFrame) – data frame containing the dataset
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
random_state (int) – random state to use for splitting and shuffling
prep (dict) – dictionary containing preparation settings
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- debug()
Run the test without collecting errors in a TestResult
- defaultTestResult()
- classmethod doClassCleanups()
Execute all class cleanup functions. Normally called for you after tearDownClass.
- doCleanups()
Execute all cleanup functions. Normally called for you after tearDown.
- classmethod enterClassContext(cm)
Same as enterContext, but class-wide.
- enterContext(cm)
Enters the supplied context manager.
If successful, also adds its __exit__ method as a cleanup function and returns the result of the __enter__ method.
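A short sketch (enterContext was added to unittest.TestCase in Python 3.11; the temporary directory is only an illustration):
import tempfile

def setUp(self):
    # the registered __exit__ cleanup removes the directory automatically
    self.workdir = self.enterContext(tempfile.TemporaryDirectory())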
- fail(msg=None)
Fail immediately, with the given message.
- failureException
alias of AssertionError
- fitTest(model: QSPRModel, ds: QSPRDataset)
Test model fitting, optimization and evaluation.
- Parameters:
model (QSPRModel) – The model to test.
ds (QSPRDataset) – The dataset to use for testing.
- classmethod getAllDescriptors()
Return a list of (ideally) all available descriptor sets. For now they need to be added manually to the list below.
TODO: would be nice to create the list automatically by implementing a descriptor set registry that would hold all installed descriptor sets.
- getBigDF()
Get a large data frame for testing purposes.
- Returns:
a pandas.DataFrame containing the dataset
- Return type:
pd.DataFrame
- classmethod getDataPrepGrid()
Return a list of many possible combinations of descriptor calculators, splits, feature standardizers, feature filters and data filters. Again, this is not exhaustive, but should cover a lot of cases.
- Returns:
a generator that yields tuples of all possible combinations as stated above, each tuple is defined as: (descriptor_calculator, split, feature_standardizer, feature_filters, data_filters)
- Return type:
grid
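A sketch of consuming the grid; the unpacking names below simply mirror the tuple layout stated above:
for combo in self.getDataPrepGrid():
    descriptor_calculator, split, feature_standardizer, feature_filters, data_filters = combo
    # each tuple can parameterize one data preparation test case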
- classmethod getDefaultCalculatorCombo()
Makes a list of default descriptor calculators that can be used in tests. It creates a calculator with only morgan fingerprints and rdkit descriptors, but also one with them both to test behaviour with multiple descriptor sets. Override this method if you want to test with other descriptor sets and calculator combinations.
- static getDefaultPrep()
Return a dictionary with default preparation settings.
- getModel(name: str, parameters: dict | None = None, random_state: int | None = None)[source]
Initialize model with data set.
- Parameters:
name – Name of the model.
parameters – Parameters to use.
random_state – Random seed to use for random operations.
- classmethod getPrepCombos()
Return a list of all possible preparation combinations as generated by getDataPrepGrid as well as their names. The generated list can be used to parameterize tests with the given named combinations.
- getSmallDF()
Get a small data frame for testing purposes.
- Returns:
a pandas.DataFrame containing the dataset
- Return type:
pd.DataFrame
- property gridFile
Return the path to the grid file with test search spaces for hyperparameter optimization.
- id()
- longMessage = True
- maxDiff = 640
- predictorTest(model: QSPRModel, dataset: QSPRDataset, comparison_model: QSPRModel | None = None, expect_equal_result=True, **pred_kwargs)
Test model predictions.
Checks if the shape of the predictions is as expected and if the predictions of the predictMols function are consistent with the predictions of the predict/predictProba functions. Also checks if the predictions of the model are the same as the predictions of the comparison model if given.
- Parameters:
model (QSPRModel) – The model to make predictions with.
dataset (QSPRDataset) – The dataset to make predictions for.
comparison_model (QSPRModel) – another model to compare the predictions with.
expect_equal_result (bool) – Whether the expected result should be equal or not equal to the predictions of the comparison model.
**pred_kwargs – Extra keyword arguments to pass to the predictor’s predictMols method.
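Taken together with fitTest above, a typical flow might look as follows (a sketch; the model name and dataset are arbitrary choices):
# fit a model on a small test dataset, then check its predictions
ds = self.createSmallTestDataSet()
model = self.getModel('TestDNN', random_state=42)
self.fitTest(model, ds)
self.predictorTest(model, ds)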
- run(result=None)
- classmethod setUpClass()
Hook method for setting up class fixture before running tests in the class.
- setUpPaths()
Set up the test environment.
- shortDescription()
Returns a one-line description of the test, or None if no description has been provided.
The default implementation of this method returns the first line of the specified test method’s docstring.
- skipTest(reason)
Skip this test.
- subTest(msg=<object object>, **params)
Return a context manager that will return the enclosed block of code in a subtest identified by the optional message and keyword parameters. A failure in the subtest marks the test case as failed but resumes execution at the end of the enclosed block, allowing further test code to be executed.
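For example (a standard unittest pattern, not specific to this package):
for i in range(3):
    with self.subTest(i=i):
        # a failure here is recorded for this i, but the loop keeps running
        self.assertEqual(i % 2, 0)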
- tearDown()
Remove all files and directories that are used for testing.
- classmethod tearDownClass()
Hook method for deconstructing the class fixture after running all tests in the class.
- testMultiTaskmodel = None
- testMultiTaskmodel_0_MoleculeModel_MULTITASK_REGRESSION(**kw)
Test the DNNModel model in one configuration [with _=’MoleculeModel_MULTITASK_REGRESSION’, task=<ModelTasks.MULTITASK_REGRESSION: ‘MULTITASK_REGRESSION’>, alg_name=’MoleculeModel’, random_state=[None]].
- Parameters:
task – Task to test.
alg_name – Name of the algorithm.
- testMultiTaskmodel_1_MoleculeModel_MULTITASK_SINGLECLASS_None(**kw)
Test the DNNModel model in one configuration [with _=’MoleculeModel_MULTITASK_SINGLECLASS_None’, task=<ModelTasks.MULTITASK_SINGLECLASS: ‘MULTITASK_SINGLECLASS’>, alg_name=’MoleculeModel’, random_state=[None]].
- Parameters:
task – Task to test.
alg_name – Name of the algorithm.
- testMultiTaskmodel_2_MoleculeModel_MULTITASK_SINGLECLASS_1_42(**kw)
Test the DNNModel model in one configuration [with _=’MoleculeModel_MULTITASK_SINGLECLASS_1_42’, task=<ModelTasks.MULTITASK_SINGLECLASS: ‘MULTITASK_SINGLECLASS’>, alg_name=’MoleculeModel’, random_state=[1, 42]].
- Parameters:
task – Task to test.
alg_name – Name of the algorithm.
- testMultiTaskmodel_3_MoleculeModel_MULTITASK_SINGLECLASS_42_42(**kw)
Test the DNNModel model in one configuration [with _=’MoleculeModel_MULTITASK_SINGLECLASS_42_42’, task=<ModelTasks.MULTITASK_SINGLECLASS: ‘MULTITASK_SINGLECLASS’>, alg_name=’MoleculeModel’, random_state=[42, 42]].
- Parameters:
task – Task to test.
alg_name – Name of the algorithm.
- testSingleTaskModel = None
- testSingleTaskModel_0_MoleculeModel_SINGLECLASS(**kw)
Test the DNNModel model in one configuration [with _=’MoleculeModel_SINGLECLASS’, task=<TargetTasks.SINGLECLASS: ‘SINGLECLASS’>, alg_name=’MoleculeModel’, th=[6.5], random_state=[None]].
- Parameters:
task – Task to test.
alg_name – Name of the algorithm.
th – Threshold to use for classification models.
random_state – Seed to be used for random operations.
- testSingleTaskModel_1_MoleculeModel_MULTICLASS(**kw)
Test the DNNModel model in one configuration [with _=’MoleculeModel_MULTICLASS’, task=<TargetTasks.MULTICLASS: ‘MULTICLASS’>, alg_name=’MoleculeModel’, th=[0, 1, 10, 1100], random_state=[None]].
- Parameters:
task – Task to test.
alg_name – Name of the algorithm.
th – Threshold to use for classification models.
random_state – Seed to be used for random operations.
- testSingleTaskModel_2_MoleculeModel_REGRESSION_None(**kw)
Test the DNNModel model in one configuration [with _=’MoleculeModel_REGRESSION_None’, task=<TargetTasks.REGRESSION: ‘REGRESSION’>, alg_name=’MoleculeModel’, th=None, random_state=[None]].
- Parameters:
task – Task to test.
alg_name – Name of the algorithm.
th – Threshold to use for classification models.
random_state – Seed to be used for random operations.
- testSingleTaskModel_3_MoleculeModel_REGRESSION_1_42(**kw)
Test the DNNModel model in one configuration [with _=’MoleculeModel_REGRESSION_1_42’, task=<TargetTasks.REGRESSION: ‘REGRESSION’>, alg_name=’MoleculeModel’, th=None, random_state=[1, 42]].
- Parameters:
task – Task to test.
alg_name – Name of the algorithm.
th – Threshold to use for classification models.
random_state – Seed to be used for random operations.
- testSingleTaskModel_4_MoleculeModel_REGRESSION_42_42(**kw)
Test the DNNModel model in one configuration [with _=’MoleculeModel_REGRESSION_42_42’, task=<TargetTasks.REGRESSION: ‘REGRESSION’>, alg_name=’MoleculeModel’, th=None, random_state=[42, 42]].
- Parameters:
task – Task to test.
alg_name – Name of the algorithm.
th – Threshold to use for classification models.
random_state – Seed to be used for random operations.
- validate_split(dataset)
Check if the split has the data it should have after splitting.
- class qsprpred.extra.gpu.models.tests.NeuralNet(methodName='runTest')[source]
Bases: ModelDataSetsPathMixIn, ModelCheckMixIn, TestCase
This class holds the tests for the DNNModel class.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- classmethod addClassCleanup(function, /, *args, **kwargs)
Same as addCleanup, except the cleanup items are called even if setUpClass fails (unlike tearDownClass).
- addCleanup(function, /, *args, **kwargs)
Add a function, with arguments, to be called when the test is completed. Functions added are called on a LIFO basis and are called after tearDown on test failure or success.
Cleanup items are called even if setUp fails (unlike tearDown).
- addTypeEqualityFunc(typeobj, function)
Add a type specific assertEqual style function to compare a type.
This method is for use by TestCase subclasses that need to register their own type equality functions to provide nicer error messages.
- Parameters:
typeobj – The data type to call this function on when both values are of the same type in assertEqual().
function – The callable taking two arguments and an optional msg= argument that raises self.failureException with a useful error message when the two arguments are not equal.
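A brief sketch; Point and assertPointEqual are hypothetical names used only for illustration:
def assertPointEqual(self, first, second, msg=None):
    # raise self.failureException with a useful message on mismatch
    if (first.x, first.y) != (second.x, second.y):
        raise self.failureException(msg or f'{first} != {second}')

def setUp(self):
    self.addTypeEqualityFunc(Point, self.assertPointEqual)  # hypothetical Point type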
- assertAlmostEqual(first, second, places=None, msg=None, delta=None)
Fail if the two objects are unequal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is more than the given delta.
Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).
If the two objects compare equal then they will automatically compare almost equal.
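Two quick illustrations of the places and delta behaviour:
self.assertAlmostEqual(0.1 + 0.2, 0.3)        # passes: the difference rounds to 0 at 7 places
self.assertAlmostEqual(1.0, 1.05, delta=0.1)  # passes: the difference is not more than 0.1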
- assertCountEqual(first, second, msg=None)
Asserts that two iterables have the same elements, the same number of times, without regard to order.
- Equivalent to: self.assertEqual(Counter(list(first)), Counter(list(second)))
- Example:
[0, 1, 1] and [1, 0, 1] compare equal.
[0, 0, 1] and [0, 1] compare unequal.
- assertDictEqual(d1, d2, msg=None)
- assertEqual(first, second, msg=None)
Fail if the two objects are unequal as determined by the ‘==’ operator.
- assertFalse(expr, msg=None)
Check that the expression is false.
- assertGreater(a, b, msg=None)
Just like self.assertTrue(a > b), but with a nicer default message.
- assertGreaterEqual(a, b, msg=None)
Just like self.assertTrue(a >= b), but with a nicer default message.
- assertIn(member, container, msg=None)
Just like self.assertTrue(a in b), but with a nicer default message.
- assertIs(expr1, expr2, msg=None)
Just like self.assertTrue(a is b), but with a nicer default message.
- assertIsInstance(obj, cls, msg=None)
Same as self.assertTrue(isinstance(obj, cls)), with a nicer default message.
- assertIsNone(obj, msg=None)
Same as self.assertTrue(obj is None), with a nicer default message.
- assertIsNot(expr1, expr2, msg=None)
Just like self.assertTrue(a is not b), but with a nicer default message.
- assertIsNotNone(obj, msg=None)
Included for symmetry with assertIsNone.
- assertLess(a, b, msg=None)
Just like self.assertTrue(a < b), but with a nicer default message.
- assertLessEqual(a, b, msg=None)
Just like self.assertTrue(a <= b), but with a nicer default message.
- assertListEqual(list1, list2, msg=None)
A list-specific equality assertion.
- Parameters:
list1 – The first list to compare.
list2 – The second list to compare.
msg – Optional message to use on failure instead of a list of differences.
- assertLogs(logger=None, level=None)
Fail unless a log message of level level or higher is emitted on logger_name or its children. If omitted, level defaults to INFO and logger defaults to the root logger.
This method must be used as a context manager, and will yield a recording object with two attributes: output and records. At the end of the context manager, the output attribute will be a list of the matching formatted log messages and the records attribute will be a list of the corresponding LogRecord objects.
Example:
with self.assertLogs('foo', level='INFO') as cm:
    logging.getLogger('foo').info('first message')
    logging.getLogger('foo.bar').error('second message')
self.assertEqual(cm.output, ['INFO:foo:first message',
                             'ERROR:foo.bar:second message'])
- assertMultiLineEqual(first, second, msg=None)
Assert that two multi-line strings are equal.
- assertNoLogs(logger=None, level=None)
Fail unless no log messages of level level or higher are emitted on logger_name or its children.
This method must be used as a context manager.
- assertNotAlmostEqual(first, second, places=None, msg=None, delta=None)
Fail if the two objects are equal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is less than the given delta.
Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).
Objects that are equal automatically fail.
- assertNotEqual(first, second, msg=None)
Fail if the two objects are equal as determined by the ‘!=’ operator.
- assertNotIn(member, container, msg=None)
Just like self.assertTrue(a not in b), but with a nicer default message.
- assertNotIsInstance(obj, cls, msg=None)
Included for symmetry with assertIsInstance.
- assertNotRegex(text, unexpected_regex, msg=None)
Fail the test if the text matches the regular expression.
- assertRaises(expected_exception, *args, **kwargs)
Fail unless an exception of class expected_exception is raised by the callable when invoked with specified positional and keyword arguments. If a different type of exception is raised, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception.
If called with the callable and arguments omitted, will return a context object used like this:
with self.assertRaises(SomeException):
    do_something()
An optional keyword argument ‘msg’ can be provided when assertRaises is used as a context object.
The context manager keeps a reference to the exception as the ‘exception’ attribute. This allows you to inspect the exception after the assertion:
with self.assertRaises(SomeException) as cm:
    do_something()
the_exception = cm.exception
self.assertEqual(the_exception.error_code, 3)
- assertRaisesRegex(expected_exception, expected_regex, *args, **kwargs)
Asserts that the message in a raised exception matches a regex.
- Parameters:
expected_exception – Exception class expected to be raised.
expected_regex – Regex (re.Pattern object or string) expected to be found in error message.
args – Function to be called and extra positional args.
kwargs – Extra kwargs.
msg – Optional message used in case of failure. Can only be used when assertRaisesRegex is used as a context manager.
- assertRegex(text, expected_regex, msg=None)
Fail the test unless the text matches the regular expression.
- assertSequenceEqual(seq1, seq2, msg=None, seq_type=None)
An equality assertion for ordered sequences (like lists and tuples).
For the purposes of this function, a valid ordered sequence type is one which can be indexed, has a length, and has an equality operator.
- Parameters:
seq1 – The first sequence to compare.
seq2 – The second sequence to compare.
seq_type – The expected datatype of the sequences, or None if no datatype should be enforced.
msg – Optional message to use on failure instead of a list of differences.
- assertSetEqual(set1, set2, msg=None)
A set-specific equality assertion.
- Parameters:
set1 – The first set to compare.
set2 – The second set to compare.
msg – Optional message to use on failure instead of a list of differences.
assertSetEqual uses ducktyping to support different types of sets, and is optimized for sets specifically (parameters must support a difference method).
- assertTrue(expr, msg=None)
Check that the expression is true.
- assertTupleEqual(tuple1, tuple2, msg=None)
A tuple-specific equality assertion.
- Parameters:
tuple1 – The first tuple to compare.
tuple2 – The second tuple to compare.
msg – Optional message to use on failure instead of a list of differences.
- assertWarns(expected_warning, *args, **kwargs)
Fail unless a warning of class expected_warning is triggered by the callable when invoked with specified positional and keyword arguments. If a different type of warning is triggered, it will not be handled: depending on the other warning filtering rules in effect, it might be silenced, printed out, or raised as an exception.
If called with the callable and arguments omitted, will return a context object used like this:
with self.assertWarns(SomeWarning):
    do_something()
An optional keyword argument ‘msg’ can be provided when assertWarns is used as a context object.
The context manager keeps a reference to the first matching warning as the ‘warning’ attribute; similarly, the ‘filename’ and ‘lineno’ attributes give you information about the line of Python code from which the warning was triggered. This allows you to inspect the warning after the assertion:
with self.assertWarns(SomeWarning) as cm:
    do_something()
the_warning = cm.warning
self.assertEqual(the_warning.some_attribute, 147)
- assertWarnsRegex(expected_warning, expected_regex, *args, **kwargs)
Asserts that the message in a triggered warning matches a regex. Basic functioning is similar to assertWarns() with the addition that only warnings whose messages also match the regular expression are considered successful matches.
- Parameters:
expected_warning – Warning class expected to be triggered.
expected_regex – Regex (re.Pattern object or string) expected to be found in error message.
args – Function to be called and extra positional args.
kwargs – Extra kwargs.
msg – Optional message used in case of failure. Can only be used when assertWarnsRegex is used as a context manager.
- checkOptimization(model: QSPRModel, ds: QSPRDataset, optimizer: HyperparameterOptimization)
- clearGenerated()
Remove the directories that are used for testing.
- countTestCases()
- createLargeMultitaskDataSet(name='QSPRDataset_multi_test', target_props=[{'name': 'HBD', 'task': <TargetTasks.MULTICLASS: 'MULTICLASS'>, 'th': [-1, 1, 2, 100]}, {'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], preparation_settings=None, random_state=42)
Create a large dataset for testing purposes.
- Parameters:
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
preparation_settings (dict) – dictionary containing preparation settings
random_state (int) – random state to use for splitting and shuffling
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- createLargeTestDataSet(name='QSPRDataset_test_large', target_props=[{'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], preparation_settings=None, random_state=42, n_jobs=1, chunk_size=None)
Create a large dataset for testing purposes.
- Parameters:
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
random_state (int) – random state to use for splitting and shuffling
preparation_settings (dict) – dictionary containing preparation settings
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- createSmallTestDataSet(name='QSPRDataset_test_small', target_props=[{'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], preparation_settings=None, random_state=42)
Create a small dataset for testing purposes.
- Parameters:
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
random_state (int) – random state to use for splitting and shuffling
preparation_settings (dict) – dictionary containing preparation settings
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- createTestDataSetFromFrame(df, name='QSPRDataset_test', target_props=[{'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], random_state=None, prep=None, n_jobs=1, chunk_size=None)
Create a dataset for testing purposes from the given data frame.
- Parameters:
df (pd.DataFrame) – data frame containing the dataset
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
random_state (int) – random state to use for splitting and shuffling
prep (dict) – dictionary containing preparation settings
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- debug()
Run the test without collecting errors in a TestResult
- defaultTestResult()
- classmethod doClassCleanups()
Execute all class cleanup functions. Normally called for you after tearDownClass.
- doCleanups()
Execute all cleanup functions. Normally called for you after tearDown.
- classmethod enterClassContext(cm)
Same as enterContext, but class-wide.
- enterContext(cm)
Enters the supplied context manager.
If successful, also adds its __exit__ method as a cleanup function and returns the result of the __enter__ method.
- fail(msg=None)
Fail immediately, with the given message.
- failureException
alias of AssertionError
- fitTest(model: QSPRModel, ds: QSPRDataset)
Test model fitting, optimization and evaluation.
- Parameters:
model (QSPRModel) – The model to test.
ds (QSPRDataset) – The dataset to use for testing.
- classmethod getAllDescriptors()
Return a list of (ideally) all available descriptor sets. For now they need to be added manually to the list below.
TODO: would be nice to create the list automatically by implementing a descriptor set registry that would hold all installed descriptor sets.
- getBigDF()
Get a large data frame for testing purposes.
- Returns:
a pandas.DataFrame containing the dataset
- Return type:
pd.DataFrame
- classmethod getDataPrepGrid()
Return a list of many possible combinations of descriptor calculators, splits, feature standardizers, feature filters and data filters. Again, this is not exhaustive, but should cover a lot of cases.
- Returns:
a generator that yields tuples of all possible combinations as stated above, each tuple is defined as: (descriptor_calculator, split, feature_standardizer, feature_filters, data_filters)
- Return type:
grid
- classmethod getDefaultCalculatorCombo()
Makes a list of default descriptor calculators that can be used in tests. It creates a calculator with only morgan fingerprints and rdkit descriptors, but also one with them both to test behaviour with multiple descriptor sets. Override this method if you want to test with other descriptor sets and calculator combinations.
- static getDefaultPrep()
Return a dictionary with default preparation settings.
- getModel(name: str, alg: Type | None = None, parameters: dict | None = None, random_state: int | None = None)[source]
Initialize model with data set.
- Parameters:
name – Name of the model.
alg – Algorithm to use.
parameters – Parameters to use.
random_state – Random seed to use for random operations.
- classmethod getPrepCombos()
Return a list of all possible preparation combinations as generated by getDataPrepGrid as well as their names. The generated list can be used to parameterize tests with the given named combinations.
- getSmallDF()
Get a small data frame for testing purposes.
- Returns:
a pandas.DataFrame containing the dataset
- Return type:
pd.DataFrame
- property gridFile
Return the path to the grid file with test search spaces for hyperparameter optimization.
- id()
- longMessage = True
- maxDiff = 640
- predictorTest(model: QSPRModel, dataset: QSPRDataset, comparison_model: QSPRModel | None = None, expect_equal_result=True, **pred_kwargs)
Test model predictions.
Checks if the shape of the predictions is as expected and if the predictions of the predictMols function are consistent with the predictions of the predict/predictProba functions. Also checks if the predictions of the model are the same as the predictions of the comparison model if given.
- Parameters:
model (QSPRModel) – The model to make predictions with.
dataset (QSPRDataset) – The dataset to make predictions for.
comparison_model (QSPRModel) – another model to compare the predictions with.
expect_equal_result (bool) – Whether the expected result should be equal or not equal to the predictions of the comparison model.
**pred_kwargs – Extra keyword arguments to pass to the predictor’s predictMols method.
- run(result=None)
- classmethod setUpClass()
Hook method for setting up class fixture before running tests in the class.
- setUpPaths()
Set up the test environment.
- shortDescription()
Returns a one-line description of the test, or None if no description has been provided.
The default implementation of this method returns the first line of the specified test method’s docstring.
- skipTest(reason)
Skip this test.
- subTest(msg=<object object>, **params)
Return a context manager that will return the enclosed block of code in a subtest identified by the optional message and keyword parameters. A failure in the subtest marks the test case as failed but resumes execution at the end of the enclosed block, allowing further test code to be executed.
- tearDown()
Remove all files and directories that are used for testing.
- classmethod tearDownClass()
Hook method for deconstructing the class fixture after running all tests in the class.
- testSingleTaskModel = None
- testSingleTaskModel_0_STFullyConnected_SINGLECLASS(**kw)
Test the DNNModel model in one configuration [with _=’STFullyConnected_SINGLECLASS’, task=<TargetTasks.SINGLECLASS: ‘SINGLECLASS’>, alg_name=’STFullyConnected’, alg=<class ‘qsprpred.extra.gpu.model…eural_network.STFullyConnected’>, th=[6.5], random_state=[None]].
- Parameters:
task – Task to test.
alg_name – Name of the algorithm.
alg – Algorithm to use.
th – Threshold to use for classification models.
random_state – Seed to be used for random operations.
- testSingleTaskModel_1_STFullyConnected_MULTICLASS(**kw)
Test the DNNModel model in one configuration [with _=’STFullyConnected_MULTICLASS’, task=<TargetTasks.MULTICLASS: ‘MULTICLASS’>, alg_name=’STFullyConnected’, alg=<class ‘qsprpred.extra.gpu.model…eural_network.STFullyConnected’>, th=[0, 1, 10, 1100], random_state=[None]].
- Parameters:
task – Task to test.
alg_name – Name of the algorithm.
alg – Algorithm to use.
th – Threshold to use for classification models.
random_state – Seed to be used for random operations.
- testSingleTaskModel_2_STFullyConnected_REGRESSION_None(**kw)
Test the DNNModel model in one configuration [with _=’STFullyConnected_REGRESSION_None’, task=<TargetTasks.REGRESSION: ‘REGRESSION’>, alg_name=’STFullyConnected’, alg=<class ‘qsprpred.extra.gpu.model…eural_network.STFullyConnected’>, th=None, random_state=[None]].
- Parameters:
task – Task to test.
alg_name – Name of the algorithm.
alg – Algorithm to use.
th – Threshold to use for classification models.
random_state – Seed to be used for random operations.
- testSingleTaskModel_3_STFullyConnected_REGRESSION_1_42(**kw)
Test the DNNModel model in one configuration [with _=’STFullyConnected_REGRESSION_1_42’, task=<TargetTasks.REGRESSION: ‘REGRESSION’>, alg_name=’STFullyConnected’, alg=<class ‘qsprpred.extra.gpu.model…eural_network.STFullyConnected’>, th=None, random_state=[1, 42]].
- Parameters:
task – Task to test.
alg_name – Name of the algorithm.
alg – Algorithm to use.
th – Threshold to use for classification models.
random_state – Seed to be used for random operations.
- testSingleTaskModel_4_STFullyConnected_REGRESSION_42_42(**kw)
Test the DNNModel model in one configuration [with _=’STFullyConnected_REGRESSION_42_42’, task=<TargetTasks.REGRESSION: ‘REGRESSION’>, alg_name=’STFullyConnected’, alg=<class ‘qsprpred.extra.gpu.model…eural_network.STFullyConnected’>, th=None, random_state=[42, 42]].
- Parameters:
task – Task to test.
alg_name – Name of the algorithm.
alg – Algorithm to use.
th – Threshold to use for classification models.
random_state – Seed to be used for random operations.
- validate_split(dataset)
Check if the split has the data it should have after splitting.
- class qsprpred.extra.gpu.models.tests.TestNNMonitoring(methodName='runTest')[source]
Bases: MonitorsCheckMixIn, TestCase
This class holds the tests for the monitoring classes.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- classmethod addClassCleanup(function, /, *args, **kwargs)
Same as addCleanup, except the cleanup items are called even if setUpClass fails (unlike tearDownClass).
- addCleanup(function, /, *args, **kwargs)
Add a function, with arguments, to be called when the test is completed. Functions added are called on a LIFO basis and are called after tearDown on test failure or success.
Cleanup items are called even if setUp fails (unlike tearDown).
- addTypeEqualityFunc(typeobj, function)
Add a type specific assertEqual style function to compare a type.
This method is for use by TestCase subclasses that need to register their own type equality functions to provide nicer error messages.
- Parameters:
typeobj – The data type to call this function on when both values are of the same type in assertEqual().
function – The callable taking two arguments and an optional msg= argument that raises self.failureException with a useful error message when the two arguments are not equal.
- assertAlmostEqual(first, second, places=None, msg=None, delta=None)
Fail if the two objects are unequal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is more than the given delta.
Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).
If the two objects compare equal then they will automatically compare almost equal.
- assertCountEqual(first, second, msg=None)
Asserts that two iterables have the same elements, the same number of times, without regard to order.
- Equivalent to: self.assertEqual(Counter(list(first)), Counter(list(second)))
- Example:
[0, 1, 1] and [1, 0, 1] compare equal.
[0, 0, 1] and [0, 1] compare unequal.
- assertDictEqual(d1, d2, msg=None)
- assertEqual(first, second, msg=None)
Fail if the two objects are unequal as determined by the ‘==’ operator.
- assertFalse(expr, msg=None)
Check that the expression is false.
- assertGreater(a, b, msg=None)
Just like self.assertTrue(a > b), but with a nicer default message.
- assertGreaterEqual(a, b, msg=None)
Just like self.assertTrue(a >= b), but with a nicer default message.
- assertIn(member, container, msg=None)
Just like self.assertTrue(a in b), but with a nicer default message.
- assertIs(expr1, expr2, msg=None)
Just like self.assertTrue(a is b), but with a nicer default message.
- assertIsInstance(obj, cls, msg=None)
Same as self.assertTrue(isinstance(obj, cls)), with a nicer default message.
- assertIsNone(obj, msg=None)
Same as self.assertTrue(obj is None), with a nicer default message.
- assertIsNot(expr1, expr2, msg=None)
Just like self.assertTrue(a is not b), but with a nicer default message.
- assertIsNotNone(obj, msg=None)
Included for symmetry with assertIsNone.
- assertLess(a, b, msg=None)
Just like self.assertTrue(a < b), but with a nicer default message.
- assertLessEqual(a, b, msg=None)
Just like self.assertTrue(a <= b), but with a nicer default message.
- assertListEqual(list1, list2, msg=None)
A list-specific equality assertion.
- Parameters:
list1 – The first list to compare.
list2 – The second list to compare.
msg – Optional message to use on failure instead of a list of differences.
- assertLogs(logger=None, level=None)
Fail unless a log message of level level or higher is emitted on logger_name or its children. If omitted, level defaults to INFO and logger defaults to the root logger.
This method must be used as a context manager, and will yield a recording object with two attributes: output and records. At the end of the context manager, the output attribute will be a list of the matching formatted log messages and the records attribute will be a list of the corresponding LogRecord objects.
Example:
with self.assertLogs('foo', level='INFO') as cm:
    logging.getLogger('foo').info('first message')
    logging.getLogger('foo.bar').error('second message')
self.assertEqual(cm.output, ['INFO:foo:first message',
                             'ERROR:foo.bar:second message'])
- assertMultiLineEqual(first, second, msg=None)
Assert that two multi-line strings are equal.
- assertNoLogs(logger=None, level=None)
Fail unless no log messages of level level or higher are emitted on logger_name or its children.
This method must be used as a context manager.
- assertNotAlmostEqual(first, second, places=None, msg=None, delta=None)
Fail if the two objects are equal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is less than the given delta.
Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit).
Objects that are equal automatically fail.
- assertNotEqual(first, second, msg=None)
Fail if the two objects are equal as determined by the ‘!=’ operator.
- assertNotIn(member, container, msg=None)
Just like self.assertTrue(a not in b), but with a nicer default message.
- assertNotIsInstance(obj, cls, msg=None)
Included for symmetry with assertIsInstance.
- assertNotRegex(text, unexpected_regex, msg=None)
Fail the test if the text matches the regular expression.
- assertRaises(expected_exception, *args, **kwargs)
Fail unless an exception of class expected_exception is raised by the callable when invoked with specified positional and keyword arguments. If a different type of exception is raised, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception.
If called with the callable and arguments omitted, will return a context object used like this:
with self.assertRaises(SomeException):
    do_something()
An optional keyword argument ‘msg’ can be provided when assertRaises is used as a context object.
The context manager keeps a reference to the exception as the ‘exception’ attribute. This allows you to inspect the exception after the assertion:
with self.assertRaises(SomeException) as cm:
    do_something()
the_exception = cm.exception
self.assertEqual(the_exception.error_code, 3)
- assertRaisesRegex(expected_exception, expected_regex, *args, **kwargs)
Asserts that the message in a raised exception matches a regex.
- Parameters:
expected_exception – Exception class expected to be raised.
expected_regex – Regex (re.Pattern object or string) expected to be found in error message.
args – Function to be called and extra positional args.
kwargs – Extra kwargs.
msg – Optional message used in case of failure. Can only be used when assertRaisesRegex is used as a context manager.
- assertRegex(text, expected_regex, msg=None)
Fail the test unless the text matches the regular expression.
- assertSequenceEqual(seq1, seq2, msg=None, seq_type=None)
An equality assertion for ordered sequences (like lists and tuples).
For the purposes of this function, a valid ordered sequence type is one which can be indexed, has a length, and has an equality operator.
- Parameters:
seq1 – The first sequence to compare.
seq2 – The second sequence to compare.
seq_type – The expected datatype of the sequences, or None if no datatype should be enforced.
msg – Optional message to use on failure instead of a list of differences.
- assertSetEqual(set1, set2, msg=None)
A set-specific equality assertion.
- Parameters:
set1 – The first set to compare.
set2 – The second set to compare.
msg – Optional message to use on failure instead of a list of differences.
assertSetEqual uses ducktyping to support different types of sets, and is optimized for sets specifically (parameters must support a difference method).
- assertTrue(expr, msg=None)
Check that the expression is true.
- assertTupleEqual(tuple1, tuple2, msg=None)
A tuple-specific equality assertion.
- Parameters:
tuple1 – The first tuple to compare.
tuple2 – The second tuple to compare.
msg – Optional message to use on failure instead of a list of differences.
- assertWarns(expected_warning, *args, **kwargs)
Fail unless a warning of class expected_warning is triggered by the callable when invoked with specified positional and keyword arguments. If a different type of warning is triggered, it will not be handled: depending on the other warning filtering rules in effect, it might be silenced, printed out, or raised as an exception.
If called with the callable and arguments omitted, will return a context object used like this:
with self.assertWarns(SomeWarning):
    do_something()
An optional keyword argument ‘msg’ can be provided when assertWarns is used as a context object.
The context manager keeps a reference to the first matching warning as the ‘warning’ attribute; similarly, the ‘filename’ and ‘lineno’ attributes give you information about the line of Python code from which the warning was triggered. This allows you to inspect the warning after the assertion:
with self.assertWarns(SomeWarning) as cm:
    do_something()
the_warning = cm.warning
self.assertEqual(the_warning.some_attribute, 147)
- assertWarnsRegex(expected_warning, expected_regex, *args, **kwargs)
Asserts that the message in a triggered warning matches a regex. Basic functioning is similar to assertWarns() with the addition that only warnings whose messages also match the regular expression are considered successful matches.
- Parameters:
expected_warning – Warning class expected to be triggered.
expected_regex – Regex (re.Pattern object or string) expected to be found in error message.
args – Function to be called and extra positional args.
kwargs – Extra kwargs.
msg – Optional message used in case of failure. Can only be used when assertWarnsRegex is used as a context manager.
- baseMonitorTest(monitor: BaseMonitor, monitor_type: Literal['hyperparam', 'crossval', 'test', 'fit'], neural_net: bool)
Test the base monitor.
- checkOptimization(model: QSPRModel, ds: QSPRDataset, optimizer: HyperparameterOptimization)
- clearGenerated()
Remove the directories that are used for testing.
- countTestCases()
- createLargeMultitaskDataSet(name='QSPRDataset_multi_test', target_props=[{'name': 'HBD', 'task': <TargetTasks.MULTICLASS: 'MULTICLASS'>, 'th': [-1, 1, 2, 100]}, {'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], preparation_settings=None, random_state=42)
Create a large dataset for testing purposes.
- Parameters:
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
preparation_settings (dict) – dictionary containing preparation settings
random_state (int) – random state to use for splitting and shuffling
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- createLargeTestDataSet(name='QSPRDataset_test_large', target_props=[{'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], preparation_settings=None, random_state=42, n_jobs=1, chunk_size=None)
Create a large dataset for testing purposes.
- Parameters:
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
random_state (int) – random state to use for splitting and shuffling
preparation_settings (dict) – dictionary containing preparation settings
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- createSmallTestDataSet(name='QSPRDataset_test_small', target_props=[{'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], preparation_settings=None, random_state=42)
Create a small dataset for testing purposes.
- Parameters:
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
random_state (int) – random state to use for splitting and shuffling
preparation_settings (dict) – dictionary containing preparation settings
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- createTestDataSetFromFrame(df, name='QSPRDataset_test', target_props=[{'name': 'CL', 'task': <TargetTasks.REGRESSION: 'REGRESSION'>}], random_state=None, prep=None, n_jobs=1, chunk_size=None)
Create a dataset for testing purposes from the given data frame.
- Parameters:
df (pd.DataFrame) – data frame containing the dataset
name (str) – name of the dataset
target_props (List of dicts or TargetProperty) – list of target properties
random_state (int) – random state to use for splitting and shuffling
prep (dict) – dictionary containing preparation settings
- Returns:
a QSPRDataset object
- Return type:
QSPRDataset
- debug()
Run the test without collecting errors in a TestResult
- defaultTestResult()
- classmethod doClassCleanups()
Execute all class cleanup functions. Normally called for you after tearDownClass.
- doCleanups()
Execute all cleanup functions. Normally called for you after tearDown.
- classmethod enterClassContext(cm)
Same as enterContext, but class-wide.
- enterContext(cm)
Enters the supplied context manager.
If successful, also adds its __exit__ method as a cleanup function and returns the result of the __enter__ method.
- fail(msg=None)
Fail immediately, with the given message.
- failureException
alias of AssertionError
- fileMonitorTest(monitor: FileMonitor, monitor_type: Literal['hyperparam', 'crossval', 'test', 'fit'], neural_net: bool)
Test if the correct files are generated.
- fitTest(model: QSPRModel, ds: QSPRDataset)
Test model fitting, optimization and evaluation.
- Parameters:
model (QSPRModel) – The model to test.
ds (QSPRDataset) – The dataset to use for testing.
- classmethod getAllDescriptors()
Return a list of (ideally) all available descriptor sets. For now they need to be added manually to the list below.
TODO: would be nice to create the list automatically by implementing a descriptor set registry that would hold all installed descriptor sets.
- getBigDF()
Get a large data frame for testing purposes.
- Returns:
a pandas.DataFrame containing the dataset
- Return type:
pd.DataFrame
- classmethod getDataPrepGrid()
Return a list of many possible combinations of descriptor calculators, splits, feature standardizers, feature filters and data filters. Again, this is not exhaustive, but should cover a lot of cases.
- Returns:
a generator that yields tuples of all possible combinations as stated above, each tuple is defined as: (descriptor_calculator, split, feature_standardizer, feature_filters, data_filters)
- Return type:
grid
- classmethod getDefaultCalculatorCombo()
Makes a list of default descriptor calculators that can be used in tests. It creates a calculator with only morgan fingerprints and rdkit descriptors, but also one with them both to test behaviour with multiple descriptor sets. Override this method if you want to test with other descriptor sets and calculator combinations.
- static getDefaultPrep()
Return a dictionary with default preparation settings.
- classmethod getPrepCombos()
Return a list of all possible preparation combinations as generated by getDataPrepGrid as well as their names. The generated list can be used to parameterize tests with the given named combinations.
- getSmallDF()
Get a small data frame for testing purposes.
- Returns:
a pandas.DataFrame containing the dataset
- Return type:
pd.DataFrame
- property gridFile
Return the path to the grid file with test search spaces for hyperparameter optimization.
- id()
- listMonitorTest(monitor: ListMonitor, monitor_type: Literal['hyperparam', 'crossval', 'test', 'fit'], neural_net: bool)
- longMessage = True
- maxDiff = 640
- predictorTest(model: QSPRModel, dataset: QSPRDataset, comparison_model: QSPRModel | None = None, expect_equal_result=True, **pred_kwargs)
Test model predictions.
Checks if the shape of the predictions is as expected and if the predictions of the predictMols function are consistent with the predictions of the predict/predictProba functions. Also checks if the predictions of the model are the same as the predictions of the comparison model if given.
- Parameters:
model (QSPRModel) – The model to make predictions with.
dataset (QSPRDataset) – The dataset to make predictions for.
comparison_model (QSPRModel) – another model to compare the predictions with.
expect_equal_result (bool) – Whether the expected result should be equal or not equal to the predictions of the comparison model.
**pred_kwargs – Extra keyword arguments to pass to the predictor’s predictMols method.
- run(result=None)
- runMonitorTest(model, data, monitor_type, test_method, neural_net, *args, **kwargs)
- classmethod setUpClass()
Hook method for setting up class fixture before running tests in the class.
- setUpPaths()
Set up the test environment.
- shortDescription()
Returns a one-line description of the test, or None if no description has been provided.
The default implementation of this method returns the first line of the specified test method’s docstring.
- skipTest(reason)
Skip this test.
- subTest(msg=<object object>, **params)
Return a context manager that will return the enclosed block of code in a subtest identified by the optional message and keyword parameters. A failure in the subtest marks the test case as failed but resumes execution at the end of the enclosed block, allowing further test code to be executed.
- tearDown()
Remove all files and directories that are used for testing.
- classmethod tearDownClass()
Hook method for deconstructing the class fixture after running all tests in the class.
- trainModelWithMonitoring(model: QSPRModel, ds: QSPRDataset, hyperparam_monitor: HyperparameterOptimizationMonitor, crossval_monitor: AssessorMonitor, test_monitor: AssessorMonitor, fit_monitor: FitMonitor) -> tuple[HyperparameterOptimizationMonitor, AssessorMonitor, AssessorMonitor, FitMonitor]
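A minimal calling sketch, assuming BaseMonitor from qsprpred.models.monitors can stand in for all four monitor roles (as the base-monitor tests above suggest) and that model and ds were prepared earlier in the test:
from qsprpred.models.monitors import BaseMonitor

# one monitor instance per monitored stage
hyperparam_monitor, crossval_monitor, test_monitor, fit_monitor = (
    BaseMonitor() for _ in range(4)
)
monitors = self.trainModelWithMonitoring(
    model, ds, hyperparam_monitor, crossval_monitor, test_monitor, fit_monitor
)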
- validate_split(dataset)
Check if the split has the data it should have after splitting.