This Jupyter Notebook describes a metabolomics data analysis and visualisation workflow for partial least squares regression (a.k.a. projection to latent structures) with a binary classification outcome.
This computational workflow is demonstrated using a previously published NMR dataset from Ganna et al. (2014) and Ganna et al. (2015). The study compared the plasma metabolomic profiles of men (n=485) and women (n=483) at age 70 in a large prospective epidemiological study in Uppsala, Sweden. For the purpose of this computational workflow, we compare only the males (Class=1) and females (Class=0) in a binary discriminant analysis. The deconvolved and annotated data from this study is deposited on MetaboLights and can be accessed directly via its Study ID: MTBLS90. The Excel file used in this workflow can be accessed via the following link: MTBLS90.xlsx.
This computational workflow requires a dataset to be in, or converted to, a previously described standardised Excel file format (Mendez et al. 2019). This format follows the Tidy Data framework (Wickham, 2014), where each row represents an observation (e.g. sample) and each column represents a variable (e.g. age or metabolite). Each Excel file (one per study) contains two sheets: a data sheet and a peak sheet. The data sheet contains the metabolite concentrations together with the metadata associated with each observation (requiring the inclusion of the columns: Idx, SampleID, and Class). The peak sheet contains the additional metadata that pertains to the metabolites in the data sheet (requiring the inclusion of the columns: Idx, Name, and Label). This standardised format allows for the efficient re-use of this computational workflow.
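For illustration only, the cell below sketches how a generic dataset could be written into this two-sheet format with pandas. The file name, metabolite values, and labels here are hypothetical, not taken from MTBLS90:

import pandas as pd

# Hypothetical Data sheet: one row per sample, metadata plus metabolite columns
data = pd.DataFrame({'Idx': [1, 2],
                     'SampleID': ['S1', 'S2'],
                     'Class': [1, 0],
                     'M1': [0.42, 0.39],
                     'M2': [1.10, 0.97]})
# Hypothetical Peak sheet: one row per metabolite column in the Data sheet
peak = pd.DataFrame({'Idx': [1, 2],
                     'Name': ['M1', 'M2'],
                     'Label': ['Metabolite A', 'Metabolite B']})
# Write both sheets into a single standardised Excel file
with pd.ExcelWriter('example_study.xlsx') as writer:
    data.to_excel(writer, sheet_name='Data', index=False)
    peak.to_excel(writer, sheet_name='Peak', index=False)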
Packages provide additional tools that extend beyond the basic functionality of the Python programming language. Prior to usage, packages need to be imported into the Jupyter environment. The following packages need to be imported for this computational workflow:
numpy: A standard package primarily used for the manipulation of arrays
pandas: A standard package primarily used for the manipulation of data tables
cimcb: A library of helpful functions and tools provided by the authors
sklearn: A standard package with tools for machine learning
train_test_split: A method to split arrays into training and test subsets

import numpy as np
import pandas as pd
import cimcb as cb
from sklearn.model_selection import train_test_split
print('All packages successfully loaded')
To reproduce the figures in the research article, set the random seed to 8. This seed is used in the train_test_split method to reproducibly split the source data into a training and test set.
seed_split: Seed the generator using an integer value e.g. 42 (default = None; no seed set)
seed_split = 8
# seed_split = None
This CIMCB helper function load_dataXL() loads the Data and Peak sheets from an Excel file. In addition, this helper function checks that the data is in the standardised Excel file format described above. After the initial checks, load_dataXL() outputs two individual Pandas DataFrames (i.e. tables) called DataTable and PeakTable from the Excel file MTBLS90.xlsx. This helper function requires values for the following parameters:
filename: The name of the Excel file (.xlsx file)
DataSheet: The name of the data sheet in the file
PeakSheet: The name of the peak sheet in the file

home = 'data/'
file = 'MTBLS90.xlsx'
DataTable,PeakTable = cb.utils.load_dataXL(filename=home + file, DataSheet='Data', PeakSheet='Peak')
Prior to performing any statistical or machine learning modelling, it is best practice to assess the quality of the data and remove metabolites that lack reproducible measurements (Broadhurst et al. 2018).
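The MTBLS90 file used here is already cleaned, so no filtering is applied in this notebook. Purely as a hypothetical sketch of such a step: if the Data sheet flagged pooled QC samples in a 'SampleType' column (MTBLS90 does not contain one), metabolites with a QC relative standard deviation (RSD) above 20% could be removed as follows:

# Hypothetical QC filter (assumes a 'SampleType' column; not part of this workflow)
QC = DataTable[DataTable.SampleType == 'QC']            # pooled QC samples only
rsd = 100 * QC[PeakTable['Name']].std() / QC[PeakTable['Name']].mean()
PeakTableClean = PeakTable[rsd.values < 20]             # keep reproducible peaks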
The following steps are needed to extract the X matrix of metabolite concentrations and the associated Y vector of classification labels (“M”=1 and “F”=0):
Create a subset of DataTable called DataTable2, with samples only in Class 1 (“M”) or Class 0 (“F”)
Create PeakList to hold the names (M1...Mn) of the metabolites to be used
Using PeakList, extract all corresponding columns (i.e. metabolite data) from DataTable2, and place them in matrix X
Set Y to a list (or 1D array) of binary outcomes based on the Class column from DataTable2 (“M”=1 and “F”=0)

# Extract PeakList
PeakList = PeakTable['Name']
# Select Subset of Data (Class 1 or 0 only)
DataTable2 = DataTable[(DataTable.Class == 1) | (DataTable.Class == 0)]
# Extract X Data
X = DataTable2[PeakList]
# Create a Binary Y Vector
Outcomes = DataTable2['Class']
Y = np.array(Outcomes)
# Optional: Save Class Labels (M/F) for Figure Legends
Class = DataTable2.Sex
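As an optional sanity check (not part of the original notebook), the dimensions of X and the class balance of Y can be confirmed before modelling:

# Optional check: matrix dimensions and class balance
print('X shape:', X.shape)
values, counts = np.unique(Y, return_counts=True)
print('Class counts:', dict(zip(values, counts)))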
The train_test_split method is used to split the X and Y data into training (2/3rd) and test (1/3rd) sets using stratified random selection. Additionally, the Class data is split for use in figure legends. The seed is selected in the optional section above. For further information on this method, refer to the scikit-learn documentation.
# Split Data into Train (2/3rd) and Test (1/3rd)
XTrain, XTest, YTrain, YTest, ClassTrain, ClassTest = train_test_split(X,
Y,
Class,
test_size=1/3,
stratify=Y,
random_state=seed_split)
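Because stratify=Y is used, the proportion of each class should be near identical in the training and test sets; an optional check (not in the original notebook):

# Optional check: stratification preserves class proportions
print('Proportion of Class 1 in train: %.3f' % YTrain.mean())
print('Proportion of Class 1 in test: %.3f' % YTest.mean())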
The X data (XTrain and XTest) is natural log transformed, mean centred, and scaled to unit variance (with missing values imputed using k-nearest neighbour) prior to modelling, following standard protocols for metabolomics (Broadhurst and Kell, 2006).
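Concretely, for each metabolite $j$, every training measurement $x_{ij}$ is transformed as

$$\tilde{x}_{ij} = \frac{\ln(x_{ij}) - \mu_j}{\sigma_j}$$

where $\mu_j$ and $\sigma_j$ are the mean and standard deviation of $\ln(x_{ij})$ over the training samples; the same $\mu_j$ and $\sigma_j$ are then reused for the test data.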
XTrain:
Using cb.utils.scale(), scale the natural log transformed data (XTrainLog) to unit variance (a.k.a. auto scaling), while also returning mu & sigma
Impute missing values using cb.utils.knnimpute() to give the final matrix, XTrainKnn
XTest:
Using cb.utils.scale(), scale the natural log transformed data (XTestLog) to unit variance (a.k.a. auto scaling) using the mu & sigma from above
Impute missing values using cb.utils.knnimpute() to give the final matrix, XTestKnn
# Transform X Train Data (log, scale, impute)
XTrainLog = np.log(XTrain)
XTrainScale, mu, sigma = cb.utils.scale(XTrainLog, method='auto', return_mu_sigma=True)
XTrainKnn = cb.utils.knnimpute(XTrainScale, k=3)
# Transform X Test Data (log, scale, impute)
XTestLog = np.log(XTest)
XTestScale = cb.utils.scale(XTestLog, method='auto', mu=mu, sigma=sigma)
XTestKnn = cb.utils.knnimpute(XTestScale, k=3)
The CIMCB helper function cb.cross_val.KFold() is used to carry out k-fold cross-validation (k=5) on a set of PLS-DA models with a varying number of latent variables (1 to 6) to determine the optimal number. In k-fold cross-validation, the original dataset is randomly split into k equally sized folds, and the model is trained for k iterations, each time training on k-1 folds and testing on the remaining fold (Kohavi 1995). This helper function requires values for the following parameters:
model: The class of model used by the function, cb.model.PLS_SIMPLS
X: The metabolite data matrix, XTrainKnn
Y: The binary outcome vector, YTrain
param_dict: A dictionary that describes all key:value pairs to search, with the key name corresponding to the hyperparameter in the model class and the value as the list of possible values
folds: The number of folds in the k-fold cross-validation
n_mc: The number of Monte Carlo repetitions of the k-fold CV

# Parameter Dictionary
param_dict = {'n_components': [1, 2, 3, 4, 5, 6]}
# Initialise
cv = cb.cross_val.KFold(model=cb.model.PLS_SIMPLS,
X=XTrainKnn,
Y=YTrain,
param_dict=param_dict,
folds=5,
n_mc=10)
# Run and Plot
cv.run()
When cv.plot(metric='r2q2', method='absolute') is run, 2 plots of $R^2$ and $Q^2$ statistics are displayed: (a) the absolute difference of ($R^2 - Q^2$) vs. $Q^2$, and (b) $R^2$ and $Q^2$ against the number of latent variables. Alternatively, if method='ratio', plot (a) shows the absolute difference of ($R^2 - Q^2$) / $R^2$ vs. $Q^2$. The optimal hyperparameter value (number of latent variables) is selected based on the point of inflection in figure b, or, if a clear inflection point is not present, where | ($R^2 - Q^2$) | = 0.2. Note, $R^2$ is the mean coefficient of determination for the full dataset, and $Q^2$ is the mean coefficient of determination for the cross-validated predictions over the 10 Monte Carlo repetitions. The following parameters of cv.plot() can be altered:
metric: The metric used for the plots (default = 'r2q2'). Alternative metrics include 'auc', 'acc', 'f1score', 'prec', 'sens', and 'spec'
method: The types of plots displayed (default = 'absolute'). Alternative value is 'ratio'
ci: The confidence interval in figure b (default = 95)
legend: Whether to show the legend (default = True). Alternative value is False
cv.plot(metric='r2q2',
method='absolute',
ci=95,
legend=True)
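For reference, $R^2$ and $Q^2$ follow the standard definitions used for PLS model assessment,

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}, \qquad Q^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i^{cv})^2}{\sum_i (y_i - \bar{y})^2}$$

where $\hat{y}_i$ are the in-sample predictions and $\hat{y}_i^{cv}$ the cross-validated predictions.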
When cv.plot_projections() is run, an n x n grid of plots is displayed, where n is the number of latent variables (LVs) to interrogate. These plots include score plots, distribution plots, and receiver operating characteristic (ROC) curves.
There are C(n,2) score plots (i.e. a score plot for every combination of 2 LVs, e.g. LV1 scores vs. LV2 scores). Each score plot includes the full scores (as circles) and CV scores (as crosses) coloured by group, as well as the 95% confidence interval ellipses for the full scores (as solid lines) and CV scores (as dashed lines). Additionally, the optimal line of separation (dashed grey line) and orthogonal line (solid grey line) are shown.
There are n distribution plots (a distribution plot for each LV's scores), showing the distribution of the full and CV scores for each corresponding group (i.e. 4 discrete distributions overlaid for 2 groups). Each distribution is calculated using kernel density estimation, a standard non-parametric method used for estimation of a probability density function based on a set of data points (Silverman 1986).
There are C(n,2) ROC curves (a ROC curve for every combination of 2 LVs, e.g. LV1 scores vs. LV2 scores). As the ROC curves are for every combination of 2 LVs, the discrimination is calculated based on the optimal separation (i.e. the grey line from the corresponding score plot). Each ROC curve plot contains a ROC curve for the full model (green) and a ROC curve for the CV model with 95% confidence intervals (yellow). Additionally, the equal distribution line (dashed black line) is shown.
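The distribution panels are kernel density estimates. As a minimal illustration of the technique itself (not the cimcb internals), a Gaussian KDE over a set of scores can be computed with scipy:

# Minimal KDE illustration (stand-in data, not the actual LV scores)
import numpy as np
from scipy.stats import gaussian_kde

scores = np.random.randn(100)                        # stand-in for one group's LV scores
kde = gaussian_kde(scores)                           # bandwidth set by Scott's rule by default
grid = np.linspace(scores.min(), scores.max(), 200)
density = kde(grid)                                  # estimated probability density on the grid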
components: LVs to plot (default = "all"; plot all components). Alternatively, list the components to plot e.g. [1,3,4]
plot: Data to show (default = 'ci'; plot only 95% confidence interval ellipses). Alternative values include 'meanci', 'full', 'cv', and 'all'
label: Add labels to groups (default = None; refers to groups as 0/1)
legend: Show legends for plots (default = 'all'). Alternative values are 'scatter', 'dist', 'roc', and 'none'

cv.plot_projections(components=[1,2,3,4],
plot='ci',
label=ClassTrain,
legend='all')
cv.plot_projections(components=[1,2,3,4],
plot='meanci',
label=ClassTrain,
legend='all')
A PLS-DA model using cb.model.PLS_SIMPLS is created and initialised using the optimal hyperparameter value determined in step 4 (i.e. the number of latent variables). The implementation of PLS in the cb.model.PLS_SIMPLS class uses the SIMPLS algorithm (De Jong, 1993).
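As a rough, optional cross-check (not part of the workflow): scikit-learn's PLSRegression can be fitted on the same matrices. Note that it uses the NIPALS algorithm rather than SIMPLS, so scores and weights may differ in sign and scaling even when predictions are very similar:

# Optional cross-check with scikit-learn (NIPALS, not SIMPLS)
from sklearn.cross_decomposition import PLSRegression

pls = PLSRegression(n_components=2)
pls.fit(XTrainKnn, YTrain)
YPredTrain_sk = pls.predict(XTrainKnn).ravel()       # in-sample predictions
YPredTest_sk = pls.predict(XTestKnn).ravel()         # out-of-sample predictions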
Following this initialisation, the PLS-DA model is trained using the .train(X, Y) method, where the X matrix is XTrainKnn and the Y vector is YTrain, returning the predicted values YPredTrain. The model is then tested using the .test(X) method, where the X matrix is XTestKnn, returning the predicted values YPredTest (later compared against YTest).
The .evaluate() method can be used to evaluate the predictability of the model using the train and test sets. Three plots are produced when this method is run: a violin plot, a probability density function, and a ROC curve. The violin plots show the predicted scores for the train and test sets (by group). The distribution plot shows the probability density function of the predicted scores for the train and test sets (by group). The ROC curve plot shows the ROC curve for the train (green) and test (yellow) sets. The following parameter values in .evaluate() can be altered:
testset: Plot test dataset (default = None). Alternatively, add YTrue and YPredicted as a list e.g. [YTrue, YPredicted]
label: Add labels to groups (default = None; refers to groups as 0/1)
legend: Show legends for plots (default = 'all'). Alternative values are 'roc', 'dist', 'violin', and 'none'

# Build Model
model = cb.model.PLS_SIMPLS(n_components=2)
YPredTrain = model.train(XTrainKnn, YTrain)
YPredTest = model.test(XTestKnn)
# Put YTrain and YPredTrain in a List
EvalTrain = [YTrain, YPredTrain]
# Put YTest and YPredTest in a List
EvalTest = [YTest, YPredTest]
# Save Weights & Feature Importance
model_weights = model.x_weights_ # [LV1, LV2]
model_fi = model.feature_importance_ # [VIP, Coefficient]
# Evaluate Model (include Test Dataset)
model.evaluate(testset=EvalTest,
label=ClassTrain,
legend='all')
After a model has been trained, the .permutation_test() method can be used to assess the reliability of the trained model (after selecting the number of latent variables). For the permutation test, the metabolite data matrix is randomised (permuted or 'shuffled') while the Y (i.e. outcome) vector is fixed, and the model is subsequently trained and tested on this randomised data (Szymańska et al. 2012). This process is repeated (in this case, n=100) to construct a null distribution against which to fairly assess the model. For a dataset with features that make no meaningful contribution, we would expect $R^2$ and $Q^2$ values similar to those of the randomised dataset, while for a dataset with features that make a meaningful contribution, we would expect $R^2$ and $Q^2$ values significantly higher than those of the randomised dataset. When .permutation_test() is run, 2 plots are displayed: (a) $R^2$ and $Q^2$ against the correlation of the permuted data with the original data, and (b) probability density functions for $R^2$ and $Q^2$, with the $R^2$ and $Q^2$ values of the model trained on the original data presented as ball-and-stick. The following parameter values of .permutation_test() can be altered:
metric: The metric used for the plots (default = 'r2q2'). Alternative metrics include 'auc', 'acc', 'f1score', 'prec', 'sens', and 'spec'. Multiple metrics can be plotted using a list e.g. ['r2q2', 'auc']
nperm: The number of permutations (default = 100)
legend: Whether to show the legend (default = True). Alternative value is False
model.permutation_test(metric='r2q2',
nperm=100,
legend=True)
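For intuition, the same idea can be sketched by hand with scikit-learn: shuffle the outcome vector (equivalent in effect to shuffling the rows of X), refit the model, and record the null $R^2$ values. This is an illustrative sketch, not the cimcb implementation:

# Hand-rolled permutation sketch (illustrative only)
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
r2_null = []
for _ in range(100):
    YPerm = rng.permutation(YTrain)                      # break the X-Y relationship
    pls = PLSRegression(n_components=2).fit(XTrainKnn, YPerm)
    yhat = pls.predict(XTrainKnn).ravel()
    ss_res = np.sum((YPerm - yhat) ** 2)
    ss_tot = np.sum((YPerm - YPerm.mean()) ** 2)
    r2_null.append(1 - ss_res / ss_tot)                  # null R^2
# A reliable model should score well above this null distribution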
Bootstrap resampling is a resampling method based on random sampling with replacement, commonly used to provide an estimate of the sampling distribution of a test statistic (Efron, 1982). In the context of this workflow, the PLS model from step 5 with its fixed hyperparameter values (i.e. number of LVs = 2) is retrained on the resampled-with-replacement data (in-bag) and evaluated on the unused data (out-of-bag) for 100 resamples. After the model is evaluated for each bootstrap, metrics including the predicted values (ypred), LV scores, LV loadings, and feature importance (VIP and coefficients) are stored and used to calculate 95% confidence intervals. To calculate the 95% confidence intervals, various methods can be used, including the basic percentile method, the corrected percentile method (a.k.a. bias-corrected method), and the commonly used bias-corrected and accelerated (BCA) method. In this example, the BCA method is used with the class cb.bootstrap.BCA. Alternatively, use cb.bootstrap.Per for the percentile method, or cb.bootstrap.CPer for the corrected percentile method. To create and run the bootmodel for any method, the following parameter values need to be set:
model: A model with fixed hyperparameter values for bootstrap resampling
bootnum: The number of bootstrap resamples (default = 100)

# Extract X Data and Train Model
XBoot = DataTable2[PeakList]
XBootLog = np.log(XBoot)
XBootScale = cb.utils.scale(XBootLog, method='auto')
XBootKnn = cb.utils.knnimpute(XBootScale, k=3)
YPredBoot = model.train(XBootKnn, Y)
# Build Bootstrap Models
bootmodel = cb.bootstrap.BCA(model, bootnum=100)
bootmodel.run()
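To make the resampling idea concrete, here is a minimal percentile-bootstrap sketch for a single statistic using numpy. The cimcb classes apply the same principle to ypred, scores, loadings, and feature importance; BCA additionally corrects for bias and skew:

# Minimal percentile bootstrap for one statistic (illustrative only)
import numpy as np

rng = np.random.default_rng(0)
n = len(Y)
boot_stat = []
for _ in range(100):
    idx = rng.integers(0, n, n)           # resample indices with replacement (in-bag)
    boot_stat.append(Y[idx].mean())       # statistic computed on the in-bag sample
# Basic percentile 95% confidence interval
ci_lower, ci_upper = np.percentile(boot_stat, [2.5, 97.5])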
After the bootmodel has been run, the .evaluate() method can be used to provide an estimate of the robustness and a measure of the generalised predictability of the model. Three plots are produced when this method is run: a violin plot, a probability density function, and a ROC curve. The violin plots show the distribution of the median predicted scores for the in-bag and out-of-bag samples (i.e. train and test) by group. The distribution plot shows the probability density function of the median predicted scores for the in-bag and out-of-bag samples (i.e. train and test) by group. The ROC curve plot shows the ROC curve with the median (green) and 95% CI for the in-bag samples (light green band), and the median (yellow) and 95% CI for the out-of-bag samples (light yellow band). The method used to calculate the 95% CI for the in-bag (green) is the class selected in the previous cell. In this example, the bias-corrected and accelerated method is used, as cb.bootstrap.BCA was used in the previous cell to create bootmodel.
label: Add labels to groups (default = None; refers to groups as 0/1)
legend: Show legends for plots (default = 'all'). Alternative values are 'roc', 'dist', 'violin', and 'none'
trainset: Plot train dataset instead of median in-bag (default = None). Alternatively, add YTrue and YPredicted as a list e.g. [YTrue, YPredicted]
testset: Plot test dataset instead of median out-of-bag (default = None). Alternatively, add YTrue and YPredicted as a list e.g. [YTrue, YPredicted]
bootmodel.evaluate(label=Class,
legend='all')
After the bootmodel has been run, the .plot_projections() method can be used to visualise the latent variable (LV) scores. When this method is run, an n x n grid of plots is displayed, where n is the number of LVs. These plots include score plots, distribution plots, and receiver operating characteristic (ROC) curves.
There are C(n,2) score plots (i.e. a score plot for every combination of 2 LVs, e.g. LV1 scores vs. LV2 scores). Each score plot includes the in-bag scores (as circles) and out-of-bag scores (as crosses) coloured by group, as well as the 95% confidence interval ellipses for the in-bag scores (as solid lines) and out-of-bag scores (as dashed lines). Additionally, the optimal line of separation (dashed grey line) and orthogonal line (solid grey line) are shown.
There are n distribution plots (a distribution plot for each LV's scores), showing the distribution of the in-bag and out-of-bag scores for each corresponding group (i.e. 4 discrete distributions overlaid for 2 groups). Each distribution is calculated using kernel density estimation, a standard non-parametric method used for estimation of a probability density function based on a set of data points (Silverman 1986).
There are C(n,2) ROC curves (a ROC curve for every combination of 2 LVs, e.g. LV1 scores vs. LV2 scores). As the ROC curves are for every combination of 2 LVs, the discrimination is calculated based on the optimal separation (i.e. the grey line from the corresponding score plot). Each ROC curve plot contains a ROC curve for the LV scores of the initial model with 95% confidence intervals calculated using the in-bag LV scores (green), and a ROC curve for the out-of-bag LV scores with 95% confidence intervals. The method used to calculate the 95% CI for the in-bag (green) is the class used to create the bootmodel; in this example, the bias-corrected and accelerated method (cb.bootstrap.BCA). Additionally, the equal distribution line (dashed black line) is shown.
plot: Data to show in plot (default = "ci"; plot only 95% confidence interval ellipses). Alternative values include 'meanci', 'ib', 'oob', and 'all'
label: Add labels to groups in scores plot (default = None; refers to groups as 0/1)
legend: Show legends for plots (default = 'all'). Alternative values are 'scatter', 'dist', 'roc', and 'none'

bootmodel.plot_projections(plot='ib',
label=Class,
legend='all')
bootmodel.plot_projections(plot='oob',
label=Class,
legend='all')
bootmodel.plot_projections(plot='ci',
label=Class,
legend='all')
After the bootmodel has been run, the .plot_weights() method can be used to visualise the latent variable (LV) weight vectors. When this method is run, n plots are displayed, where n is the number of LVs. The circles in each plot represent the LV weight vectors of the model. The 95% confidence intervals are calculated using the bootstrap method selected in step 6 (in this example, the bias-corrected and accelerated (BCA) method). Any metabolite weight with a confidence interval crossing the zero line is considered non-significant for that latent variable. This method requires values for the following parameters:
PeakTable: Cleaned PeakTable from step 3
PeakList: Peaks to include in plot (default = None; include all peaks)
plot: To plot the data or median as circles (default = 'data'). Alternative values include 'median', and a list structured as [LV1, LV2, etc.]
ylabel: Name of column in PeakTable to use as the ylabel (default = 'Label')
sort: Whether to sort plots in absolute descending order (default = True)

bootmodel.plot_weights(PeakTable,
PeakList,
plot='median',
ylabel='Label',
sort=False)
After the bootmodel has been run, the .plot_featureimportance() method can be used to visualise the feature importance metrics. When this method is run, 2 plots are displayed: the Coefficient plot and the Variable Importance in Projection (VIP) plot. The circles represent the values in the initial model. The 95% confidence intervals are calculated using the bootstrap method selected in step 6 (in this example, the bias-corrected and accelerated (BCA) method).
The coefficients (in the Coefficient plot) contain information about the overall contribution of each metabolite. The coefficient values can be either positive or negative, and therefore a metabolite can contribute positively or negatively to the model. Any metabolite coefficient with a confidence interval crossing the zero line is considered non-significant to the model.
The values in the VIP plot also contain information about the overall contribution of each metabolite. Unlike the coefficient values, the VIP is absolute, with higher values representing a higher significance to the model. Typically, metabolites with a VIP greater than 1 are considered "important" in the model.
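For reference, the VIP of metabolite $j$ follows the standard definition

$$\mathrm{VIP}_j = \sqrt{p \cdot \frac{\sum_{a=1}^{A} w_{ja}^2\,\mathrm{SS}_a}{\sum_{a=1}^{A} \mathrm{SS}_a}}$$

where $p$ is the number of metabolites, $A$ the number of latent variables, $w_{ja}$ the weight of metabolite $j$ on LV $a$, and $\mathrm{SS}_a$ the sum of squares of $Y$ explained by LV $a$. Because the mean of the squared VIP values equals 1, VIP > 1 is the conventional cut-off for importance.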
This method also exports the feature importance metrics as a pandas DataFrame (table). It requires values for the following parameters:
PeakTable: Cleaned PeakTable from step 3
PeakList: Peaks to include in plot (default = None; include all peaks)
plot: To plot the data or median as circles (default = 'data'). Alternative values include 'median', and a list structured as [Coef, VIP]
ylabel: Name of column in PeakTable to use as the ylabel (default = 'Label')
sort: Whether to sort plots in absolute descending order (default = True)

feature_importance = bootmodel.plot_featureimportance(PeakTable,
PeakList,
plot='median',
ylabel='Label',
sort=False)
The feature importance table created in step 8.3 can be exported using the inbuilt .to_excel() function of a pandas DataFrame. This function requires the name of the file to create, which can include directories by using the '/' symbol. In the cell below, the table feature_importance is exported as an Excel file called 'PLSDA_MTBLS90.xlsx' in the 'results' folder.
export_folder = 'results/'
export_file = 'PLSDA_MTBLS90.xlsx'
feature_importance.to_excel(export_folder + export_file)
print("Done!")