Description of functions to perform matrix operations, algebra and some basic statistical models using Matrix (CRAN) objects.
BigDataStatMeth 0.99.32
This package implements several matrix operations using Matrix and DelayedMatrix objects as well as HDF5 data files. Some basic algebra operations can also be computed; these are useful to implement statistical analyses using standard methodologies such as principal component analysis (PCA) or least squares estimation. The package also contains specific statistical methods mainly used in omic data analysis, such as lasso regression. All procedures related to HDF5 files can be found in the BigDataStatMeth_hdf5 vignette.
The package requires other packages from CRAN and Bioconductor to be installed:

- CRAN: Matrix, RcppEigen and RSpectra.
- Bioconductor: DelayedArray.

As the package can also deal with HDF5 files [see vignette BigDataStatMeth_hdf5], these other packages are required: HDF5Array, rhdf5. The user can execute this code to install the required packages:
# Install BiocManager (if not previously installed)
install.packages("BiocManager")
# Install required packages
BiocManager::install(c("Matrix", "RcppEigen", "RSpectra", "DelayedArray",
"HDF5Array", "rhdf5"))
Our package needs to be installed from source code. In that case, a collection of compilers and tools (C, C++, Fortran, etc.) is required, mainly for Windows users. These programs can be installed using Rtools.
Once the required packages and Rtools are installed, the BigDataStatMeth package can be installed from our GitHub repository as follows:
# Install devtools and load library (if not previously installed)
install.packages("devtools")
library(devtools)
# Install BigDataStatMeth
install_github("isglobal-brge/BigDataStatMeth")
First, let us start by loading the packages required to describe the main capabilities of the package:
library(Matrix)
library(DelayedArray)
library(BigDataStatMeth)
library(ggplot2)
This package is also required to reproduce this vignette:
library(microbenchmark)
In this section, different products of matrices and vectors are introduced. The methods implement different strategies including block multiplication algorithms and the use of parallel implementations.
A block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices. Intuitively, a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines that break it up, or partition it, into a collection of smaller matrices. The implementation is an adaptation of Fox's algorithm [1].
\[A*B=\begin{pmatrix} {A}_{11}&{A}_{12} \\ {A}_{21}&{A}_{22}\end{pmatrix}*\begin{pmatrix}{B}_{11}&B_{12}\\B_{21}&B_{22}\end{pmatrix}=\begin{pmatrix}{C}_{11}&C_{12}\\C_{21}&C_{22}\end{pmatrix}=C\]
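As a quick illustration of the block formula, the following base-R sketch (the names Ab and Bb are ours, used only for this check) verifies the top-left block of the product:

# Verify C11 = A11 %*% B11 + A12 %*% B21 on a small 4x4 example
set.seed(1)
Ab <- matrix(rnorm(16), 4, 4)
Bb <- matrix(rnorm(16), 4, 4)
i1 <- 1:2; i2 <- 3:4
C11 <- Ab[i1, i1] %*% Bb[i1, i1] + Ab[i1, i2] %*% Bb[i2, i1]
all.equal(C11, (Ab %*% Bb)[i1, i1])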
Let us simulate two sets of matrices to illustrate the use of the functions across the entire document. First, we simulate a simple case with two matrices A and B with dimensions 500x500 and 500x750, respectively. Second, another example with dimensions 1000x1000 and 1000x10000 is used to evaluate the performance on large matrices. Examples with big datasets will be illustrated using real data belonging to different omic settings. We can simulate two matrices with the desired dimensions by:
# Define small matrix A and B
set.seed(123456)
n <- 500
p <- 750
A <- matrix(rnorm(n*n), nrow=n, ncol=n)
B <- matrix(rnorm(n*p), nrow=n, ncol=p)
# Define big matrix Abig and Bbig
n <- 1000
p <- 10000
Abig <- matrix(rnorm(n*n), nrow=n, ncol=n)
Bbig <- matrix(rnorm(n*p), nrow=n, ncol=p)
Matrix multiplication using block matrices is implemented in the bdblockmult() function. It only requires setting the argument block_size (by default, block_size = 128). An optimal block size is important for good performance, but it is difficult to assess the optimum block size for each matrix.
# Use 10x10 blocks
AxB <- bdblockmult(A, B, block_size = 10)
As expected, the results obtained using this procedure are correct:
all.equal(AxB, A%*%B)
[1] TRUE
Note that when the argument block_size is larger than any of the dimensions of matrix A or B, block_size is set to min(cols(A), rows(A), cols(B), rows(B)).
The process can be sped up by making computations in parallel using paral = TRUE. An optional parameter, threads, can be used to indicate the number of threads to launch simultaneously; if threads = NULL, the function uses the available threads minus one, leaving one free for the user.
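For instance, the number of threads can be capped explicitly; a minimal sketch, assuming the threads parameter behaves as described above:

# Parallel multiplication limited to two threads
AxB2 <- bdblockmult(A, B, block_size = 10, paral = TRUE, threads = 2)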
AxB <- bdblockmult(A, B, block_size = 10, paral = TRUE)
all.equal(AxB,A%*%B)
[1] TRUE
To work with big matrices, bdblockmult() can save matrices in HDF5 file format in order to operate with them in blocks without overloading memory. By default, matrices are considered large if the number of rows or columns is greater than 5000; this threshold can be changed with the bigmatrix argument. We can also force the matrix multiplication to be executed with data in memory by setting the parameter onmemory = TRUE:
# We want to force it to run in memory
AxBNOBig <- bdblockmult(Abig, Bbig, onmemory = TRUE)
# Run matrix multiplication with data on memory using submatrices of 256x256
AxBBig3000 <- bdblockmult(Abig, Bbig, block_size = 256 , onmemory = TRUE)
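Alternatively, the threshold itself can be lowered so that these matrices are treated as big and operated on disk; a minimal sketch, assuming bigmatrix takes the dimension threshold described above:

# Treat matrices as big when rows or columns exceed 3000, operating via HDF5
AxBBigDisk <- bdblockmult(Abig, Bbig, bigmatrix = 3000)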
Here, we compare the performance of the block method with the different options.
bench1 <- microbenchmark(
# Parallel block size = 128
Paral128Mem = bdblockmult(Abig, Bbig, paral = TRUE),
# On disk parallel block size = 256
Paral256Disk = bdblockmult(Abig, Bbig, block_size=256, paral=TRUE),
Paral256Mem = bdblockmult(Abig, Bbig, block_size=256,
paral=TRUE, onmemory=TRUE),
Paral1024Mem = bdblockmult(Abig, Bbig, block_size=1024,
paral=TRUE, onmemory=TRUE), times = 3 )
bench1
Unit: seconds
expr min lq mean median uq max neval
Paral128Mem 88.082198 90.457034 92.10706 92.83187 94.119485 95.407099 3
Paral256Disk 87.727611 89.731516 90.76830 91.73542 92.288641 92.841860 3
Paral256Mem 13.026405 14.206137 14.85610 15.38587 15.770943 16.156017 3
Paral1024Mem 1.108945 1.183373 1.21622 1.25780 1.269858 1.281916 3
We can observe that the shortest execution time is achieved with block_size = 1024.
The same information is depicted in the next figure, which illustrates the comparison between the different assessed methods:
ggplot2::autoplot(bench1)
A sparse matrix or sparse array is a matrix in which most of the elements are zero. BigDataStatMeth allows performing matrix multiplication with sparse matrices using the function bdblockmult_sparse. It is necessary that at least one of the two matrices is defined as sparse in R using the dgCMatrix class. An example of a sparse matrix could be:
\[\begin{equation} \begin{bmatrix} a_{11}&a_{12} & 0 & \cdots & \cdots & \cdots & \cdots & 0 \\ a_{21} & a_{22} & a_{23} & \ddots & && & \vdots \\ 0 & a_{32} & a_{33} & a_{34} & \ddots & & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & & \vdots \\ \vdots & & \ddots & \ddots & \ddots & \ddots & \ddots& \vdots\\ \vdots & & & & \ddots & a_{n-1,n-2} & a_{n-1,n-1} & a_{n-1,n}\\ 0 & \cdots & \cdots & \cdots & \cdots & 0 & a_{n,n-1} & a_{n,n} \\ \end{bmatrix} \end{equation}\]
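A dense numeric matrix can be converted to this sparse class with Matrix(x, sparse = TRUE); a minimal sketch with an illustrative matrix of our own:

# Coerce a dense matrix to the compressed sparse column representation
m_dense <- matrix(c(0, 2, 3, 4, 0, 0, 0, 0, 1), nrow = 3)
m_sparse <- Matrix(m_dense, sparse = TRUE)
class(m_sparse)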
k <- 1e3
# Generate 2 sparse matrix x_sparse and y_sparse
set.seed(1)
x_sparse <- sparseMatrix(
i = sample(x = k, size = k),
j = sample(x = k, size = k),
x = rnorm(n = k)
)
set.seed(2)
y_sparse <- sparseMatrix(
i = sample(x = k, size = k),
j = sample(x = k, size = k),
x = rnorm(n = k)
)
d <- bdblockmult_sparse(x_sparse, y_sparse)
f <- x_sparse%*%y_sparse
all.equal(d,f)
[1] TRUE
Here, we compare the performance of sparse matrix multiplication on sparse matrices with the block multiplication of the same matrices not declared as sparse.
res <- microbenchmark(
sparse_mult = bdblockmult_sparse(x_sparse, y_sparse),
matrix_mult = bdblockmult(as.matrix(x_sparse), as.matrix(y_sparse)),
RSparse_mult = x_sparse%*% y_sparse,
times = 3 )
res
Unit: microseconds
         expr       min         lq         mean    median         uq       max neval
  sparse_mult      57.6      63.25     109.2667      68.9     135.10     201.3     3
  matrix_mult 8552554.2 8577788.55 8611596.9667 8603022.9 8641118.35 8679213.8     3
 RSparse_mult     191.5     248.25     277.3333     305.0     320.25     335.5     3
The same information is depicted in the next figure, which illustrates the comparison between the different assessed methods:
ggplot2::autoplot(res)
We can observe the huge difference in execution time.
To perform a cross-product or a transposed cross-product with BigDataStatMeth we use the functions bdCrossprod() and bdtCrossprod(). Like other functions implemented in BigDataStatMeth, we can work with R objects.

- bdCrossprod() performs the cross-product \[C = A^{t} A\]
- bdtCrossprod() performs the transposed cross-product \[C = A A^{t}\]
Here we show some examples using the bdCrossprod() function:
n <- 500
m <- 250
A <- matrix(rnorm(n*m), nrow=n, ncol=m)
# Cross Product of a standard R matrix
cpA <- bdCrossprod(A)
We obtain the expected values computed with the crossprod R function:
all.equal(cpA, crossprod(A))
[1] TRUE
The transposed cross-product is obtained with bdtCrossprod():
# Transposed Cross Product R matrices
tcpA <- bdtCrossprod(A)
We obtain the expected values computed with the tcrossprod function:
all.equal(tcpA, tcrossprod(A))
[1] TRUE
We can show that the implemented version clearly improves on the computational speed of the R implementation:
res <- microbenchmark(
bdcrossp_tr = bdtCrossprod(A),
rcrossp_tr = tcrossprod(A),
bdcrossp = bdCrossprod(A),
rcrossp = crossprod(A),
times = 3)
res
Unit: milliseconds
expr min lq mean median uq max neval
bdcrossp_tr 5.8707 5.98805 7.088333 6.1054 7.69715 9.2889 3
rcrossp_tr 14.1996 14.40590 14.604300 14.6122 14.80665 15.0011 3
bdcrossp 2.7913 3.26035 3.556900 3.7294 3.93970 4.1500 3
rcrossp 13.7870 14.80630 15.158600 15.8256 15.84440 15.8632 3
ggplot2::autoplot(res)
You can perform weighted products with bdwproduct(), given a matrix X and a vector or matrix of weights w. The method "xwxt" computes the weighted product \(C = X w X^{t}\), while "xtwx" computes the weighted cross-product \(C = X^{t} w X\).
n <- 250
X <- matrix(rnorm(n*n), nrow=n, ncol=n)
u <- runif(n)
w <- u * (1 - u)
wcpX <- bdwproduct(X, w,"xwxt")
wcpX[1:5,1:5]
[,1] [,2] [,3] [,4] [,5]
[1,] 41.8137240 4.0688017 -3.5551601 -0.9590417 -3.649829
[2,] 4.0688017 35.8957169 0.4522855 -3.6490462 3.866538
[3,] -3.5551601 0.4522855 44.5225035 2.3956115 5.998364
[4,] -0.9590417 -3.6490462 2.3956115 36.3507446 -2.287474
[5,] -3.6498289 3.8665378 5.9983636 -2.2874736 42.947037
Those are the expected values, as indicated by executing:
all.equal( wcpX, X%*%diag(w)%*%t(X) )
[1] TRUE
With the method "xtwx", we can perform the weighted cross-product \(C = X^{t} w X\):
wtcpX <- bdwproduct(X, w,"xtwx")
wtcpX[1:5,1:5]
[,1] [,2] [,3] [,4] [,5]
[1,] 44.319669 4.02108694 -3.006229 -1.85345133 -3.786994
[2,] 4.021087 38.82933816 -3.638959 -0.03011039 -3.451078
[3,] -3.006229 -3.63895884 47.303628 -3.16577769 2.629315
[4,] -1.853451 -0.03011039 -3.165778 40.87522997 2.700950
[5,] -3.786994 -3.45107802 2.629315 2.70094954 41.088084
Again, those are the expected values, as indicated by executing:
all.equal(wtcpX, t(X)%*%diag(w)%*%X)
[1] TRUE
The Cholesky factorization is widely used for solving a system of linear equations whose coefficient matrix is symmetric and positive definite.
\[A = LL^t = U^tU\]
where \(L\) is a lower triangular matrix and \(U\) is an upper triangular matrix. To obtain the inverse of a matrix via its Cholesky factorization we can use the function bdInvCholesky(). Let us start by illustrating how to do these computations on simulated data:
# Generate a symmetric positive definite matrix with eigenvalues ev
Posdef <- function (n, ev = runif(n, 0, 10))
{
  # Draw a random matrix and extract a random orthogonal matrix from its QR
  Z <- matrix(ncol=n, rnorm(n^2))
  decomp <- qr(Z)
  Q <- qr.Q(decomp)
  R <- qr.R(decomp)
  # Fix the signs of Q's columns using the diagonal of R
  d <- diag(R)
  ph <- d / abs(d)
  O <- Q %*% diag(ph)
  # Conjugate diag(ev) by the orthogonal matrix: the result has eigenvalues ev
  Z <- t(O) %*% diag(ev) %*% O
  return(Z)
}
A <- Posdef(n = 500, ev = 1:500)
invchol <- bdInvCholesky(A)
round(invchol[1:5,1:5],8)
[,1] [,2] [,3] [,4] [,5]
[1,] 0.01068322 -0.00063371 -0.00104697 0.00230612 -0.00173745
[2,] -0.00063371 0.00845418 0.00029448 -0.00104800 0.00020868
[3,] -0.00104697 0.00029448 0.00878553 -0.00074755 0.00006907
[4,] 0.00230612 -0.00104800 -0.00074755 0.02041952 0.00020405
[5,] -0.00173745 0.00020868 0.00006907 0.00020405 0.01137916
We can check whether this function returns the expected values obtained with the standard R function solve
:
all.equal(invchol, solve(A))
[1] TRUE
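For reference, base R can also obtain the same inverse directly from the Cholesky factor:

# Inverse from the upper Cholesky factor: chol2inv(U) returns (U^t U)^{-1} = A^{-1}
invcholR <- chol2inv(chol(A))
all.equal(invcholR, solve(A))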
Again, we can show that the implemented version clearly improves on the computational speed of the R implementation:
res <- microbenchmark(invchol = bdInvCholesky(A),
invcholR = solve(A),
times = 3)
res
Unit: milliseconds
expr min lq mean median uq max neval
invchol 38.2152 39.5352 42.12907 40.8552 44.08600 47.3168 3
invcholR 69.9515 71.3931 72.53587 72.8347 73.82805 74.8214 3
ggplot2::autoplot(res)
The Moore-Penrose pseudoinverse is a direct application of the SVD. The inverse of a matrix A can be used to solve the equation \({Ax}={b}\), but when the system of equations has zero or infinitely many solutions the inverse cannot be found and the equation cannot be solved that way. The following formula is used to find the pseudoinverse:
\[{A}^+= {VD}^+{U}^T\]
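A minimal base-R sketch of this formula (the function name pinv_svd and the tolerance are ours, for illustration only); up to numerical error its result should coincide with bdpseudoinv(A):

# Pseudoinverse via SVD: A^+ = V D^+ U^t, where D^+ inverts the nonzero singular values
pinv_svd <- function(A, tol = 1e-10) {
  s <- svd(A)
  dplus <- ifelse(s$d > tol * max(s$d), 1 / s$d, 0)
  s$v %*% (dplus * t(s$u))   # rows of t(U) scaled by D^+, then multiplied by V
}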
In BigDataStatMeth the Moore-Penrose pseudoinverse is implemented in the function bdpseudoinv. We now obtain the pseudoinverse of simulated data:
m <- 1300
n <- 1200
A <- matrix(rnorm(n*m), nrow=n, ncol=m)
pseudoinv <- bdpseudoinv(A)
zapsmall(pseudoinv)[1:5,1:5]
[,1] [,2] [,3] [,4] [,5]
[1,] 0.002929598 0.001557442 0.001582314 -0.000722315 0.005212698
[2,] -0.001988456 -0.002467812 -0.002516531 -0.002622150 0.000858338
[3,] 0.006810140 0.000440079 -0.008938825 -0.000024428 0.001183548
[4,] 0.004924611 -0.003390831 -0.002072917 0.001011046 -0.001927337
[5,] -0.001204390 0.007590944 0.000216551 -0.001650204 -0.001847567
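The result can be checked against the defining Moore-Penrose property \(A A^{+} A = A\):

# Defining property of the pseudoinverse (up to numerical tolerance)
all.equal(A %*% pseudoinv %*% A, A)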
QR decomposition, also known as QR factorization or QU factorization, is a decomposition of a matrix A into a product \[A = QR\] of an orthogonal matrix Q and an upper triangular matrix R. The QR decomposition is often used to solve linear least squares problems.
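As a reminder of how QR solves least squares, here is a minimal base-R sketch (the names Xls, yls and beta_qr are ours, for illustration):

# Solve min || Xls %*% b - yls || via QR: R b = Q^t y
set.seed(99)
Xls <- matrix(rnorm(200 * 5), nrow = 200)   # tall design matrix
yls <- rnorm(200)
qrX <- qr(Xls)
beta_qr <- backsolve(qr.R(qrX), crossprod(qr.Q(qrX), yls))
# Same solution as the normal equations
all.equal(drop(beta_qr), drop(solve(crossprod(Xls), crossprod(Xls, yls))))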
In BigDataStatMeth the QR decomposition is implemented in the function bdQR, which can be applied to R objects. To show how to use this function, we perform a QR decomposition of the previously simulated matrix A:
QR_A <- bdQR(A)
QR_R <- qr(A)
# Show results for Q
zapsmall(QR_A$Q[1:5,1:5])
[,1] [,2] [,3] [,4] [,5]
[1,] -0.07190909 0.01223489 0.01534280 -0.00973952 -0.02249626
[2,] 0.01038491 -0.00733741 -0.00315748 -0.00339017 -0.00921850
[3,] -0.03918318 -0.07978351 -0.01997936 0.02565472 0.05086436
[4,] -0.03374909 -0.03364251 0.01453610 -0.04501818 -0.03125047
[5,] -0.00796412 0.01887125 0.05482384 0.03332886 -0.05940098
# Show results for R
zapsmall(QR_A$R[1:5,1:5])
[,1] [,2] [,3] [,4] [,5]
[1,] 35.06348 -1.24935 -1.14042 -2.53337 -0.09714
[2,] 0.00000 35.50423 0.23703 0.45466 -1.91730
[3,] 0.00000 0.00000 -33.89895 0.25021 -0.04300
[4,] 0.00000 0.00000 0.00000 -34.15847 -1.37625
[5,] 0.00000 0.00000 0.00000 0.00000 34.16191
# Test Q
all.equal(QR_A$Q, qr.Q(QR_R), check.attributes=FALSE)
[1] TRUE
In BigDataStatMeth we implemented the function bdSolve, which computes the solution to a real system of linear equations
\[A X = B\]
where A is an N-by-N matrix and X and B are N-by-K matrices.
Here we solve a matrix equation with a square matrix A (1000 by 1000) and a matrix B (1000 by 2):
# Simulate data
m <- 1000
n <- 1000
A <- matrix(runif(n*m), nrow = n, ncol = m)
B <- matrix(runif(n*2), nrow = n)
# Solve matrix equation
X <- bdSolve(A, B)
# Show results
X[1:5,]
[,1] [,2]
[1,] 1.467716 4.229327
[2,] -2.564491 -5.306566
[3,] 3.353716 7.961567
[4,] -3.178748 -5.118864
[5,] -2.440065 -1.931006
Now we check the results by multiplying matrix A by the solution X; if all is correct, \(A X = B\):
testB <- bdblockmult(A,X)
B[1:5,]
[,1] [,2]
[1,] 0.7463583 0.7197653
[2,] 0.2701235 0.3043555
[3,] 0.2770583 0.3936576
[4,] 0.3156043 0.1706373
[5,] 0.5408386 0.1553867
testB[1:5,]
[,1] [,2]
[1,] 0.7463583 0.7197653
[2,] 0.2701235 0.3043555
[3,] 0.2770583 0.3936576
[4,] 0.3156043 0.1706373
[5,] 0.5408386 0.1553867
all.equal(B, testB)
[1] TRUE
The SVD of an \(m \times n\) real or complex matrix \(A\) is a factorization of the form:
\[A = U \Sigma V^{T}\]
where:

- \(U\) is an \(m \times m\) real or complex unitary matrix
- \(\Sigma\) is an \(m \times n\) rectangular diagonal matrix with non-negative real numbers on the diagonal
- \(V\) is an \(n \times n\) real or complex unitary matrix

Notice that \(A A^{T} = U \Sigma^{2} U^{T}\), so the singular values of \(A\) are the square roots of the eigenvalues of \(A A^{T}\); this is used below to check the results.

We have implemented the SVD for R matrices in the function bdSVD(). The method, so far, only allows the SVD of real matrices \(A\). This code illustrates how to perform such computations:
# Matrix simulation
set.seed(413)
n <- 500
A <- matrix(rnorm(n*n), nrow=n, ncol=n)
# SVD
bsvd <- bdSVD(A)
# Singular values, and right and left singular vectors
bsvd$d[1:5]
[1] 44.27037 43.95609 43.31037 43.01980 42.81728
bsvd$u[1:5,1:5]
[,1] [,2] [,3] [,4] [,5]
[1,] 0.028712445 0.10169472 0.019783036 0.032804622 -0.048915416
[2,] 0.001193891 0.02296487 0.009024529 0.059683600 0.027888834
[3,] -0.008878497 -0.02414059 -0.006035909 -0.004867886 0.024570252
[4,] 0.037323916 -0.04675198 0.035928628 -0.021097593 -0.073485962
[5,] -0.054073083 0.04382626 -0.009027050 -0.002269443 -0.003071978
bsvd$v[1:5,1:5]
[,1] [,2] [,3] [,4] [,5]
[1,] -0.07105335 0.04554495 -0.03850442 0.07270812 0.078124410
[2,] 0.05989691 -0.02957792 -0.07508850 -0.01171385 0.001581823
[3,] -0.00200688 -0.04179449 0.02798939 -0.05686766 -0.073573833
[4,] -0.01284403 -0.05024617 -0.09038993 -0.05725360 0.023632656
[5,] -0.01751359 0.07320492 -0.02120536 -0.02397367 -0.067643671
We get the expected results obtained with standard R functions:
all.equal( sqrt( svd( tcrossprod( scale(A) ) )$d[1:10] ), bsvd$d[1:10] )
[1] TRUE
You get the \(\sigma_i\), \(U\) and \(V\) of the normalized matrix \(A\). If you want to perform the SVD of the non-normalized matrix \(A\), you have to set the parameters bcenter = FALSE and bscale = FALSE.
bsvd <- bdSVD(A, bcenter = FALSE, bscale = FALSE)
bsvd$d[1:5]
[1] 44.44007 43.89640 43.38384 43.23563 42.82658
bsvd$u[1:5,1:5]
[,1] [,2] [,3] [,4] [,5]
[1,] 3.075740e-02 0.09861737 -0.026705796 0.001569226 -0.04384078
[2,] 6.480471e-05 0.04151230 -0.049215808 0.054763599 -0.01436232
[3,] -1.857597e-02 -0.02463540 -0.001246071 0.015838141 0.05725309
[4,] 3.555600e-02 -0.05715730 -0.009596880 -0.040247700 -0.03888504
[5,] -5.290501e-02 0.04784627 -0.006624289 0.010168748 0.01934154
bsvd$v[1:5,1:5]
[,1] [,2] [,3] [,4] [,5]
[1,] -0.0677291931 0.03646765 -0.072013679 -0.06460296 0.03834539
[2,] 0.0558086446 -0.02816709 -0.067619533 0.02478961 0.01316491
[3,] 0.0024305441 -0.04679000 0.064115513 0.04134591 -0.03519768
[4,] -0.0007861777 -0.06602592 -0.062833191 0.06307396 0.03651487
[5,] -0.0165396063 0.07316212 0.002600021 0.07030477 -0.07916813
all.equal( sqrt(svd(tcrossprod(A))$d[1:10]), bsvd$d[1:10] )
[1] TRUE
A method developed by M. A. Iwen and B. W. Ong uses a distributed and incremental SVD algorithm that is useful for agglomerative data analysis on large networks. The algorithm calculates the singular values and left singular vectors of a matrix A by first partitioning it by columns. This creates a set of submatrices of A with the same number of rows but only some of its columns. After that, the SVD of each submatrix is computed. The final step consists of merging the partial results and computing the SVD of the resulting matrix.
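To make the merge step concrete, here is a conceptual base-R sketch of the idea (illustration only, with our own names; it is not the package's implementation). Because each block satisfies \(A_i = U_i D_i V_i^{t}\), concatenating the factors \(U_i D_i\) preserves the singular values and left singular vectors of \(A\):

# Conceptual sketch: split by columns, take the SVD of each block,
# then merge the U_i D_i factors and take the SVD of the result
incremental_svd <- function(A, nblocks = 4) {
  cols <- split(seq_len(ncol(A)), cut(seq_len(ncol(A)), nblocks, labels = FALSE))
  partial <- lapply(cols, function(j) {
    s <- svd(A[, j, drop = FALSE])
    s$u %*% diag(s$d, nrow = length(s$d))   # keep U_i D_i; V_i is not needed
  })
  # Merged SVD: same singular values as svd(A); left vectors match up to sign
  svd(do.call(cbind, partial))
}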
This approach is implemented only for HDF5 files, in the function bdSVD_hdf5. This function works directly on the HDF5 data format, loading into memory only the data needed for the calculations and saving the results back to the HDF5 file for later use. The user is referred to section 7.3.1 of this vignette.
Let us illustrate how to perform a PCA using miRNA data obtained from TCGA corresponding to 3 different tumors: melanoma (ME), leukemia (LE) and central nervous system (CNS). The data are available in BigDataStatMeth and can be loaded by simply:
data(miRNA)
data(cancer)
dim(miRNA)
[1] 21 537
We observe that there are a total of 21 individuals and 537 miRNAs. The vector cancer
contains the type of tumor of each individual. For each type we have:
table(cancer)
cancer
CNS LE ME
6 6 9
Now, the typical principal component analysis on the samples can be run on the miRNA matrix, since it has miRNAs in columns and individuals in rows:
pc <- prcomp(miRNA)
We can plot the two first components with:
plot(pc$x[, 1], pc$x[, 2],
main = "miRNA data on tumor samples",
xlab = "PC1", ylab = "PC2", type="n")
abline(h=0, v=0, lty=2)
points(pc$x[, 1], pc$x[, 2], col = cancer,
pch=16, cex=1.2)
legend("topright", levels(cancer), pch=16, col=1:3)
The PCA is equivalent to performing the SVD on the centered data, where the centering occurs on the columns. Here we center the data manually and call bdSVD with the arguments bcenter and bscale set to FALSE (alternatively, bdSVD can perform the centering and scaling itself, since both arguments default to TRUE):
miRNA.c <- sweep(miRNA, 2, colMeans(miRNA), "-")
svd.da <- bdSVD(miRNA.c, bcenter = FALSE, bscale = FALSE)
The PCA plot for the first two principal components can then be obtained by:
plot(svd.da$u[, 1], svd.da$u[, 2],
main = "miRNA data on tumor samples",
xlab = "PC1", ylab = "PC2", type="n")
abline(h=0, v=0, lty=2)
points(svd.da$u[, 1], svd.da$u[, 2], col = cancer,
pch=16, cex=1.2)
legend("topright", levels(cancer), pch=16, col=1:3)
We can observe that both figures are equal, up to a sign change of the second component (which can happen in SVD).
sessionInfo()
R version 4.1.2 (2021-11-01)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19043)
Matrix products: default
locale:
[1] LC_COLLATE=C LC_CTYPE=Spanish_Spain.1252
[3] LC_MONETARY=Spanish_Spain.1252 LC_NUMERIC=C
[5] LC_TIME=Spanish_Spain.1252
attached base packages:
[1] stats4 stats graphics grDevices utils datasets methods
[8] base
other attached packages:
[1] microbenchmark_1.4.9 ggplot2_3.3.5 DelayedArray_0.20.0
[4] IRanges_2.28.0 S4Vectors_0.32.3 MatrixGenerics_1.6.0
[7] matrixStats_0.61.0 BiocGenerics_0.40.0 Matrix_1.4-0
[10] rmarkdown_2.13 BigDataStatMeth_0.99.32 rhdf5_2.38.1
[13] knitr_1.37 BiocStyle_2.22.0
loaded via a namespace (and not attached):
[1] tidyselect_1.1.2 xfun_0.29 bslib_0.3.1
[4] purrr_0.3.4 lattice_0.20-45 generics_0.1.2
[7] vctrs_0.3.8 colorspace_2.0-2 htmltools_0.5.2
[10] yaml_2.3.5 utf8_1.2.2 rlang_1.0.1
[13] jquerylib_0.1.4 pillar_1.7.0 withr_2.5.0
[16] DBI_1.1.2 glue_1.6.2 lifecycle_1.0.1
[19] stringr_1.4.0 munsell_0.5.0 gtable_0.3.0
[22] evaluate_0.15 fastmap_1.1.0 fansi_1.0.2
[25] highr_0.9 Rcpp_1.0.8 scales_1.1.1
[28] BiocManager_1.30.16 RcppParallel_5.1.5 jsonlite_1.8.0
[31] farver_2.1.0 digest_0.6.29 stringi_1.7.6
[34] dplyr_1.0.8 bookdown_0.25 grid_4.1.2
[37] cli_3.2.0 tools_4.1.2 bitops_1.0-7
[40] rhdf5filters_1.6.0 magrittr_2.0.2 sass_0.4.0
[43] RCurl_1.98-1.6 tibble_3.1.6 pkgconfig_2.0.3
[46] crayon_1.5.0 ellipsis_0.3.2 data.table_1.14.2
[49] assertthat_0.2.1 rstudioapi_0.13 Rhdf5lib_1.16.0
[52] R6_2.5.1 compiler_4.1.2