
CCA for Finding Latent Relationships and Dimensionality Reduction

Canonical Correlation Analysis (CCA) is a powerful statistical technique. In machine learning and multimedia information retrieval, CCA plays a vital role in uncovering intricate relationships between different sets of variables. In this blog post, we will look into this technique and show how it can be used for finding hidden correlations as well as for dimensionality reduction.

To understand CCA's capabilities, let's take a look at two sets of observations, X and Y, shown below. These observations are made on the same set of objects, and each column represents a different variable.

Two Sets of Observations


When computing the pairwise correlation between the column vectors of X and Y, we obtain the following set of values, where the entry at (i,j) represents the correlation between the i-th column of X and the j-th column of Y.

The resulting correlation values give us some insight into the relationship between the two sets of measurements. They show moderate to almost no correlation between the columns of the two datasets, except for a relatively higher correlation between the second column of X and the third column of Y.
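
Such a pairwise correlation matrix can be computed with a few lines of NumPy. The arrays below are hypothetical stand-ins, since the actual values of X and Y appear only in the figure above:

import numpy as np

# Hypothetical stand-ins for the two observation sets (same rows = same objects)
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))   # 20 objects, 3 variables
Y = rng.normal(size=(20, 3))   # 20 objects, 3 variables

# Pairwise correlations: entry (i, j) is the correlation of X[:, i] with Y[:, j]
p, q = X.shape[1], Y.shape[1]
corr_xy = np.corrcoef(X, Y, rowvar=False)[:p, p:]
print(np.round(corr_xy, 3))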

Hidden Relationship

It looks like there is not much of a relationship between X and Y. Is that so? Let's wait before drawing that conclusion.

Let's transform X and Y into one-dimensional arrays, a and b, using the vectors [-0.427 -0.576 0.696] and [0 0 -1]:

$\bf{a} = \bf{X}\,[-0.427 \;\; -0.576 \;\; 0.696]^T$

$\bf{b} = \bf{Y}\,[0 \;\; 0 \;\; -1]^T$

Now, let's calculate the correlation between a and b. We get a correlation value of 0.999, meaning that the two projections of X and Y are very strongly correlated. In other words, there is a very strong hidden relationship present in our two sets of observations. So how did we end up getting a and b? The answer is canonical correlation analysis.

What is Canonical Correlation Analysis?

Canonical correlation analysis is a technique that looks for pairs of basis vectors for two sets of variables X and Y such that the correlation between the projections of X and Y onto these basis vectors is mutually maximized. In other words, the transformed arrays show a much higher correlation, bringing out any hidden relationship. The number of such pairs of basis vectors is limited to the smaller dimensionality of X and Y. For example, if X is an array of size n×p and Y of size n×q, then the number of basis vectors cannot exceed min{p,q}.

Let wx and wy be a pair of basis vectors projecting X and Y into a and b, given by a = Xwx and b = Ywy. The projections a and b are called the scores or the canonical variates. After some algebraic manipulation, the correlation between the projections can be expressed as:

$\Large \rho = \frac{\bf{w}_{x}^T \bf{C}_{xy}\bf{w}_{y}}{\sqrt{\bf{w}_{x}^T \bf{C}_{xx}\bf{w}_{x}\bf{w}_{y}^T \bf{C}_{yy}\bf{w}_{y}}}$,

where $\bf{C}_{xx}$, $\bf{C}_{xy}$, and $\bf{C}_{yy}$ are the three covariance matrices. The canonical correlations between X and Y are found by solving the eigenvalue equations

$ \bf{C}_{xx}^{-1}\bf{C}_{xy}\bf{C}_{yy}^{-1}\bf{C}_{yx}\bf{w}_x = \rho^2 \bf{w}_x$

$ \bf{C}_{yy}^{-1}\bf{C}_{yx}\bf{C}_{xx}^{-1}\bf{C}_{xy}\bf{w}_y = \rho^2 \bf{w}_y$

The eigenvalues in the above solution correspond to the squared canonical correlations, and the corresponding eigenvectors yield the needed basis vectors. The number of non-zero solutions to these equations is limited to the smaller of the dimensionalities of X and Y.
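
As a minimal sketch of how these eigenvalue equations can be solved directly (on hypothetical data; in practice the sklearn implementation used in the next section is the convenient route):

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))                               # hypothetical observations
Y = X @ rng.normal(size=(3, 4)) + 0.2 * rng.normal(size=(100, 4))

# Center the data and form the covariance blocks
Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
n = len(Xc)
Cxx, Cyy = Xc.T @ Xc / (n - 1), Yc.T @ Yc / (n - 1)
Cxy = Xc.T @ Yc / (n - 1)

# Eigenvalues of M are the squared canonical correlations rho^2;
# the eigenvectors are the basis vectors w_x
M = np.linalg.inv(Cxx) @ Cxy @ np.linalg.inv(Cyy) @ Cxy.T
eigvals, W_x = np.linalg.eig(M)
rho = np.sqrt(np.sort(eigvals.real)[::-1].clip(0, 1))
print(np.round(rho, 3))   # at most min(p, q) = 3 non-zero canonical correlations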

CCA Example

Let’s take a look at an example using the wine dataset from the sklearn library. We will divide the 13 features of the dataset into two sets of observations, X and Y. The class labels in our example will act as a hidden or latent feature. First, we will load the data, split it into X and Y, and perform feature normalization.

from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
import numpy as np

wine = load_wine()
X = wine.data[:, :6]   # Form X using the first six features
Y = wine.data[:, 6:]   # Form Y using the remaining seven features

# Perform feature normalization
scaler = StandardScaler()
X = scaler.fit_transform(X)
Y = scaler.fit_transform(Y)

Next, we import the CCA object and fit the data. After that, we obtain the canonical variates. In the code below, we compute 3 pairs of projections, X_c and Y_c, of X and Y.

from sklearn.cross_decomposition import CCA

cca = CCA(n_components=3)
cca.fit(X, Y)
X_c, Y_c = cca.transform(X, Y)

We can now calculate the canonical correlation coefficients to see what correlation values are obtained.

cca_corr = np.corrcoef(X_c.T, Y_c.T).diagonal(offset=3)
print(cca_corr)

[0.90293514 0.73015495 0.51667522]

The highest canonical correlation value is 0.9029, indicating a strong hidden relationship between the two sets of observations. Let us now visualize whether these correlations have captured any hidden relationship. In the present example, the underlying latent information not available to CCA is the class membership of the different measurements in X and Y. To check this, I have plotted scatter plots of the three pairs of canonical variates, with each point colored using the class label that was not accessible to CCA. These plots are shown below. The canonical variates associated with the highest correlation coefficient clearly show the existence of three groups in the scatter plot. This means that CCA is able to discern the presence of a hidden variable that reflects the class membership of the different observations.
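
The scatter plots just described can be generated with matplotlib along the following lines (a sketch that reuses X_c, Y_c, cca_corr, and wine from the code above; the class labels wine.target are used only for coloring and are never seen by CCA):

import matplotlib.pyplot as plt

# Scatter plot of each pair of canonical variates, colored by the (hidden) class label
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for i, ax in enumerate(axes):
    ax.scatter(X_c[:, i], Y_c[:, i], c=wine.target, cmap='viridis', s=20)
    ax.set_xlabel(f'X canonical variate {i + 1}')
    ax.set_ylabel(f'Y canonical variate {i + 1}')
    ax.set_title(f'corr = {cca_corr[i]:.3f}')
plt.tight_layout()
plt.show()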


Summary

Canonical Correlation Analysis (CCA) is a valuable statistical technique that enables us to uncover hidden relationships between two sets of variables. By identifying the most significant patterns and correlations, CCA helps us gain valuable insights with numerous potential applications. CCA can also be used for dimensionality reduction. In machine learning and deep learning, CCA has been used for cross-modal learning and cross-modal retrieval.






An Intro to Graph Convolutional Networks

Graph neural networks (GNNs) are deep learning networks that operate on graph data. These networks are becoming increasingly popular as numerous real-world applications are easily modeled as graphs. Graphs are unlike the images, text, and time series typically used in deep learning models: they are of arbitrary size and have complex topological structure. We represent a graph as a set of nodes and edges. In many instances, each node is associated with a feature vector. The adjacency matrix of a graph defines the presence of edges between the nodes. Furthermore, the ordering of nodes in a graph is arbitrary. These factors make it hard to use existing deep learning architectures and call for an architecture suited to graphs as inputs.

Permutation Invariance Architecture

Since the nodes in a graph can be ordered arbitrarily, two different adjacency matrices might represent the same graph. So whatever architecture we plan for graph computation, it should be invariant to the ordering of the nodes. This requirement is termed permutation invariance. Given a graph with adjacency matrix A1 and the corresponding node feature matrix X1, and another pair A2 and X2 representing the same graph but with a different ordering of nodes, permutation invariance implies that any function f that maps the graph to an embedding vector of dimension d must satisfy f(A1, X1) = f(A2, X2), i.e. the function should yield the same embedding irrespective of how the nodes are numbered. A small numerical check of this property is sketched below.
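
As an illustration with a made-up three-node graph, the sketch below checks that a simple degree-weighted sum readout yields the same embedding for two different node orderings of the same graph:

import numpy as np

# A small graph: adjacency matrix A1 and node feature matrix X1 (made-up values)
A1 = np.array([[0, 1, 1],
               [1, 0, 0],
               [1, 0, 0]])
X1 = np.array([[1.0, 2.0],
               [3.0, 4.0],
               [5.0, 6.0]])

# The same graph with its nodes reordered by a permutation matrix P
P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
A2 = P @ A1 @ P.T
X2 = P @ X1

def readout(A, X):
    # Degree-weighted sum of node features: independent of node ordering
    degrees = A.sum(axis=1, keepdims=True)
    return (degrees * X).sum(axis=0)

print(np.allclose(readout(A1, X1), readout(A2, X2)))   # True: same embedding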

Message Passing Architecture

The deep learning architecture that works with graphs is based on message passing. The resulting deep learning model is also known as Graph Convolutional Networks (GCNs). The underlying graph for this architecture consists of an adjacency matrix and a matrix of node vectors. Each node in the graph starts by sharing its message, the associated node vector, with all its neighbors. Thus, every node receives one message per adjacent node as illustrated below.




As each node receives messages from its neighbors, the node feature vectors are updated. In matrix form, the GCN uses the following updating rule, where the k-th layer feature matrix $\bf{H}^{(k)}$ stacks the node vectors $\bf{h}_u^{(k)}$ as its rows:

$\bf{H}^{(k+1)} = \sigma\left(\hat{\bf{D}}^{-1/2}\hat{\bf{A}}\hat{\bf{D}}^{-1/2}\bf{H}^{(k)}\bf{W}^{(k)}\right)$,

where $\bf{W}^{(k)}$ is the weight matrix that transforms the node features into messages $(\bf{H}^{(k)}\bf{W}^{(k)})$. We add the identity matrix to the adjacency matrix $\bf{A}$ so that each node also sends its own message to itself: $\hat{\bf{A}}=\bf{A}+\bf{I}$. In place of plain summing, we perform a normalized averaging of the messages via the degree matrix $\hat{\bf{D}}$ of $\hat{\bf{A}}$. For a single node u, the same updating rule can be written as:

$\bf{h}_u^{(k+1)} = \sigma\Bigl(\bf{W}^{(k)} \sum_{v\in N(u)} \frac{\bf{h}_v^{(k)}}{\sqrt{|N(u)|\,|N(v)|}} \Bigr)$,

where $N(u)$ denotes the set of neighbors of node u, including u itself, and $|N(u)|$ is its size. The denominator performs a symmetric normalization of the sum accumulated in the numerator. The normalization is important to avoid large aggregated values for nodes with many neighbors. A nonlinearity, denoted by σ, is applied in the update; ReLU is the nonlinearity generally used in GCNs.

A Simple Example to Illustrate Message Passing Computation

We will use the graph shown above for our illustration. The adjacency matrix of the graph is

The $\hat {\bf{A}}$ and $\hat {\bf{D}}$ matrices are then:

Let the matrix of node features be the following

The $\hat{\bf{D}}^{-1/2}$ matrix is then $\text{diag}(1/\sqrt{2},\, 1/\sqrt{4},\, 1/\sqrt{3},\, 1/\sqrt{3})$. Assuming $\bf{W}$ to be an identity matrix of size 2, we carry out the multiplications to obtain the updated matrix of node feature vectors, before passing through the nonlinearity, as

[[0.707 1.560], [3.387 4.567], [3.910 4.866], [3.910 4.866]].
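
This computation can be checked with a few lines of NumPy. Since the figures showing $\hat{\bf{A}}$, $\hat{\bf{D}}$, and the node features are not reproduced here, the matrices below are ones consistent with the stated $\hat{\bf{D}} = \text{diag}(2, 4, 3, 3)$ and with the result above (which it reproduces up to rounding):

import numpy as np

# Adjacency with self-loops (A_hat = A + I), consistent with D_hat = diag(2, 4, 3, 3)
A_hat = np.array([[1, 1, 0, 0],
                  [1, 1, 1, 1],
                  [0, 1, 1, 1],
                  [0, 1, 1, 1]], dtype=float)

# Node feature matrix assumed for this worked example
H = np.array([[0.0, 1.0],
              [2.0, 3.0],
              [4.0, 5.0],
              [6.0, 7.0]])

W = np.eye(2)                                           # weight matrix taken as identity
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))  # D_hat^{-1/2}

# One round of message passing (before the nonlinearity)
H_new = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
print(np.round(H_new, 3))
# [[0.707 1.561]
#  [3.387 4.568]
#  [3.91  4.866]
#  [3.91  4.866]]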

The graph convolution layer just described limits message passing to immediate neighbors. With multiple such layers, we have a GCN with message passing between nodes beyond the local neighborhood. The figure below presents a visualization of a GCN with two layers of message passing.


Training GCNs

How do we find the weights used in a graph neural network? This is done by defining a loss function and using SGD to find the optimal weight values. The choice of the loss function depends upon the task the GCN is to be used for. We will consider the node classification task for our discussion. I plan to cover the training for other tasks, such as link prediction and graph classification, in future posts.

Node classification implies assigning a label or category to a node based on a training set of nodes with known labels. Given $\bf{y}_u$ as the one-hot encoding of the true label of node u and $\bf{z}_u$ as its node embedding output by the GNN, the negative log-likelihood loss, defined below, is the most common choice for learning the weights:

$L = -\sum_{u\in \bf{V}_{\text{train}}} \log\bigl(\text{softmax}(\bf{z}_u)^T \bf{y}_u\bigr)$

The above expression implies the presence of a set of training nodes. These training nodes are a subset of all nodes participating in message passing. The remaining nodes are used as test nodes to determine how well the learning has worked. It is important to remember that all nodes, whether designated as training nodes or not, are involved in message passing to generate embeddings; only the subset of nodes marked as training nodes is used in the loss function to optimize the weight parameters. The nodes not designated as training nodes are called transductive test nodes. Since the transductive test nodes do participate in embedding generation, this setup is different from supervised learning and is termed semi-supervised learning.
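
A minimal sketch of this loss in NumPy, with hypothetical scores and labels (GNN libraries such as the one used below compute this internally):

import numpy as np

def node_classification_loss(Z, y_true, train_mask):
    # Z: (n_nodes, n_classes) raw scores output by the GNN
    # y_true: (n_nodes,) integer class labels
    # train_mask: (n_nodes,) boolean, True for training nodes
    Z = Z - Z.max(axis=1, keepdims=True)                  # numerical stability
    probs = np.exp(Z) / np.exp(Z).sum(axis=1, keepdims=True)
    # Only the training nodes contribute to the loss
    return -np.log(probs[train_mask, y_true[train_mask]]).sum()

# Hypothetical example: 4 nodes, 3 classes, two nodes labeled for training
Z = np.array([[2.0, 0.1, 0.3],
              [0.2, 1.5, 0.1],
              [0.3, 0.2, 1.1],
              [0.5, 0.4, 0.2]])
y_true = np.array([0, 1, 2, 0])
train_mask = np.array([True, True, False, False])
print(node_classification_loss(Z, y_true, train_mask))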

Applying GNN for Node Classification

Let's now train a GCN for a node classification task. I am going to use the scikit-network library. You can find a brief introduction to this library at this blog post. The library is developed along the lines of the popular scikit-learn library for machine learning and thus provides a familiar environment for graph machine learning.

Let's get started by importing all the necessary libraries.

# import necessary libraries
from IPython.display import SVG
import numpy as np
from scipy import sparse
from sknetwork.classification import get_accuracy_score
from sknetwork.gnn import GNNClassifier
from sknetwork.visualization import svg_graph

We will use the art_philo_science toy dataset. It consists of a selection of 30 Wikipedia articles with links between them. Each article is described by some of the words used in its summary, drawn from a list of 11 words. Each article belongs to one of three categories: arts, philosophy, or science. The goal is to retrieve the category of some articles (the test set) from the category of the other articles (the train set).

from sknetwork.data import art_philo_science
graph = art_philo_science(metadata=True)
adjacency = graph.adjacency
features = graph.biadjacency
names = graph.names
names_features = graph.names_col
names_labels = graph.names_labels
labels_true = graph.labels
position = graph.position

Just for illustration, let's print the names of all the 30 nodes.

print(names)  # These are the nodes
['Isaac Newton' 'Albert Einstein' 'Carl Linnaeus' 'Charles Darwin'
'Ptolemy' 'Gottfried Wilhelm Leibniz' 'Carl Friedrich Gauss'
'Galileo Galilei' 'Leonhard Euler' 'John von Neumann' 'Leonardo da Vinci'
'Richard Wagner' 'Ludwig van Beethoven' 'Bob Dylan' 'Igor Stravinsky'
'The Beatles' 'Wolfgang Amadeus Mozart' 'Richard Strauss' 'Raphael'
'Pablo Picasso' 'Aristotle' 'Plato' 'Augustine of Hippo' 'Thomas Aquinas'
'Immanuel Kant' 'Bertrand Russell' 'David Hume' 'René Descartes'
'John Stuart Mill' 'Socrates']

Let's also print the names of the features.

print(names_features)  # These are the features.
['contribution' 'theory' 'invention' 'time' 'modern' 'century' 'study'
 'logic' 'school' 'author' 'compose']

We will use a GNN with a single hidden layer of dimension 5; the output layer has one channel per class label.

hidden_dim = 5
n_labels = 3
gnn = GNNClassifier(dims=[hidden_dim, n_labels],
                    layer_types='Conv',
                    activations='ReLu',
                    verbose=True)
print(gnn)  # Details of the GNN
GNNClassifier(
    Convolution(layer_type: conv, out_channels: 5, activation: ReLu,
    use_bias: True, normalization: both, self_embeddings: True)
    Convolution(layer_type: conv, out_channels: 3, activation: Cross entropy,
    use_bias: True, normalization: both, self_embeddings: True))

We now hold out a random half of the nodes as a test set by hiding their labels, and train on the remaining labeled nodes.

# Test set: hide the labels of the test nodes (-1 marks an unlabeled node)
labels = labels_true.copy()
np.random.seed(42)
test_mask = np.random.random(size=len(labels)) < 0.5
labels[test_mask] = -1
# Training
labels_pred = gnn.fit_predict(adjacency, features, labels,
                    n_epochs=200, random_state=42, history=True)
In epoch   0, loss: 1.053, train accuracy: 0.462
In epoch  20, loss: 0.834, train accuracy: 0.692
In epoch  40, loss: 0.819, train accuracy: 0.692
In epoch  60, loss: 0.831, train accuracy: 0.692
In epoch  80, loss: 0.839, train accuracy: 0.692
In epoch 100, loss: 0.839, train accuracy: 0.692
In epoch 120, loss: 0.825, train accuracy: 0.692
In epoch 140, loss: 0.771, train accuracy: 0.769
In epoch 160, loss: 0.557, train accuracy: 1.000
In epoch 180, loss: 0.552, train accuracy: 1.000

We now test for accuracy on test nodes.

# Accuracy on test set
get_accuracy_score(labels_true[test_mask], labels_pred[test_mask])
1.0

The accuracy is 100%, not a surprise because it is a toy problem.

Issues with GCNs

One major limitation of the GCNs discussed in this post is that these networks can generate embeddings only for fixed or static graphs. Thus, the node classification approach shown above cannot be used for evolving graphs, as exemplified by social networks. Another issue is the computational load for large graphs whose nodes have very large neighborhoods. In subsequent posts, we are going to look at some other GNN models, so be on the lookout.

Whose Model is Better?

You and your friend are training a neural network for classification. Both of you are using identical training data. The data has four classes, with 40% cat images, 10% dog images, and 25% each of horse and sheep images. Since the project deadline is nearing, both of you decide to run only a few epochs and get to report writing. At the same time, the two of you have a friendly wager of $10 going to whoever trains the better model.

At the end of training, you find out that your model, Net1, is making 30% recognition errors and the distribution of labels it assigns to the training data is 25% for each of the four classes. As luck would have it, your friend's model, Net2, is also yielding a 30% error rate, but its distribution of assigned labels is different: 40% cats, 10% dogs, 10% horses, and 40% sheep.

Since the error rate of both models is identical, your friend declares a tie. You, on the other hand, insist that your model, Net1, is slightly better and want your friend to pay up.

Who is right? Let's figure it out using the concept of cross entropy.

Cross Entropy

Before defining cross entropy, let's first look at entropy. Entropy is a measure of uncertainty. Think of a training set of images of cats and dogs for an image recognition problem. Assume the training set has an equal number of cat and dog images. If you are asked to blindly pull an image of a cat from the training set, you know that you have a 50% chance of success. Now change the mix so that cat images make up 70% of the set. If you are again asked to blindly pull an image of a cat, your chances of success are much better this time; you are more certain of success than with the previous mix. Entropy captures this notion of uncertainty and is defined through the following formula:

$E(C) =  -\sum_i p(c_i )\log p(c_i)$,

where C stands for a random variable representing the class label of a training example, and $p(c_i)$ stands for the probability of the label being $c_i$.

While entropy captures the uncertainty of a single event (a training set in our case), we are often interested in comparing a pair of events (a pair of training sets for example). Letting $C$ and $\hat{C}$ denote the respective random variables representing class labels, the cross entropy is measured by the following formula:

$H(C,\hat{C}) = -\sum_i p(c_i)\log p(\hat{c}_i)$

Note that cross entropy is not a symmetric function. The cross-entropy measure is popular for training multi-class classifiers because the loss surface obtained with this criterion is considered better suited for gradient-based search.

Now back to the wager. We are going to compute the cross entropy of the class label distributions produced by Net1 and Net2 with respect to the distribution in the training set, and see how close each model's label distribution is to the training set's. Plugging in the percentages of the labels produced by Net1, we get

$H(C, \text{Net1}) = -(0.4\log 0.25 + 0.1\log 0.25 + 0.25\log 0.25 + 0.25\log 0.25) = 2.0$

With Net2, the cross entropy is

$H(C, \text{Net2}) = -(0.4\log 0.4 + 0.1\log 0.1 + 0.25\log 0.1 + 0.25\log 0.4) \approx 2.02$

To see whether Net1 or Net2 produced a label distribution closer to the original, let's look at the entropy of the training set itself. It is

$E(C) = -(0.4\log 0.4 + 0.1\log 0.1 + 0.25\log 0.25 + 0.25\log 0.25) \approx 1.86$

All of the above calculations use base-2 logarithms. We see that Net1's cross-entropy value is slightly closer to the entropy of the training set. This means the label distribution produced by Net1 is comparatively more similar to the distribution of labels in the training set. Thus, your friend must pay you the $10.
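
These numbers are easy to verify with a few lines of NumPy:

import numpy as np

p_train = np.array([0.40, 0.10, 0.25, 0.25])   # label distribution in the training set
q_net1 = np.array([0.25, 0.25, 0.25, 0.25])    # labels assigned by Net1
q_net2 = np.array([0.40, 0.10, 0.10, 0.40])    # labels assigned by Net2

entropy = -np.sum(p_train * np.log2(p_train))
h_net1 = -np.sum(p_train * np.log2(q_net1))
h_net2 = -np.sum(p_train * np.log2(q_net2))

print(round(entropy, 2), round(h_net1, 2), round(h_net2, 2))   # 1.86 2.0 2.02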