%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID";
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import ktrain
from ktrain import graph as gr
Using TensorFlow backend.
using Keras version: 2.2.4
In this notebook, we will use ktrain to perform node classification on the Cora citation graph. Each node represents a paper pertaining to one of several paper topics. Links represent citations between papers. The attributes or features assigned to each node are in the form of a multi-hot-encoded vector of words appearing in the paper. The dataset is available here.
The dataset is already in the form expected by ktrain, so let's begin.
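Before loading, it can help to peek at the raw files. The sketch below assumes the standard Cora distribution layout (consistent with the sep='\t' argument used below): cora.content rows hold a paper ID, the multi-hot word features, and the topic label, while cora.cites rows hold a pair of paper IDs forming a citation edge.

```python
# Peek at the first row of each raw file to sanity-check the format.
# cora.content: <paper_id> <tab-separated 0/1 word features> <topic_label>
# cora.cites:   <paper_id> <paper_id>  (one citation edge per line)
with open('data/cora/cora.content') as f:
    fields = f.readline().strip().split('\t')
print('node id:', fields[0], '| num word features:', len(fields) - 2, '| label:', fields[-1])

with open('data/cora/cora.cites') as f:
    print('first edge:', f.readline().split())
```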
We will hold out 10% of the nodes as a test set. Since we set holdout_for_inductive=False, the held-out nodes remain in the graph, but only their features (not their labels) are visible to the model. This is referred to as transductive inference. Of the remaining nodes, 10% will be used for training and the rest for validation (also transductive inference). As with the holdout nodes, the features (but not the labels) of validation nodes are available to the model during training. The return value df_holdout contains the features for the held-out nodes, and G_complete is the original graph including the holdout nodes.
(train_data, val_data, preproc, df_holdout, G_complete) = gr.graph_nodes_from_csv(
'data/cora/cora.content', # node attributes/labels
'data/cora/cora.cites', # edge list
sample_size=20,
holdout_pct=0.1, holdout_for_inductive=False,
train_pct=0.1, sep='\t')
Largest subgraph statistics: 2485 nodes, 5069 edges
Size of training graph: 2485 nodes
Training nodes: 223
Validation nodes: 2013
Nodes treated as unlabeled for testing/inference: 249
Holdout node features are visible during training (transductive inference)
The preproc object includes a reference to the training graph and a DataFrame showing the features and target for each node in the graph (both training and validation nodes).
preproc.df.target.value_counts()
Neural_Networks           726
Genetic_Algorithms        406
Probabilistic_Methods     379
Theory                    344
Case_Based                285
Reinforcement_Learning    214
Rule_Learning             131
Name: target, dtype: int64
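Beyond the label distribution, we can also peek at the node features themselves. A quick sketch, relying only on the fact noted above that preproc.df holds the features and target for each node:

```python
# One row per node: multi-hot word-feature columns plus the 'target' column.
print(preproc.df.shape)
preproc.df.head()
```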
gr.print_node_classifiers()
graphsage: GraphSAGE: https://arxiv.org/pdf/1706.02216.pdf
learner = ktrain.get_learner(model=gr.graph_node_classifier('graphsage', train_data),
train_data=train_data,
val_data=val_data,
batch_size=64)
Is Multi-Label? False
done
Given the small number of batches per epoch, the learning-rate finder needs a larger number of epochs than usual to sweep through its range of learning rates. We will cap it at 100 here.
learner.lr_find(max_epochs=100)
simulating training for different learning rates... this may take a few moments...
Epoch 1/100
3/3 [==============================] - 1s 441ms/step - loss: 1.9648 - acc: 0.1302
Epoch 2/100
3/3 [==============================] - 0s 158ms/step - loss: 2.0053 - acc: 0.0873
...
Epoch 50/100
3/3 [==============================] - 1s 181ms/step - loss: 1.7460 - acc: 0.4323
...
Epoch 70/100
3/3 [==============================] - 1s 173ms/step - loss: 0.1220 - acc: 0.9945
...
Epoch 99/100
3/3 [==============================] - 1s 189ms/step - loss: 0.6683 - acc: 0.8854
Epoch 100/100
3/3 [==============================] - 0s 130ms/step - loss: 0.6474 - acc: 0.8957
done. Please invoke the Learner.lr_plot() method to visually inspect the loss plot to help identify the maximal learning rate associated with falling loss.
learner.lr_plot()
We will train the model using autofit, which uses a triangular learning rate policy. Based on the loss plot, we choose a maximum learning rate of 0.01. Training stops automatically once the validation loss no longer improves. We also save the weights of the model during training in case we would like to reload the weights from any epoch.
learner.autofit(0.01, checkpoint_folder='/tmp/saved_weights')
early_stopping automatically enabled at patience=5
reduce_on_plateau automatically enabled at patience=2
begin training using triangular learning rate policy with max lr of 0.01...
Epoch 1/1024
4/4 [==============================] - 7s 2s/step - loss: 1.9479 - acc: 0.2029 - val_loss: 1.7514 - val_acc: 0.3060
Epoch 2/1024
4/4 [==============================] - 5s 1s/step - loss: 1.6925 - acc: 0.4066 - val_loss: 1.6553 - val_acc: 0.3492
...
Epoch 38/1024
4/4 [==============================] - 6s 1s/step - loss: 0.1172 - acc: 1.0000 - val_loss: 0.6193 - val_acc: 0.8251
Epoch 00038: Reducing Max LR on Plateau: new max lr will be 0.005 (if not early_stopping).
Epoch 39/1024
4/4 [==============================] - 5s 1s/step - loss: 0.1163 - acc: 0.9960 - val_loss: 0.6133 - val_acc: 0.8266
...
Epoch 00041: Reducing Max LR on Plateau: new max lr will be 0.0025 (if not early_stopping).
...
Epoch 00043: Reducing Max LR on Plateau: new max lr will be 0.00125 (if not early_stopping).
Epoch 44/1024
4/4 [==============================] - 5s 1s/step - loss: 0.1035 - acc: 1.0000 - val_loss: 0.6291 - val_acc: 0.8197
Restoring model weights from the end of the best epoch
Epoch 00044: early stopping
Weights from best epoch have been loaded into model.
<keras.callbacks.History at 0x7f0d1835f518>
learner.validate(class_names=preproc.get_classes())
                        precision    recall  f1-score   support

            Case_Based       0.73      0.81      0.77       227
    Genetic_Algorithms       0.90      0.96      0.93       331
       Neural_Networks       0.83      0.86      0.84       592
 Probabilistic_Methods       0.87      0.83      0.85       314
Reinforcement_Learning       0.80      0.75      0.77       170
         Rule_Learning       0.86      0.60      0.71       106
                Theory       0.73      0.70      0.72       273

              accuracy                           0.82      2013
             macro avg       0.82      0.79      0.80      2013
          weighted avg       0.82      0.82      0.82      2013
array([[183,   5,  11,   4,   6,   2,  16],
       [  1, 318,  10,   0,   1,   0,   1],
       [ 18,   5, 507,  25,  12,   1,  24],
       [  5,   0,  31, 262,   5,   0,  11],
       [  3,  16,  19,   1, 128,   0,   3],
       [ 21,   0,   4,   2,   0,  64,  15],
       [ 20,   9,  29,   8,   9,   7, 191]])
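The raw confusion matrix is easier to read with class names attached. learner.validate returns the matrix (it is the array displayed above), so we can wrap it in a labeled DataFrame; this sketch assumes the row/column ordering matches preproc.get_classes(), consistent with the report above:

```python
import pandas as pd

# Rows are true classes, columns are predicted classes (assumed ordering).
classes = preproc.get_classes()
cm = learner.validate(class_names=classes)
pd.DataFrame(cm, index=classes, columns=classes)
```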
p = ktrain.get_predictor(learner.model, preproc)
In transductive inference, we make predictions for unlabeled nodes whose features were visible to the model during training. The validation nodes in our training graph are exactly such nodes, so predicting their labels is transductive inference.
Let's see how well we predict the first validation example.
p.predict_transductive(val_data.ids[0:1], return_proba=True)
array([[0.00738885, 0.00764509, 0.94959724, 0.00979447, 0.00634191, 0.00760743, 0.01162501]], dtype=float32)
val_data[0][1][0]
array([0., 0., 1., 0., 0., 0., 0.])
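Both the predicted probability vector and the one-hot ground truth peak at the third position, but it is friendlier to compare them by class name. A small sketch, assuming preproc.get_classes() is ordered consistently with the label encoding:

```python
import numpy as np

# Map the predicted probabilities and the one-hot ground truth to class names.
classes = preproc.get_classes()
probs = p.predict_transductive(val_data.ids[0:1], return_proba=True)[0]
print('predicted:', classes[int(np.argmax(probs))])
print('actual:   ', classes[int(np.argmax(val_data[0][1][0]))])
```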
Let's make predictions for all test nodes in the holdout set, measure test accuracy, and visually compare some of them with ground truth.
y_pred = p.predict_transductive(df_holdout.index, return_proba=False)
y_true = df_holdout.target.values
import pandas as pd
pd.DataFrame(zip(y_true, y_pred), columns=['Ground Truth', 'Predicted']).head()
|   | Ground Truth           | Predicted              |
|---|------------------------|------------------------|
| 0 | Theory                 | Theory                 |
| 1 | Genetic_Algorithms     | Theory                 |
| 2 | Neural_Networks        | Neural_Networks        |
| 3 | Neural_Networks        | Neural_Networks        |
| 4 | Reinforcement_Learning | Reinforcement_Learning |
import numpy as np
(y_true == np.array(y_pred)).mean()
0.8232931726907631
Our final test accuracy for transductive inference on the holdout nodes is 82.33%.
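For a per-class breakdown on the holdout nodes, analogous to the validation report above, we can feed the same y_true and y_pred into scikit-learn (assumed to be installed; ktrain itself depends on it):

```python
from sklearn.metrics import classification_report

# Per-class precision/recall/F1 over the holdout nodes.
print(classification_report(y_true, y_pred))
```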