Graph Neural Networks (GNNs) are a class of neural networks that analyze graph-structured data, infer relationships between objects, and make predictions from them.

The growing popularity of artificial intelligence and machine learning has further increased the demand for techniques like GNNs among businesses and consumers.

They help users across multiple industries perform tasks like image and text classification, natural language processing, product recommendation, social media analysis, and more.

The problem is that analyzing and representing graphs with standard methods like convolutional neural networks (CNNs) can be challenging.

This is where GNNs come in to solve several graph problems.

In this article, I’ll talk about what GNNs are, how they work, their applications in the real world, and more.

So, stay tuned!

Introduction to Graphs

A graph is essentially a way of representing related data. It helps visualize the relationships between objects, people, and concepts. Graphs can also assist in training machine learning (ML) models for complex tasks. Examples include social media networks, models of physical systems, fingerprint analysis, and more.


In the world of computer science, a graph is a data structure with two components: nodes (or vertices) and edges. Edges, also referred to as links, define the relationships between nodes, while nodes represent entities such as objects, people, or places.

Mathematically, a graph can be described by this formula:

G = (V, E)

Here, G represents the graph, V represents the set of vertices, and E represents the set of edges.

A graph is of two types:

  • Directed: In a directed graph, each edge points from one node to another, signifying a directional dependency between the two nodes.
  • Undirected: In an undirected graph, edges have no direction; connected nodes are simply linked mutually.
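As a concrete illustration of G = (V, E), here is a minimal Python sketch (the toy graph and function name are my own, not from any library) that stores a small graph as a vertex list, an edge list, and an adjacency matrix, in both the directed and undirected cases.

```python
import numpy as np

# A toy graph G = (V, E) with four vertices and four edges.
V = [0, 1, 2, 3]
E = [(0, 1), (0, 2), (1, 2), (2, 3)]

def adjacency(vertices, edges, directed=False):
    """Build an adjacency matrix; mirror each edge if the graph is undirected."""
    A = np.zeros((len(vertices), len(vertices)), dtype=int)
    for u, v in edges:
        A[u, v] = 1
        if not directed:
            A[v, u] = 1
    return A

print(adjacency(V, E, directed=True))   # asymmetric matrix
print(adjacency(V, E, directed=False))  # symmetric matrix
```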

Why Is Analyzing a Graph Challenging?


Analyzing a graph can be challenging for several reasons:

  • Graphs exist in non-Euclidean space: their data is not laid out on a flat 2D or 3D grid, which makes the graph and its data harder to interpret.
  • A graph can contain tens, hundreds, or even millions of nodes, and the number of edges can vary from node to node. This size and dimensionality make the graph more complex to interpret.
  • A graph's size is dynamic, and it has no fixed form. Two graphs can be drawn very differently yet have identical adjacency-matrix representations. As a result, analyzing graphs with traditional tools can be inefficient and challenging.

Graphs can also expand or contract over time. Representing graph data as a matrix can therefore be inefficient and produce sparse matrices, and several different matrices can describe the same graph. In addition, matrix representations aren't permutation invariant: reordering the nodes yields a different matrix for the same graph.
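A quick way to see the permutation problem: the same graph, written down with its nodes in a different order, produces a different adjacency matrix even though the structure is identical. This is a standalone numpy sketch for illustration only.

```python
import numpy as np

# Adjacency matrix of an undirected 3-node path graph 0 - 1 - 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

# Relabel the nodes: new node 0 is old node 2, new 1 is old 0, new 2 is old 1.
P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
A_permuted = P @ A @ P.T

print(np.array_equal(A, A_permuted))  # False: same graph, different matrix
```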

Taking all these problems into account, GNNs were introduced to solve graph prediction problems more effectively.

What Are Graph Neural Networks?

A Graph Neural Network (GNN) is a type of artificial neural network (ANN) designed to process data represented as graphs.


GNNs are built around the concepts of deep learning and graph theory. These networks use the predictive power of deep learning (a subset of machine learning and artificial intelligence) to analyze and process data.

A GNN takes a graph with node, edge, and context embeddings as input and produces a graph with updated, transformed embeddings as output, while preserving the connectivity (symmetry) of the input graph.

Think of the input to a GNN as a graph whose data points are nodes and whose connections are edges. You can apply GNNs directly to graphs, which makes it easier to perform prediction tasks at the node, edge, or graph level.

GNNs combine the graph structure with node feature information to learn graph representations through feature aggregation and propagation.
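Once a GNN has produced node embeddings this way, node-, edge-, and graph-level predictions look roughly like the following hedged numpy sketch; the embedding matrix, weights, and shapes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))        # embeddings for 5 nodes, 8 features each
W_node = rng.normal(size=(8, 3))   # weights of a 3-class classifier

# Node level: one prediction per node.
node_logits = H @ W_node           # shape (5, 3)

# Edge level: score a candidate edge (u, v) from its endpoint embeddings.
u, v = 0, 3
edge_score = H[u] @ H[v]           # a single similarity score

# Graph level: pool all node embeddings into one vector, then classify.
graph_embedding = H.mean(axis=0)   # shape (8,)
graph_logits = graph_embedding @ W_node
```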

In the real world, GNNs are used in studying and solving complex problems in multiple sectors. Some use cases are identifying specific nodes in a large network, classifying patterns, visualizing, filtering, and analyzing graphs, analyzing social networks, price prediction, and more.

GNNs were first introduced in a 2009 paper by Italian researchers. Later, two researchers from Amsterdam demonstrated the power of GNNs with a variant called the Graph Convolutional Network (GCN).

Types of GNNs


GNNs come in several types, many of which adapt ideas from other neural network families. The main types are:

#1. Recurrent Graph Neural Networks (RGNNs)

RGNNs learn diffusion patterns and can handle multi-relational graphs, where nodes participate in multiple types of relations. These networks use regularizers to enforce smoothness and avoid over-parameterization.

They produce good results with relatively little computational power. RGNNs are used in machine translation, text generation, speech recognition, video tagging, image description generation, text summarization, and more.

#2. Gated Graph Neural Networks (GGNNs)

These networks perform better than RGNNs on tasks involving long-term dependencies. GGNNs improve on RGNNs by adding gating on nodes and edges over long-term dependencies, which helps the network decide what information to remember and what to forget across states.

#3. Graph Convolutional Networks (GCNs)

GCNs are analogous to traditional CNNs. A GCN learns the features of a node by inspecting its neighboring nodes: it aggregates the neighboring node vectors, passes the result through a dense layer, and then applies a non-linearity via an activation function. GCNs are further divided into two types: spatial and spectral convolutional networks.
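As a sketch of the propagation rule a GCN layer typically uses (normalized neighborhood aggregation, a dense weight matrix, then a non-linearity), here is a minimal numpy version. The symmetric normalization with self-loops follows the common GCN formulation, and all variable names and the toy graph are illustrative.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN-style layer: aggregate neighbors with symmetric normalization,
    apply a dense weight matrix, then a ReLU non-linearity."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # normalized adjacency
    return np.maximum(0, A_norm @ H @ W)      # ReLU(A_norm H W)

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)        # a 3-node path graph
H = np.random.randn(3, 4)                     # input node features
W = np.random.randn(4, 2)                     # learnable weights
print(gcn_layer(A, H, W).shape)               # (3, 2)
```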

#4. Graph Auto-Encoder Networks

These neural networks learn graph representations with an encoder and try to reconstruct the input graph with a decoder, with the encoder and decoder joined by a bottleneck layer. Graph auto-encoder networks work well for link prediction because they handle class imbalance better.
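A minimal sketch of the encoder/decoder idea, assuming a GCN-style encoder like the one above and an inner-product decoder that reconstructs edges from the low-dimensional (bottleneck) node embeddings. Everything here is illustrative, not a specific library API.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Encoder: compress node features into a small "bottleneck" embedding.
def encode(A, X, W):
    A_hat = A + np.eye(A.shape[0])
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.tanh(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)

# Decoder: reconstruct the adjacency matrix from pairwise embedding similarity.
def decode(Z):
    return sigmoid(Z @ Z.T)        # entry (i, j) ~ probability of edge i-j

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
X = np.eye(3)                      # one-hot node features
W = np.random.randn(3, 2)          # encoder weights (bottleneck size 2)
print(decode(encode(A, X, W)).round(2))
```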

How Do GNNs Work?


Traditional deep learning primarily focuses on text and images, which are structured data described as grids of pixels or sequences of words.

Graphs, by contrast, are unstructured: they can take any size or shape and can contain many kinds of data beyond text and images.

GNNs process graphs through message passing, which lets machine learning algorithms exploit the graph structure. Message passing embeds information about each node's neighbors into that node. The AI models then use the embedded data to detect patterns and make meaningful predictions.

For instance, fraud detection systems use edge embeddings to flag suspicious transactions so that action can be taken in time to stop fraudulent activity.
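To make the message-passing step concrete, here is a library-free Python sketch (all names and the update rule are my own) in which every node averages its neighbors' features and mixes the result with its own vector. Stacking the step twice lets information travel two hops, in line with the shallow two- or three-layer models described next.

```python
import numpy as np

def message_pass(A, H):
    """One round of message passing: each node averages its neighbors'
    features and mixes the result with its own current features."""
    degrees = A.sum(axis=1, keepdims=True)
    neighbor_mean = (A @ H) / np.maximum(degrees, 1)   # aggregate messages
    return 0.5 * H + 0.5 * neighbor_mean               # simple update rule

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = np.random.randn(4, 3)      # initial node features

for _ in range(2):             # two rounds ~ a 2-layer GNN
    H = message_pass(A, H)
```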

GNNs rely on sparse matrix operations, and GNN models usually have only two or three layers. Other networks and AI models, by contrast, involve dense mathematics and can have hundreds of layers.

Difference between GNN and CNN

GNNs and Convolutional Neural Networks (CNNs) are both neural networks, but they differ in important ways.

CNNs help machines interpret visual data and perform tasks such as image recognition, object detection, and image classification. They excel on regular, grid-like 2D and 3D data, whereas GNNs shine on non-Euclidean, graph-structured data.


CNNs are designed to operate on structured, Euclidean data. GNNs, on the other hand, handle non-Euclidean data, where nodes have no fixed order and their number can vary.

This means you can apply CNNs to structured information like text or images, but not to unstructured data. GNNs, however, work on both structured and unstructured data; in that sense, a GNN is a generalized form of a CNN.

Furthermore, GNNs can analyze graphs and predict meaningful information in situations where CNNs are unsuitable.

That means GNNs are more efficient in solving graph problems than CNNs.

Applications of GNNs

The number of organizations and businesses applying GNNs is growing. GNNs are being adopted across multiple sectors, from industry to branches of science. Here are some of their applications:

#1. Graph classification

GNNs are used for graph classification, where entire graphs are sorted into different categories. It's similar to image classification, except the targets are graphs rather than images.

Graph classification has many applications, such as checking whether a protein is an enzyme in bioinformatics, performing social network analysis, categorizing documents in NLP, and more.
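A hedged sketch of graph-level classification: pool the per-node embeddings of an entire graph into a single vector and feed it to a classifier. The mean-pooling readout and the tiny linear classifier are illustrative choices, not the only ones.

```python
import numpy as np

def classify_graph(node_embeddings, W, b):
    """Mean-pool node embeddings into one graph vector, then score classes."""
    graph_vector = node_embeddings.mean(axis=0)   # readout over all nodes
    logits = graph_vector @ W + b
    return int(np.argmax(logits))                 # predicted class index

H = np.random.randn(6, 8)      # embeddings of a 6-node graph (e.g. a protein)
W = np.random.randn(8, 2)      # 2 classes, e.g. "enzyme" vs "non-enzyme"
b = np.zeros(2)
print(classify_graph(H, W, b))
```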

#2. Graph Visualization

Graph visualization is an area of computer science and mathematics at the intersection of information visualization and geometric graph theory. It deals with representing graphs visually, revealing anomalies and structures in the data and helping users understand the graph better.


#3. Graph clustering

GNNs are used for graph clustering, the process of grouping data represented as graphs. Two types of clustering can be performed on graph data: vertex clustering and graph clustering. Vertex clustering groups a graph's nodes into densely connected regions based on edge weights or edge distances.

Graph clustering, by contrast, treats whole graphs as the objects to be clustered and groups them according to their similarity.
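For vertex clustering, one common off-the-shelf route is spectral clustering on the (weighted) adjacency matrix. The sketch below uses scikit-learn's SpectralClustering with a precomputed affinity, assuming scikit-learn is available; the toy graph is made up.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Two densely connected triangles joined by a single edge.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])

# Treat the adjacency matrix as a precomputed affinity between nodes.
labels = SpectralClustering(n_clusters=2,
                            affinity="precomputed",
                            random_state=0).fit_predict(A)
print(labels)   # nodes 0-2 and nodes 3-5 should fall into different clusters
```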

#4. Node classification

One of the most important applications of GNNs is node classification: predicting a node's label by looking at the labels of its neighbors. This adds information to nodes where there is an information gap.

Models for this task are usually trained in a semi-supervised way, where only part of the graph is labeled.

For instance, to find bot accounts in your organization's network, you could train a GNN model on the graph using the labels of known bots and known genuine users, and then classify the remaining users as either normal users or bots.
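A minimal PyTorch sketch of the semi-supervised setup: only a few nodes have labels, so the loss is computed over a boolean mask while the rest of the graph still contributes through the aggregation step. The toy graph, features, labels, and the one-layer "GNN" are all illustrative, not a specific library's training loop.

```python
import torch
import torch.nn.functional as F

A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 0., 1.],
                  [1., 0., 0., 1.],
                  [0., 1., 1., 0.]])          # toy 4-node graph
X = torch.randn(4, 5)                         # node features
y = torch.tensor([0, 1, 0, 1])                # labels (only some are trusted)
train_mask = torch.tensor([True, True, False, False])  # labeled nodes only

W = torch.randn(5, 2, requires_grad=True)     # a one-layer "GNN" for brevity
optimizer = torch.optim.Adam([W], lr=0.01)

for _ in range(100):
    optimizer.zero_grad()
    A_hat = A + torch.eye(4)                               # add self-loops
    H = (A_hat / A_hat.sum(dim=1, keepdim=True)) @ X @ W   # aggregate + project
    loss = F.cross_entropy(H[train_mask], y[train_mask])   # masked loss
    loss.backward()
    optimizer.step()

print(H.argmax(dim=1))   # predicted class for every node, labeled or not
```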

#5. Link prediction

GNNs are helpful for link (edge) prediction, which aims to figure out the relationship between two objects or entities in a graph and to predict whether a connection exists between them.

For example, social networks use link prediction to understand social interactions and suggest possible friends to users. It's also used in law enforcement to predict and understand criminal associations, and in recommendation systems to suggest products, movies, music, and more.
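A hedged numpy sketch of link prediction: score a candidate pair of nodes by the similarity of their embeddings and treat high-scoring pairs as likely edges (for example, friend suggestions). The embeddings, candidate pairs, and threshold are invented for illustration.

```python
import numpy as np

def link_score(Z, u, v):
    """Score a candidate edge (u, v) by the dot product of node embeddings."""
    return 1.0 / (1.0 + np.exp(-(Z[u] @ Z[v])))   # squash to (0, 1)

Z = np.random.randn(100, 16)          # embeddings of 100 users from a GNN
candidates = [(0, 42), (0, 7), (0, 99)]
suggestions = [(u, v) for u, v in candidates if link_score(Z, u, v) > 0.8]
print(suggestions)                    # pairs likely to be connected
```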

#6. Computer Vision

GNNs are applied in computer vision to solve various problems involving human-object interaction, image classification, scene graph generation, and many more.


For example, you can use GNNs in scene graph generation, where the model must parse a given image into a graph consisting of objects and semantic relationships between them. This process can recognize and detect objects and forecast semantic relationships between various object pairs. 

#7. Text classification

Graphs can represent a body of text, with words as nodes and connections between words as edges. Text classification can then be performed at the graph or node level.

Using GNNs for text classification has many real-world use cases, like product recommendation, news categorization, and disease detection from reported symptoms.
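As a sketch of how a piece of text can be turned into the kind of graph a GNN consumes, the snippet below builds a simple word co-occurrence graph: unique words become nodes, and words that appear next to each other become edges. The adjacent-word window is an illustrative simplification.

```python
text = "graph neural networks classify text by modeling words as a graph"
words = text.split()

# Nodes: unique words, mapped to integer ids (insertion order preserved).
vocab = {w: i for i, w in enumerate(dict.fromkeys(words))}

# Edges: words that appear next to each other in the text.
edges = set()
for a, b in zip(words, words[1:]):
    if a != b:
        edges.add((vocab[a], vocab[b]))

print(len(vocab), "nodes,", len(edges), "edges")
```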

#8. Classifying and segmenting point clouds

LiDAR captures real-world scenes as 3D point clouds for 3D segmentation tasks. By representing point clouds as graphs, GNNs can classify and segment them more easily.

#9. Representing human-object interaction

Graphs are an excellent way to represent interactions. You can model humans and objects as nodes, and the relationships and interactions between them as edges.

#10. Natural language processing (NLP)


In NLP, text is traditionally treated as sequential data and modeled with an LSTM (Long Short-Term Memory) network or an RNN (Recurrent Neural Network). However, many NLP tasks also make heavy use of graphs because they provide a natural way to represent the raw relationships in language.

GNNs are used to solve many NLP problems, such as capturing semantics in machine translation, text classification, relation extraction, question answering, and more.

#11. Drug discovery


Discovering a drug or cure for a disease is a challenge not only for chemistry but also for society. The field requires thorough research, billions of dollars, and years of work to formulate a drug that treats a disease.

GNNs with AI can help shorten the research and screening processes so that a safer and more effective drug can be released to the public faster.

#12. Representing particle interactions

GNNs are helpful in particle physics, which studies the laws governing particle interactions. Graphs can represent the relationships and interactions between particles, and GNNs can help predict the properties of collision dynamics.

At present, the Large Hadron Collider (LHC) uses GNNs to identify interesting particles from images generated in various experiments. 

#13. Traffic prediction

Smart transportation systems rely on predictions of traffic speed and road density. These prediction tasks can be handled by Spatial-Temporal Graph Neural Networks (STGNNs).

Here, the traffic network is modeled as a spatial-temporal graph: nodes are sensors positioned along roads, edges are weighted by the distances between sensor pairs, and each node carries the average traffic speed as its input.
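A small sketch of how such a spatial-temporal traffic graph might be laid out in code: sensors are nodes, pairwise road distances weight the edges, and each node carries a short history of average speeds as its input signal. All names and numbers are invented for illustration.

```python
import numpy as np

sensors = ["S1", "S2", "S3"]                  # road sensors = graph nodes

# Edge weights: distance in km between connected sensor pairs (0 = no road).
distance_km = np.array([[0.0, 1.2, 0.0],
                        [1.2, 0.0, 2.5],
                        [0.0, 2.5, 0.0]])

# Node signal: average speed (km/h) at each sensor over the last 4 time steps.
speed_history = np.array([[55, 52, 48, 40],
                          [60, 58, 57, 55],
                          [30, 28, 25, 20]], dtype=float)

# A spatial-temporal GNN would take (distance_km, speed_history) as input
# and predict the next time step's speed for every sensor.
```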

#14. Other applications

Apart from the above, GNNs are used in electronic health record modeling, adversarial attack prevention, brain network analysis, social influence prediction, program reasoning and verification, predicting election outcomes, and many more.

Learning Resources

Below are some of the best books to learn about GNNs.

#1. Graph Neural Networks by Lingfei Wu and others

This book is a comprehensive guide to GNNs that discusses the objectives of graph representation learning.

It also elucidates the current developments, history, and future of GNNs along with some of its basic theories and methods.

#2. Graph Neural Networks in Action

It’s a hands-on guide to deep learning models based on graphs and how to build advanced GNNs for molecular modeling, recommendation engines, and more.

#3. Introduction to Graph Neural Networks by Zhiyuan Liu

Learn the basic concepts, applications, and models of GNNs.   

#4. Graph Neural Network by Younes Sadat-Nejad

It’s an introductory course to GNNs available on Udemy. It will help you learn Graph Representational Learning and GNN. 

Conclusion

With AI's increasing popularity and graphs becoming richer and more sophisticated, Graph Neural Networks (GNNs) are becoming ever more useful. They are a powerful tool for making predictions and are applied in multiple sectors, from networking and computer vision to chemistry, physics, and healthcare.

You may also explore regression vs. classification in machine learning.