
Physics Informed Neural Networks: Bridging Machine Learning and Scientific Computing

Physics Informed Neural Networks (PINNs) represent a groundbreaking approach to solving complex physical problems by combining the power of neural networks with our knowledge of physical laws. In this post, we'll explore what PINNs are, how they work, and implement a simple example to solve a differential equation.

Understanding PINNs

Traditional neural networks learn patterns from data alone. PINNs go a step further by incorporating physical laws directly into the learning process. They do this by adding physics-based constraints to the loss function, ensuring that the network's predictions not only fit the data but also satisfy known physical equations. The loss function used in PINNs therefore consists of two components. The first is the familiar data loss, which measures how well the network fits the available data. The second is the physics loss, which measures how well the network satisfies the governing physical equations. For example, if we are solving a differential equation du/dt = f(u,t), the physics loss penalizes how far the predicted solution is from satisfying this equation.
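
As a rough sketch of how these two terms can be combined (the names f, t_data, u_data, and t_collocation here are illustrative choices, not a standard API), the PINN loss for du/dt = f(u,t) might look like this:

import torch

def pinn_loss(model, t_data, u_data, t_collocation, f):
    # Data loss: how well the network fits the available measurements
    data_loss = torch.mean((model(t_data) - u_data)**2)

    # Physics loss: how well the network satisfies du/dt = f(u, t)
    # at a set of collocation points (t_collocation must have requires_grad=True)
    u = model(t_collocation)
    u_t = torch.autograd.grad(
        u, t_collocation,
        grad_outputs=torch.ones_like(u),
        create_graph=True
    )[0]
    physics_loss = torch.mean((u_t - f(u, t_collocation))**2)

    return data_loss + physics_loss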

Key Advantages of PINNs:

1. They require fewer training data points compared to traditional neural networks

2. Solutions automatically satisfy physical constraints

3. They can handle both forward and inverse problems (see the short inverse-problem sketch after this list)

4. Capable of solving complex partial differential equations (PDEs)
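
To make point 3 concrete, here is a minimal sketch of an inverse problem: suppose the decay rate k in du/dt = -k*u is unknown, but we have a few measurements of u(t). Treating k as a trainable parameter alongside the network weights lets the PINN recover it. The architecture mirrors the forward example developed later in this post, and the "measurements" below are synthetic data generated for illustration only.

import torch
import torch.nn as nn

class InversePINN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, 20), nn.Tanh(),
            nn.Linear(20, 20), nn.Tanh(),
            nn.Linear(20, 1)
        )
        # Unknown physical parameter, learned jointly with the network weights
        self.k = nn.Parameter(torch.tensor(0.5))

    def forward(self, t):
        return self.net(t)

model = InversePINN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# Synthetic measurements generated from the true solution exp(-2t), i.e. k = 2
t_data = torch.tensor([[0.0], [0.5], [1.0], [1.5]])
u_data = torch.exp(-2.0 * t_data)

# Collocation points where the physics residual is enforced
t_col = torch.linspace(0, 2, 50, requires_grad=True).reshape(-1, 1)

for epoch in range(5000):
    optimizer.zero_grad()
    # Data loss on the measurements
    data_loss = torch.mean((model(t_data) - u_data)**2)
    # Physics loss: du/dt + k*u = 0 at the collocation points
    u_pred = model(t_col)
    u_t = torch.autograd.grad(u_pred, t_col,
                              grad_outputs=torch.ones_like(u_pred),
                              create_graph=True)[0]
    physics_loss = torch.mean((u_t + model.k * u_pred)**2)
    loss = data_loss + physics_loss
    loss.backward()
    optimizer.step()

print(f'Recovered k: {model.k.item():.3f}')  # ideally close to the true value 2.0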

Implementing a Simple PINN

Let's implement a PINN to solve a basic ordinary differential equation (ODE):

du/dt = -u, u(0) = 1

This is the equation for exponential decay with initial condition u(0) = 1; its analytical solution is u(t) = exp(-t). The code is shown below. We use a simple feedforward neural network with tanh activation functions. The input is time t, and the output is our approximation of u(t). Our loss combines two terms:
   - Physics loss: Measures how well our solution satisfies du/dt = -u
   - Initial condition loss: Ensures u(0) = 1
We use PyTorch's autograd to compute du/dt, which is needed for the physics loss. The network is trained using the Adam optimizer to minimize the combined loss. We also include code segments for visualization.
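
Since autograd does the heavy lifting here, a quick standalone illustration may help. torch.autograd.grad differentiates a computed tensor with respect to its inputs; the snippet below (independent of the PINN code that follows) checks that the derivative of sin(t) is cos(t):

import torch

t = torch.linspace(0, 3.14, 5, requires_grad=True)
y = torch.sin(t)
# Each y[i] depends only on t[i], so this yields the elementwise derivative dy/dt
dy_dt = torch.autograd.grad(y, t, grad_outputs=torch.ones_like(y))[0]
print(torch.allclose(dy_dt, torch.cos(t)))  # True

With that in hand, here is the full PINN implementation: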

import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt

class PINN(nn.Module):
    def __init__(self):
        super().__init__()
        # Neural network architecture
        self.net = nn.Sequential(
            nn.Linear(1, 20),
            nn.Tanh(),
            nn.Linear(20, 20),
            nn.Tanh(),
            nn.Linear(20, 1)
        )
    
    def forward(self, t):
        return self.net(t)
    
    def loss_function(self, t):
        # Compute du/dt using autograd
        u_pred = self.forward(t)
        u_t = torch.autograd.grad(
            u_pred, t,
            grad_outputs=torch.ones_like(u_pred),
            create_graph=True
        )[0]
        
        # Physics loss: du/dt + u = 0
        physics_loss = torch.mean((u_t + u_pred)**2)
        
        # Initial condition loss: u(0) = 1
        ic_loss = torch.mean((self.forward(torch.zeros_like(t)) - 1.0)**2)
        
        return physics_loss + ic_loss, physics_loss.item(), ic_loss.item()

# Training setup
model = PINN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
t = torch.linspace(0, 5, 100, requires_grad=True).reshape(-1, 1)

# Lists to store loss history
total_losses = []
physics_losses = []
ic_losses = []

# Training loop
n_epochs = 5000
for epoch in range(n_epochs):
    optimizer.zero_grad()
    total_loss, physics_loss, ic_loss = model.loss_function(t)
    total_loss.backward()
    optimizer.step()
    
    # Store losses
    total_losses.append(total_loss.item())
    physics_losses.append(physics_loss)
    ic_losses.append(ic_loss)
    
    if (epoch + 1) % 1000 == 0:
        print(f'Epoch {epoch+1}, Total Loss: {total_loss.item():.6f}, '
              f'Physics Loss: {physics_loss:.6f}, IC Loss: {ic_loss:.6f}')

# Create subplots for solutions and loss convergence
plt.figure(figsize=(15, 6))

# Plot 1: Solution comparison
plt.subplot(1, 2, 1)
with torch.no_grad():
    t_plot = torch.linspace(0, 5, 100).reshape(-1, 1)
    u_pred = model(t_plot)
    u_true = torch.exp(-t_plot)
    
    plt.plot(t_plot, u_pred, 'b-', label='PINN prediction')
    plt.plot(t_plot, u_true, 'r--', label='True solution')
    plt.xlabel('t')
    plt.ylabel('u(t)')
    plt.legend()
    plt.title('PINN Solution vs True Solution')
    plt.grid(True)

# Plot 2: Loss convergence
plt.subplot(1, 2, 2)
epochs = range(1, n_epochs + 1)
plt.semilogy(epochs, total_losses, 'b-', label='Total Loss')
plt.semilogy(epochs, physics_losses, 'r--', label='Physics Loss')
plt.semilogy(epochs, ic_losses, 'g-.', label='IC Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss (log scale)')
plt.legend()
plt.title('Loss Convergence')
plt.grid(True)

plt.tight_layout()
plt.show()
       
 
When run, the code above produces a plot comparing the PINN's solution to the analytical solution exp(-t). The PINN typically learns to approximate the true solution very closely, even though we never explicitly told it the analytical form.
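
To quantify the agreement rather than judging it from the plot alone, we can compute the maximum absolute error against the analytical solution (this snippet reuses model and assumes the training code above has already run):

with torch.no_grad():
    t_test = torch.linspace(0, 5, 200).reshape(-1, 1)
    u_pred = model(t_test)
    u_true = torch.exp(-t_test)
    # Largest pointwise deviation from the analytical solution over [0, 5]
    max_err = torch.max(torch.abs(u_pred - u_true)).item()
    print(f'Maximum absolute error: {max_err:.2e}')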



The loss convergence plot reveals several interesting aspects of the training process:

Initial Phase

- The total loss starts relatively high as the network's predictions are far from satisfying both the physics and initial conditions
- Both physics and initial condition (IC) losses contribute significantly to the total loss

Middle Phase

- We observe a rapid decrease in all loss components as the network learns to satisfy both constraints
- The physics loss typically takes longer to converge than the IC loss, as it needs to satisfy the differential equation across the entire domain

Final Phase

- The losses stabilize as the network finds a solution that satisfies both the physics and initial conditions
- Small fluctuations may persist due to the optimization process and the precision limits of our network.

Conclusion

Physics Informed Neural Networks represent a powerful fusion of machine learning and scientific computing. They allow us to solve complex physical problems while respecting underlying physical laws, often with less data than traditional approaches would require.

As the field continues to develop, we're seeing PINNs being applied to increasingly complex problems, from turbulent flows to quantum systems. Their ability to incorporate physical knowledge into the learning process makes them a valuable tool in scientific computing and engineering.