Physics-Informed Neural Networks (PINNs) combine the expressive power of neural networks with our knowledge of physical laws to solve complex physical problems. In this post, we'll explore what PINNs are and how they work, and implement a simple example to solve a differential equation.
Understanding PINNs
Traditional neural networks learn patterns from data alone. PINNs go a step further by incorporating physical laws directly into the learning process: physics-based constraints are added to the loss function, so the network's predictions not only fit the data but also satisfy known physical equations. The loss function of a PINN therefore has two components. The first is the familiar data loss, which measures how well the network fits the available data (including initial and boundary conditions). The second is the physics loss, which measures how well the network satisfies the governing equations. For example, if we're solving a differential equation du/dt = f(u, t), the physics loss penalizes how far the predicted solution is from satisfying this equation.
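Written out for this example, the total loss has roughly the following form (a sketch of the usual formulation; here u_θ is the network's prediction, the t_i are points where data u_i is available, the t_j are collocation points where the equation is enforced, and N_d and N_c count them):

$$
\mathcal{L} \;=\; \underbrace{\frac{1}{N_d}\sum_{i=1}^{N_d}\bigl(u_\theta(t_i)-u_i\bigr)^2}_{\text{data loss}} \;+\; \underbrace{\frac{1}{N_c}\sum_{j=1}^{N_c}\Bigl(\frac{du_\theta}{dt}\Big|_{t_j}-f\bigl(u_\theta(t_j),\,t_j\bigr)\Bigr)^2}_{\text{physics loss}}
$$

In the toy example implemented below there is no measured data at all, so the data term reduces to a single initial-condition penalty.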
Key Advantages of PINNs:
1. They require fewer training data points compared to traditional neural networks
2. Solutions automatically satisfy physical constraints
3. They can handle both forward and inverse problems (see the sketch after this list)
4. Capable of solving complex partial differential equations (PDEs)
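To make advantage 3 concrete, here is a minimal sketch of the inverse setting, under the assumption that the decay rate k in du/dt = -k·u is unknown: k is made a trainable parameter and recovered from a few observations of the solution. The class name InversePINN, the true value k = 2.0, the number of observation and collocation points, and the training schedule are all made up for illustration; they are not part of the example that follows.

import torch
import torch.nn as nn

class InversePINN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, 20), nn.Tanh(),
            nn.Linear(20, 20), nn.Tanh(),
            nn.Linear(20, 1)
        )
        # Unknown ODE coefficient, learned jointly with the network weights
        self.k = nn.Parameter(torch.tensor(0.5))

    def forward(self, t):
        return self.net(t)

# Synthetic observations generated from u(t) = exp(-2t), i.e. the true k is 2.0
t_data = torch.linspace(0, 2, 10).reshape(-1, 1)
u_data = torch.exp(-2.0 * t_data)

# Collocation points where the residual du/dt + k*u = 0 is enforced
t_col = torch.linspace(0, 2, 50, requires_grad=True).reshape(-1, 1)

model = InversePINN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(3000):
    optimizer.zero_grad()
    # Data loss: fit the sparse observations
    data_loss = torch.mean((model(t_data) - u_data) ** 2)
    # Physics loss: residual of du/dt + k*u = 0 at the collocation points
    u_col = model(t_col)
    u_t = torch.autograd.grad(u_col, t_col,
                              grad_outputs=torch.ones_like(u_col),
                              create_graph=True)[0]
    physics_loss = torch.mean((u_t + model.k * u_col) ** 2)
    loss = data_loss + physics_loss
    loss.backward()
    optimizer.step()

print(f'Recovered k: {model.k.item():.3f} (true value: 2.0)')

In the forward problem implemented in the rest of this post, the situation is simpler: the coefficient is fixed (k = 1), there is no data loss at all, and only the physics residual and the initial condition drive training.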
Implementing a Simple PINN
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
# The PINN below solves the ODE du/dt + u = 0 with u(0) = 1,
# whose analytical solution is u(t) = exp(-t).
class PINN(nn.Module):
    def __init__(self):
        super().__init__()
        # Neural network architecture: 1 input (t) -> 1 output (u)
        self.net = nn.Sequential(
            nn.Linear(1, 20),
            nn.Tanh(),
            nn.Linear(20, 20),
            nn.Tanh(),
            nn.Linear(20, 1)
        )

    def forward(self, t):
        return self.net(t)

    def loss_function(self, t):
        # Compute du/dt using autograd
        u_pred = self.forward(t)
        u_t = torch.autograd.grad(
            u_pred, t,
            grad_outputs=torch.ones_like(u_pred),
            create_graph=True
        )[0]
        # Physics loss: residual of du/dt + u = 0 at the collocation points
        physics_loss = torch.mean((u_t + u_pred)**2)
        # Initial condition loss: u(0) = 1
        ic_loss = torch.mean((self.forward(torch.zeros_like(t)) - 1.0)**2)
        return physics_loss + ic_loss, physics_loss.item(), ic_loss.item()
# Training setup
model = PINN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
t = torch.linspace(0, 5, 100, requires_grad=True).reshape(-1, 1)
# Lists to store loss history
total_losses = []
physics_losses = []
ic_losses = []
# Training loop
n_epochs = 5000
for epoch in range(n_epochs):
    optimizer.zero_grad()
    total_loss, physics_loss, ic_loss = model.loss_function(t)
    total_loss.backward()
    optimizer.step()

    # Store losses
    total_losses.append(total_loss.item())
    physics_losses.append(physics_loss)
    ic_losses.append(ic_loss)

    if (epoch + 1) % 1000 == 0:
        print(f'Epoch {epoch+1}, Total Loss: {total_loss.item():.6f}, '
              f'Physics Loss: {physics_loss:.6f}, IC Loss: {ic_loss:.6f}')
# Create subplots for solutions and loss convergence
plt.figure(figsize=(15, 6))
# Plot 1: Solution comparison
plt.subplot(1, 2, 1)
with torch.no_grad():
    t_plot = torch.linspace(0, 5, 100).reshape(-1, 1)
    u_pred = model(t_plot)
    u_true = torch.exp(-t_plot)
plt.plot(t_plot, u_pred, 'b-', label='PINN prediction')
plt.plot(t_plot, u_true, 'r--', label='True solution')
plt.xlabel('t')
plt.ylabel('u(t)')
plt.legend()
plt.title('PINN Solution vs True Solution')
plt.grid(True)
# Plot 2: Loss convergence
plt.subplot(1, 2, 2)
epochs = range(1, n_epochs + 1)
plt.semilogy(epochs, total_losses, 'b-', label='Total Loss')
plt.semilogy(epochs, physics_losses, 'r--', label='Physics Loss')
plt.semilogy(epochs, ic_losses, 'g-.', label='IC Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss (log scale)')
plt.legend()
plt.title('Loss Convergence')
plt.grid(True)
plt.tight_layout()
plt.show()
When run, the code above produces a plot comparing the PINN's solution to the analytical solution exp(-t), along with the loss curves. The PINN typically learns to approximate the true solution very well, even though we never told it the analytical solution; it only ever saw the differential equation and the initial condition.
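For a quantitative check rather than an eyeball comparison, a few extra lines (reusing t_plot, u_pred, and u_true from the plotting code above) report the error against the analytical solution:

# Quantitative comparison against the analytical solution exp(-t)
with torch.no_grad():
    abs_err = torch.abs(u_pred - u_true)
    print(f'Max abs error:  {abs_err.max().item():.2e}')
    print(f'Mean abs error: {abs_err.mean().item():.2e}')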