
Pressure Dacy flow Init #19


Open · wants to merge 51 commits into base: main

Changes from 1 commit

Commits (51)
d7a2512
Pressure Dacy flow Init
bojunZhang-heng Jul 14, 2025
740d145
./Model_Script/Transolver_Darcy.sh update
bojunZhang-heng Jul 14, 2025
78df777
My_python_job/exp_darcy.py update
bojunZhang-heng Jul 14, 2025
06068c8
My_python_job/exp_darcy.py update
bojunZhang-heng Jul 14, 2025
e22ccb3
My_python_job/Model_Script/ Init
bojunZhang-heng Jul 14, 2025
8bcea5f
My_python_job/Pressure_train.lsf update
bojunZhang-heng Jul 14, 2025
f2023ec
My_python_job/model Init
bojunZhang-heng Jul 14, 2025
b6fda09
My_python_job/model_dict.py Init
bojunZhang-heng Jul 14, 2025
520290d
My_python_job/utils update
bojunZhang-heng Jul 14, 2025
fa5855d
My_python_job/utils_Dri.py update
bojunZhang-heng Jul 14, 2025
feee2f3
Usage_Python/ Init
bojunZhang-heng Jul 15, 2025
909ebb9
My_python_job/Model_Script/ update
bojunZhang-heng Jul 15, 2025
4996a40
My_python_job/exp_darcy.py update
bojunZhang-heng Jul 15, 2025
25f3111
My_python_job/exp_darcy.py update
bojunZhang-heng Jul 17, 2025
809d9fc
My_python_job/Usage_Python/Usage_Transolver_Structured_Mesh_2D.py update
bojunZhang-heng Jul 17, 2025
2380699
Usage_Python/Usage_Transolver_Structured_Mesh_2D.py update
bojunZhang-heng Jul 17, 2025
a9e9d48
model/Transolver_Structured_Mesh_2D.py update
bojunZhang-heng Jul 17, 2025
a946c18
PDE-Solving-StandardBenchmark/My_python_job/tmp_foo.py update
bojunZhang-heng Jul 17, 2025
6322a63
My_python_job/Pressure_train.lsf update
bojunZhang-heng Jul 21, 2025
4dcc888
My_python_job/exp_darcy.py update
bojunZhang-heng Jul 21, 2025
05ae7d1
My_python_job/model/Transolver_Structured_Mesh_2D.py update
bojunZhang-heng Jul 21, 2025
5988c5b
Model_Script/Transolver_Darcy.sh
bojunZhang-heng Jul 21, 2025
fab83fe
Usage_Python/Usage_Transolver_Structured_Mesh_2D.py update
bojunZhang-heng Jul 21, 2025
740d745
exp_darcy.py update
bojunZhang-heng Jul 21, 2025
8e76352
model/Transolver_Structured_Mesh_2D.py upadte
bojunZhang-heng Jul 21, 2025
95e0225
My_python_job/exp_darcy.py update
bojunZhang-heng Jul 21, 2025
61395a7
My_python_job/model/Transolver_Structured_Mesh_2D.py update
bojunZhang-heng Jul 21, 2025
077d07c
My_python_job/Pressure_train.lsf update
bojunZhang-heng Jul 21, 2025
10d8c86
My_python_job/Usage_Python/Usage_Transolver_Structured_Mesh_2D.py .ge…
bojunZhang-heng Jul 21, 2025
d762637
My_python_job/exp_darcy.py .get_grd() update
bojunZhang-heng Jul 21, 2025
86b1f2f
My_python_job/model/Transolver_Structured_Mesh_2D.py .get_grid() update
bojunZhang-heng Jul 21, 2025
2b0dbef
My_python_job/Usage_Python/Usage_Transolver_Structured_Mesh_2D.py MLP…
bojunZhang-heng Jul 21, 2025
277e8c1
My_python_job/exp_darcy.py MLP class Update
bojunZhang-heng Jul 21, 2025
af5d516
My_python_job/model/Transolver_Structured_Mesh_2D.py MLP class Update
bojunZhang-heng Jul 21, 2025
4760c68
Usage_Python/Usage_Physics_Attention.py Init
bojunZhang-heng Jul 21, 2025
7ceab96
model/Transolver_Structured_Mesh_2D.py update
bojunZhang-heng Jul 21, 2025
37355c8
My_python_job/Model_Script/Transolver_Darcy.sh update
bojunZhang-heng Jul 23, 2025
462da16
My_python_job/Pressure_train.lsf update
bojunZhang-heng Jul 23, 2025
9cf3d7f
My_python_job/Usage_Python/Usage_Physics_Attention.py update
bojunZhang-heng Jul 23, 2025
06e60b1
My_python_job/exp_darcy.py update
bojunZhang-heng Jul 23, 2025
3696a6b
My_python_job/Usage_Python/Usage_Physics_Attention.py Update kernel_s…
bojunZhang-heng Jul 23, 2025
b8f3be4
My_python_job/Usage_Python/Usage_Transolver_Structured_Mesh_2D.py update
bojunZhang-heng Jul 23, 2025
b8f9e7e
My_python_job/Usage_Python/Usage_Physics_Attention.py Update
bojunZhang-heng Jul 25, 2025
94a50a3
My_python_job/exp_darcy.py Update
bojunZhang-heng Jul 25, 2025
d478a5f
My_python_job/model/Physics_Attention.py update
bojunZhang-heng Jul 25, 2025
3830bfa
My_python_job/model/Transolver_Structured_Mesh_2D.py update
bojunZhang-heng Jul 25, 2025
844abcc
My_python_job/Usage_Python/Usage_Physics_Attention.py update
bojunZhang-heng Jul 25, 2025
6fb7971
My_python_job/Pressure_train.lsf update
bojunZhang-heng Jul 25, 2025
f519c08
My_python_job/Usage_Python/Usage_Physics_Attention.py Update
bojunZhang-heng Jul 25, 2025
a12bded
My_python_job/model/Physics_Attention.py update
bojunZhang-heng Jul 25, 2025
dc73a09
Usage_Python/Usage_Physics_Attention.py update
bojunZhang-heng Jul 25, 2025
My_python_job/exp_darcy.py update
bojunZhang-heng committed Jul 14, 2025
commit 78df777f5404a7acef8f19f548636896970aebcc
229 changes: 6 additions & 223 deletions PDE-Solving-StandardBenchmark/My_python_job/exp_darcy.py
@@ -11,7 +11,7 @@
from utils.normalizer import UnitTransformer
import matplotlib.pyplot as plt

parser = argparse.ArgumentParser('Training Transolver')
parser = argparse.ArgumentParser('Training Translover')

parser.add_argument('--lr', type=float, default=1e-3)
parser.add_argument('--epochs', type=int, default=500)
@@ -32,11 +32,11 @@
parser.add_argument('--slice_num', type=int, default=32)
parser.add_argument('--eval', type=int, default=0)
parser.add_argument('--save_name', type=str, default='darcy_Transolver')
parser.add_argument('--data_path', type=str, default='/data/fno')
parser.add_argument('--data_path', type=str, default='/work/mae-zhangbj/Data_store/Data_Pressure_Darcy/')

args = parser.parse_args()

os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu

train_path = args.data_path + '/piececonst_r421_N1024_smooth1.mat'
test_path = args.data_path + '/piececonst_r421_N1024_smooth2.mat'
ntrain = args.ntrain
@@ -45,228 +45,11 @@
eval = args.eval
save_name = args.save_name


def count_parameters(model):
    total_params = 0
    for name, parameter in model.named_parameters():
        if not parameter.requires_grad: continue
        params = parameter.numel()
        total_params += params
    print(f"Total Trainable Params: {total_params}")
    return total_params


def central_diff(x: torch.Tensor, h, resolution):
    # assuming PBC
    # x: (batch, n, feats), h is the step size, assuming n = h*w
    x = rearrange(x, 'b (h w) c -> b h w c', h=resolution, w=resolution)
    x = F.pad(x,
              (0, 0, 1, 1, 1, 1), mode='constant', value=0.)  # [b c t h+2 w+2]
    grad_x = (x[:, 1:-1, 2:, :] - x[:, 1:-1, :-2, :]) / (2 * h)  # f(x+h) - f(x-h) / 2h
    grad_y = (x[:, 2:, 1:-1, :] - x[:, :-2, 1:-1, :]) / (2 * h)  # f(x+h) - f(x-h) / 2h

    return grad_x, grad_y


def main():
    r = args.downsample
    h = int(((421 - 1) / r) + 1)
    s = h
    r = args.downsample
    h = int(((421 - 1) / r) + 1)
    s = h
    dx = 1.0 / s

    train_data = scio.loadmat(train_path)
    x_train = train_data['coeff'][:ntrain, ::r, ::r][:, :s, :s]
    x_train = x_train.reshape(ntrain, -1)
    x_train = torch.from_numpy(x_train).float()
    y_train = train_data['sol'][:ntrain, ::r, ::r][:, :s, :s]
    y_train = y_train.reshape(ntrain, -1)
    y_train = torch.from_numpy(y_train)

    test_data = scio.loadmat(test_path)
    x_test = test_data['coeff'][:ntest, ::r, ::r][:, :s, :s]
    x_test = x_test.reshape(ntest, -1)
    x_test = torch.from_numpy(x_test).float()
    y_test = test_data['sol'][:ntest, ::r, ::r][:, :s, :s]
    y_test = y_test.reshape(ntest, -1)
    y_test = torch.from_numpy(y_test)

    x_normalizer = UnitTransformer(x_train)
    y_normalizer = UnitTransformer(y_train)

    x_train = x_normalizer.encode(x_train)
    x_test = x_normalizer.encode(x_test)
    y_train = y_normalizer.encode(y_train)

    x_normalizer.cuda()
    y_normalizer.cuda()

    x = np.linspace(0, 1, s)
    y = np.linspace(0, 1, s)
    x, y = np.meshgrid(x, y)
    pos = np.c_[x.ravel(), y.ravel()]
    pos = torch.tensor(pos, dtype=torch.float).unsqueeze(0)

    pos_train = pos.repeat(ntrain, 1, 1)
    pos_test = pos.repeat(ntest, 1, 1)
    print("Dataloading is over.")

    train_loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(pos_train, x_train, y_train),
                                               batch_size=args.batch_size, shuffle=True)
    test_loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(pos_test, x_test, y_test),
                                              batch_size=args.batch_size, shuffle=False)

    model = get_model(args).Model(space_dim=2,
                                  n_layers=args.n_layers,
                                  n_hidden=args.n_hidden,
                                  dropout=args.dropout,
                                  n_head=args.n_heads,
                                  Time_Input=False,
                                  mlp_ratio=args.mlp_ratio,
                                  fun_dim=1,
                                  out_dim=1,
                                  slice_num=args.slice_num,
                                  ref=args.ref,
                                  unified_pos=args.unified_pos,
                                  H=s, W=s).cuda()

    optimizer = torch.optim.AdamW(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)

    print(args)
    print(model)
    count_parameters(model)

    scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=args.lr, epochs=epochs,
                                                    steps_per_epoch=len(train_loader))
    myloss = TestLoss(size_average=False)
    de_x = TestLoss(size_average=False)
    de_y = TestLoss(size_average=False)

    if eval:
        print("model evaluation")
        print(s, s)
        model.load_state_dict(torch.load("./checkpoints/" + save_name + ".pt"), strict=False)
        model.eval()
        showcase = 10
        id = 0
        if not os.path.exists('./results/' + save_name + '/'):
            os.makedirs('./results/' + save_name + '/')

        with torch.no_grad():
            rel_err = 0.0
            with torch.no_grad():
                for x, fx, y in test_loader:
                    id += 1
                    x, fx, y = x.cuda(), fx.cuda(), y.cuda()
                    out = model(x, fx=fx.unsqueeze(-1)).squeeze(-1)
                    out = y_normalizer.decode(out)
                    tl = myloss(out, y).item()

                    rel_err += tl

                    if id < showcase:
                        print(id)
                        plt.figure()
                        plt.axis('off')
                        plt.imshow(out[0, :].reshape(85, 85).detach().cpu().numpy(), cmap='coolwarm')
                        plt.colorbar()
                        plt.savefig(
                            os.path.join('./results/' + save_name + '/',
                                         "case_" + str(id) + "_pred.pdf"))
                        plt.close()
                        # ============ #
                        plt.figure()
                        plt.axis('off')
                        plt.imshow(y[0, :].reshape(85, 85).detach().cpu().numpy(), cmap='coolwarm')
                        plt.colorbar()
                        plt.savefig(
                            os.path.join('./results/' + save_name + '/', "case_" + str(id) + "_gt.pdf"))
                        plt.close()
                        # ============ #
                        plt.figure()
                        plt.axis('off')
                        plt.imshow((y[0, :] - out[0, :]).reshape(85, 85).detach().cpu().numpy(), cmap='coolwarm')
                        plt.colorbar()
                        plt.clim(-0.0005, 0.0005)
                        plt.savefig(
                            os.path.join('./results/' + save_name + '/', "case_" + str(id) + "_error.pdf"))
                        plt.close()
                        # ============ #
                        plt.figure()
                        plt.axis('off')
                        plt.imshow((fx[0, :].unsqueeze(-1)).reshape(85, 85).detach().cpu().numpy(), cmap='coolwarm')
                        plt.colorbar()
                        plt.savefig(
                            os.path.join('./results/' + save_name + '/', "case_" + str(id) + "_input.pdf"))
                        plt.close()

        rel_err /= ntest
        print("rel_err:{}".format(rel_err))
    else:
        for ep in range(args.epochs):
            model.train()
            train_loss = 0
            reg = 0
            for x, fx, y in train_loader:
                x, fx, y = x.cuda(), fx.cuda(), y.cuda()
                optimizer.zero_grad()

                out = model(x, fx=fx.unsqueeze(-1)).squeeze(-1)  # B, N , 2, fx: B, N, y: B, N
                out = y_normalizer.decode(out)
                y = y_normalizer.decode(y)

                l2loss = myloss(out, y)

                out = rearrange(out.unsqueeze(-1), 'b (h w) c -> b c h w', h=s)
                out = out[..., 1:-1, 1:-1].contiguous()
                out = F.pad(out, (1, 1, 1, 1), "constant", 0)
                out = rearrange(out, 'b c h w -> b (h w) c')
                gt_grad_x, gt_grad_y = central_diff(y.unsqueeze(-1), dx, s)
                pred_grad_x, pred_grad_y = central_diff(out, dx, s)
                deriv_loss = de_x(pred_grad_x, gt_grad_x) + de_y(pred_grad_y, gt_grad_y)
                loss = 0.1 * deriv_loss + l2loss
                loss.backward()

                if args.max_grad_norm is not None:
                    torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm)
                optimizer.step()
                train_loss += l2loss.item()
                reg += deriv_loss.item()
                scheduler.step()

            train_loss /= ntrain
            reg /= ntrain
            print("Epoch {} Reg : {:.5f} Train loss : {:.5f}".format(ep, reg, train_loss))

            model.eval()
            rel_err = 0.0
            id = 0
            with torch.no_grad():
                for x, fx, y in test_loader:
                    id += 1
                    if id == 2:
                        vis = True
                    else:
                        vis = False
                    x, fx, y = x.cuda(), fx.cuda(), y.cuda()
                    out = model(x, fx=fx.unsqueeze(-1)).squeeze(-1)
                    out = y_normalizer.decode(out)
                    tl = myloss(out, y).item()
                    rel_err += tl

            rel_err /= ntest
            print("rel_err:{}".format(rel_err))

            if ep % 100 == 0:
                if not os.path.exists('./checkpoints'):
                    os.makedirs('./checkpoints')
                print('save model')
                torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '.pt'))

    if not os.path.exists('./checkpoints'):
        os.makedirs('./checkpoints')
    print('save model')
    torch.save(model.state_dict(), os.path.join('./checkpoints', save_name + '.pt'))


if __name__ == "__main__":
    main()
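
For reference, the central-difference stencil defined in central_diff() above can be sanity-checked in isolation. The short sketch below is not part of the PR; the linear test field, the 85 x 85 resolution, and the step size are illustrative assumptions chosen so the analytic gradients are known.

import torch
import torch.nn.functional as F
from einops import rearrange


def central_diff(x: torch.Tensor, h, resolution):
    # Same pattern as in exp_darcy.py: reshape (b, n, c) -> (b, h, w, c), zero-pad one
    # cell on each side, then apply (f(x+h) - f(x-h)) / (2h) along each axis.
    x = rearrange(x, 'b (h w) c -> b h w c', h=resolution, w=resolution)
    x = F.pad(x, (0, 0, 1, 1, 1, 1), mode='constant', value=0.)
    grad_x = (x[:, 1:-1, 2:, :] - x[:, 1:-1, :-2, :]) / (2 * h)
    grad_y = (x[:, 2:, 1:-1, :] - x[:, :-2, 1:-1, :]) / (2 * h)
    return grad_x, grad_y


s = 85                                   # same 85 x 85 grid the evaluation plots assume
h = 1.0 / (s - 1)                        # spacing of torch.linspace(0, 1, s)
xs = torch.linspace(0, 1, s)
X, Y = torch.meshgrid(xs, xs, indexing='xy')
f = (X + 2 * Y).reshape(1, s * s, 1)     # linear field: df/dx = 1, df/dy = 2
gx, gy = central_diff(f, h, s)
# Away from the zero-padded border, the stencil recovers the analytic gradients.
print(gx[0, 1:-1, 1:-1, 0].mean().item())   # ~1.0
print(gy[0, 1:-1, 1:-1, 0].mean().item())   # ~2.0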