MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts

The PyTorch implementation of the ICML 2024 paper "MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts". MVMoE is a unified neural solver that can cope with 16 VRP variants simultaneously, even in a zero-shot manner. Concretely, the training tasks include CVRP, OVRP, VRPB, VRPL, VRPTW, and OVRPTW, while the test tasks include the ten unseen variants OVRPB, OVRPL, VRPBL, VRPBTW, VRPLTW, OVRPBL, OVRPBTW, OVRPLTW, VRPBLTW, and OVRPBLTW.

  • 🇦🇹 We will attend ICML 2024. Feel free to stop by our poster (Session 6 @ Hall C 4-9 #1003) for a discussion.
  • 🚀 RL4CO has added support for MVMoE - see the example.
  • 🎓 We have released an awesome paper list on Foundation Model for Combinatorial Optimization - FM4CO.
Overview

[Poster figure: overview of MVMoE]

Dependencies

  • Python >= 3.8
  • PyTorch >= 1.12

How to Run

Train
# Default: --problem_size=100 --pomo_size=100 --gpu_id=0
# 0. POMO
python train.py --problem={PROBLEM} --model_type=SINGLE

# 1. POMO-MTL
python train.py --problem=Train_ALL --model_type=MTL

# 2. MVMoE/4E 
python train.py --problem=Train_ALL --model_type=MOE --num_experts=4 --routing_level=node --routing_method=input_choice

# 3. MVMoE/4E-L
python train.py --problem=Train_ALL --model_type=MOE_LIGHT --num_experts=4 --routing_level=node --routing_method=input_choice
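
Optionally, the six training variants listed in the overview can be trained as single-task POMO baselines in one batch. The sketch below is illustrative only and not part of the repository: it assumes train.py is run from this directory and that --problem accepts the variant names from the overview.

# Illustrative batch-training sketch (assumption: --problem accepts these variant names).
import subprocess

TRAIN_PROBLEMS = ["CVRP", "OVRP", "VRPB", "VRPL", "VRPTW", "OVRPTW"]

for problem in TRAIN_PROBLEMS:
    subprocess.run(["python", "train.py", f"--problem={problem}", "--model_type=SINGLE"],
                   check=True)  # stop early if any run fails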
Evaluation
# 0. POMO
python test.py --problem={PROBLEM} --model_type=SINGLE --checkpoint={MODEL_PATH}

# 1. POMO-MTL
python test.py --problem=ALL --model_type=MTL --checkpoint={MODEL_PATH}

# 2. MVMoE/4E
python test.py --problem=ALL --model_type=MOE --num_experts=4 --routing_level=node --routing_method=input_choice --checkpoint={MODEL_PATH}

# 3. MVMoE/4E-L
python test.py --problem=ALL --model_type=MOE_LIGHT --num_experts=4 --routing_level=node --routing_method=input_choice --checkpoint={MODEL_PATH}

# 4. Evaluation on CVRPLIB
python test.py --problem=CVRP --model_type={MODEL_TYPE} --checkpoint={MODEL_PATH} --test_set_path=../data/CVRP-LIB
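
For zero-shot evaluation, a single trained multi-task checkpoint can also be tested on each unseen variant individually. The sketch below is illustrative only: it assumes --problem accepts a single variant name, and MODEL_PATH is a hypothetical placeholder for your checkpoint.

# Illustrative zero-shot evaluation sketch (assumptions noted above).
import subprocess

ZERO_SHOT_PROBLEMS = ["OVRPB", "OVRPL", "VRPBL", "VRPBTW", "VRPLTW",
                      "OVRPBL", "OVRPBTW", "OVRPLTW", "VRPBLTW", "OVRPBLTW"]
MODEL_PATH = "path/to/checkpoint.pt"  # hypothetical placeholder

for problem in ZERO_SHOT_PROBLEMS:
    subprocess.run(["python", "test.py", f"--problem={problem}", "--model_type=MOE",
                    "--num_experts=4", "--routing_level=node",
                    "--routing_method=input_choice", f"--checkpoint={MODEL_PATH}"],
                   check=True)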
Baseline
# 0. LKH3 - Support for ["CVRP", "OVRP", "VRPL", "VRPTW"]
python LKH_baseline.py --problem={PROBLEM} --datasets={DATASET_PATH} -n=1000 --cpus=32 -runs=1 -max_trials=10000

# 1. HGS - Support for ["CVRP", "VRPTW"]
python HGS_baseline.py --problem={PROBLEM} --datasets={DATASET_PATH} -n=1000 --cpus=32 -max_iteration=20000

# 2. OR-Tools - Support for all 16 VRP variants
python OR-Tools_baseline.py --problem={PROBLEM} --datasets={DATASET_PATH} -n=1000 --cpus=32 -timelimit=20
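
Results against these baselines are typically reported as the relative gap to the baseline cost. The helper below is a generic sketch, not tied to the repository's output format; it assumes per-instance route costs are already loaded as Python lists.

# Generic sketch: average relative gap (%) of a solver w.r.t. baseline costs.
def average_gap(solver_costs, baseline_costs):
    assert len(solver_costs) == len(baseline_costs)
    gaps = [(s - b) / b * 100.0 for s, b in zip(solver_costs, baseline_costs)]
    return sum(gaps) / len(gaps)

# Example with dummy costs; prints the mean gap in percent.
print(average_gap([10.1, 20.4], [10.0, 20.1]))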

How to Customize MoE

MoEs can easily be used in Transformer-based models by replacing a Linear/MLP layer with an MoE layer whose input and output dimensions match those of the original layer. Below, we provide two examples of how to customize MoEs, followed by a minimal drop-in sketch.

# 0. Our implementation based on https://github.com/davidmrau/mixture-of-experts
# Supported routing levels: node/instance/problem
# Supported routing methods: input_choice/expert_choice/soft_moe/random (only for node/instance gating levels)
from MOELayer import MoE
moe_layer = MoE(input_size={INPUT_DIM}, output_size={OUTPUT_DIM}, hidden_size={HIDDEN_DIM},
                num_experts={NUM_EXPERTS}, k=2, T=1.0, noisy_gating=True, 
                routing_level="node", routing_method="input_choice", moe_model="MLP")

# 1. tutel - https://github.com/microsoft/tutel
import torch
from tutel import moe as tutel_moe
moe_layer = tutel_moe.moe_layer(
                gate_type={'type': 'top', 'k': 2},
                model_dim={INPUT_DIM},
                experts={'type': 'ffn', 'count_per_node': {NUM_EXPERTS},
                         'hidden_size_per_expert': {HIDDEN_DIM},
                         'activation_fn': lambda x: torch.nn.functional.relu(x)},
            )
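
As a drop-in illustration of the replacement described above, the following minimal sketch (plain PyTorch, not the repository's encoder) wraps a feed-forward sublayer so that any module with matching input/output dimensions, including the MoE layers constructed above, can be plugged in. Note that some MoE implementations additionally return an auxiliary load-balancing loss, which then needs to be handled in the forward pass.

# Minimal drop-in sketch (assumption: standard PyTorch only, names are illustrative).
import torch
import torch.nn as nn

class EncoderBlockFFN(nn.Module):
    def __init__(self, embed_dim, ffn):
        super().__init__()
        self.ffn = ffn                      # nn.Linear/MLP or an MoE layer with matching dims
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):
        # Residual connection + normalization around the (possibly MoE) feed-forward.
        return self.norm(x + self.ffn(x))

embed_dim = 128
dense_ffn = nn.Sequential(nn.Linear(embed_dim, 512), nn.ReLU(), nn.Linear(512, embed_dim))
block = EncoderBlockFFN(embed_dim, dense_ffn)      # dense baseline
# block = EncoderBlockFFN(embed_dim, moe_layer)    # MoE replacement (handle any aux loss it returns)
print(block(torch.randn(2, 100, embed_dim)).shape) # torch.Size([2, 100, 128])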

Citation

@inproceedings{zhou2024mvmoe,
  title     = {MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts},
  author    = {Jianan Zhou and Zhiguang Cao and Yaoxin Wu and Wen Song and Yining Ma and Jie Zhang and Chi Xu},
  booktitle = {International Conference on Machine Learning},
  year      = {2024}
}

Acknowledgments
