
[Feat] Adding GLOP model #182

Draft · wants to merge 8 commits into main
Conversation

@cbhua (Member) commented May 27, 2024

Description

This PR adds the implementation of Global and Local Optimization Policies (GLOP), together with the implementation of Shortest Hamiltonian Path Problem (SHPP) environment.

Motivation and Context

GLOP is an important non-autoregressive (NAR) model for routing problems. For more details, please refer to the original paper.

Types of changes

  • New feature (non-breaking change which adds core functionality)
  • Documentation (update in the documentation)
  • Example (update in the folder of examples)

Checklist

  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature). A test for SHPP has been added, but a test for GLOP has not been added yet.
  • I have updated the documentation accordingly.

⚠️ Working on Debugging

The current implementation of GLOP is runnable, but it cannot learn.

I added a test notebook at examples/other/3-glop.ipynb. The notebook includes a test of the SHPP environment, a greedy rollout with the untrained GLOP policy (with visualizations for better understanding), and a launch of GLOP training. Please play with it and take a look.

Compared with the original GLOP, the following components are not implemented yet:

  • Maximum number of vehicles constraint;
  • Polar coordinates embedding;
  • Sparsification of the input graph.
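For reference, the polar coordinates embedding could be sketched roughly as follows. This is a minimal sketch, not the PR's implementation: it assumes each node's polar coordinates (radius, angle) are computed relative to a reference point such as the depot and appended to the raw (x, y) features; the function name `polar_features` is hypothetical.

```python
import numpy as np

def polar_features(coords: np.ndarray, center: np.ndarray) -> np.ndarray:
    """Append polar coordinates (radius, angle) relative to `center`
    to the raw (x, y) node features. Hypothetical helper, for illustration."""
    delta = coords - center                            # shift origin to the reference point
    radius = np.linalg.norm(delta, axis=-1)            # distance to the reference point
    angle = np.arctan2(delta[..., 1], delta[..., 0])   # quadrant-aware angle in (-pi, pi]
    return np.concatenate(
        [coords, radius[..., None], angle[..., None]], axis=-1
    )

# Example: 5 random nodes, polar coordinates relative to their centroid
coords = np.random.rand(5, 2)
feats = polar_features(coords, coords.mean(axis=0))
print(feats.shape)  # (5, 4): x, y, radius, angle
```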

I will add these missing parts soon. Here are some ideas that may help reproduce the results:

  • Using the same number of nodes and capacity settings as the original GLOP;
  • Instead of using AM for SHPP as the reviser, using insertion to solve sub-TSPs for more efficient training.
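The insertion idea above can be sketched in plain Python. This is a minimal cheapest-insertion sketch under the assumption that each sub-TSP is small enough for an O(n²) construction; `random_insertion` and `tour_length` are illustrative names, not part of the RL4CO codebase.

```python
import math
import random

def tour_length(tour, coords):
    """Total length of a closed tour over 2D points."""
    return sum(
        math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def random_insertion(coords):
    """Build a TSP tour by inserting nodes one at a time (in random order)
    at the position that increases the tour length the least."""
    order = list(range(len(coords)))
    random.shuffle(order)
    tour = order[:2]                      # start from an arbitrary edge
    for node in order[2:]:
        best_pos, best_delta = 0, float("inf")
        for i in range(len(tour)):        # try every insertion position
            a, b = tour[i], tour[(i + 1) % len(tour)]
            delta = (math.dist(coords[a], coords[node])
                     + math.dist(coords[node], coords[b])
                     - math.dist(coords[a], coords[b]))
            if delta < best_delta:
                best_pos, best_delta = i + 1, delta
        tour.insert(best_pos, node)
    return tour
```

Since insertion needs no gradients or GPU, it can serve as a fast drop-in sub-solver while the learned reviser is being debugged.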

If @henry-yeh @Furffico have time, could you help take a look at the implementation? We need to closely reproduce GLOP's results soon.

@henry-yeh (Member) commented:

After some experiments with the original GLOP implementation, I found that none of the current discrepancies (maximum number of vehicles constraint, polar coordinates embedding, input graph sparsification, ...) should be the reason for the training failure on CVRP100. It's weird @Furffico

@Furffico Furffico self-assigned this May 31, 2024
@fedebotu (Member) commented Jun 8, 2024

Could you push the latest version of GLOP?

@Furffico (Member) commented Jun 8, 2024

> Could you push the latest version of GLOP?

We were experimenting with the debug version, which incorporates part of the code from the official GLOP implementation. The "pure RL4CO" version still does not learn and requires further debugging.

@fedebotu (Member) commented Jun 8, 2024

I see. Given that it's still in RL4CO (just not fully refactored), I'd suggest merging this version now; the pure RL4CO version can then be merged once it's ready.

What do you think?

Cc: @cbhua

@cbhua (Member, Author) commented Jun 17, 2024

GLOP worked in the submission version, so we will clean up this branch and then push a clean final implementation.

@fedebotu fedebotu added this to the 0.5.0 milestone Jun 19, 2024
@fedebotu fedebotu removed this from the 0.5.0 milestone Sep 3, 2024