Follow these steps to set up and run the DDPM project:
### Clone the Repository

```shell
git clone https://github.com/ArintonAkos/diffusion-model/
```
### Install Dependencies

Install the required dependencies by running:

```shell
pip install -r requirements.txt
```

For detailed PyTorch installation instructions, see the PyTorch Get Started guide.
### Running the Code

Execute `main.py` from the command line with the desired parameters. Options include:

- `-A, --all`: Run setup, pre-train, and full-train steps.
- `-S, --setup`: Run setup steps (visualize and data-analysis).
- `-P, --pre-train`: Run pre-train steps (overfit-test and lr-find).
- `-F, --full-train`: Run full-train steps (train and evaluate).
- `--model-type`: Specify the model type (required). Choices: `['classification', 'ddpm']`.
- `-N, --new-run`: Create a new run ID.
- `--run-id`: Specify a custom run ID.
- `--visualize`: Visualize the dataset.
- `--data-analysis`: Analyze the dataset.
- `--overfit-test`: Perform an overfitting test.
- `--lr-find`: Find an optimal learning rate.
- `--train`: Train the model on the full dataset.
- `--evaluate`: Evaluate the model on the validation set.
- `--federated`: Enable federated training.
- `--model-path`: Specify the path to a pre-trained model.
- `--dataset-name`: Specify the dataset name.
- `--config-file`: Specify the path to the configuration file. Default: `configs/config.yml`.
Example command to train the model:

```shell
python main.py --train --new-run --config=configs/homogeneous.yml --federated
```
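The flag layout described above can be sketched with `argparse`. The parser below is an illustrative approximation, not the project's actual `main.py`; only a subset of the flags is shown, and the grouping and defaults are assumptions:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Illustrative sketch of the CLI described above; the real main.py may differ.
    parser = argparse.ArgumentParser(description="DDPM project runner (sketch)")
    parser.add_argument("-A", "--all", action="store_true",
                        help="Run setup, pre-train, and full-train steps")
    parser.add_argument("-S", "--setup", action="store_true",
                        help="Run setup steps (visualize and data-analysis)")
    parser.add_argument("-P", "--pre-train", action="store_true",
                        help="Run pre-train steps (overfit-test and lr-find)")
    parser.add_argument("-F", "--full-train", action="store_true",
                        help="Run full-train steps (train and evaluate)")
    parser.add_argument("--model-type", required=True,
                        choices=["classification", "ddpm"],
                        help="Model type to run")
    parser.add_argument("-N", "--new-run", action="store_true",
                        help="Create a new run ID")
    parser.add_argument("--run-id", help="Custom run ID")
    parser.add_argument("--train", action="store_true",
                        help="Train the model on the full dataset")
    parser.add_argument("--evaluate", action="store_true",
                        help="Evaluate the model on the validation set")
    parser.add_argument("--federated", action="store_true",
                        help="Enable federated training")
    parser.add_argument("--config-file", default="configs/config.yml",
                        help="Path to the configuration file")
    return parser

args = build_parser().parse_args(
    ["--model-type", "ddpm", "--train", "--new-run", "--federated"]
)
print(args.model_type, args.train, args.new_run, args.federated)
# → ddpm True True True
```

Note that `--model-type` is marked required, matching the option list above, so any invocation must include it.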
### Configuration

Edit the configuration file `configs/config.yml` to set parameters such as batch size and learning rate to suit your requirements.
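Loading such a config typically amounts to taking defaults and overlaying the user's values. Below is a stdlib-only sketch of that pattern; the key names are assumptions (the actual `config.yml` schema is not shown here), and the real project presumably uses a proper YAML parser such as PyYAML rather than this flat-file reader:

```python
# Stdlib-only sketch of config handling: defaults overlaid with values
# parsed from a flat "key: value" file. Key names are illustrative only.
DEFAULTS = {"batch_size": 32, "learning_rate": 1e-3, "epochs": 100}

def parse_flat_yaml(text: str) -> dict:
    """Parse simple 'key: value' lines (no nesting); numbers are coerced."""
    config = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line or ":" not in line:
            continue
        key, _, value = line.partition(":")
        value = value.strip()
        try:
            value = int(value)
        except ValueError:
            try:
                value = float(value)
            except ValueError:
                pass  # keep as string
        config[key.strip()] = value
    return config

def load_config(text: str) -> dict:
    """Defaults first, then user-supplied overrides win."""
    return {**DEFAULTS, **parse_flat_yaml(text)}

cfg = load_config("batch_size: 64\nlearning_rate: 0.0005\n")
print(cfg["batch_size"], cfg["learning_rate"], cfg["epochs"])  # → 64 0.0005 100
```

The merge order means anything you set in the file silently overrides the default, which is the usual precedence for per-run configuration files.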
### Contributing

Contributions to this project are welcome. Please follow the standard Git workflow: fork, clone, feature branch, commits, and pull request.
### Trained Models

- Centralized training of MLP:
  - Run: `ddpm_20231207-004644`
  - Model: `model_ep_99_20231207-010315.pth`
- Federated training of MLP:
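The checkpoint names above appear to encode an epoch number and a save timestamp. A small sketch of extracting both follows; the naming pattern is inferred from the single example shown, so treat it as an assumption rather than a documented convention:

```python
import re
from datetime import datetime

# Checkpoint names like "model_ep_99_20231207-010315.pth" seem to encode
# the epoch and a save timestamp; this pattern is inferred from one example.
CKPT_RE = re.compile(r"model_ep_(\d+)_(\d{8}-\d{6})\.pth$")

def parse_checkpoint_name(name: str) -> tuple[int, datetime]:
    """Return (epoch, saved_at) parsed from a checkpoint filename."""
    match = CKPT_RE.search(name)
    if match is None:
        raise ValueError(f"unrecognized checkpoint name: {name!r}")
    epoch = int(match.group(1))
    saved_at = datetime.strptime(match.group(2), "%Y%m%d-%H%M%S")
    return epoch, saved_at

epoch, saved_at = parse_checkpoint_name("model_ep_99_20231207-010315.pth")
print(epoch, saved_at.isoformat())  # → 99 2023-12-07T01:03:15
```

Parsing like this is handy for picking the latest checkpoint in a run directory, e.g. by sorting on the parsed timestamp.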
For a more in-depth understanding of DDPMs and their background, refer to the following papers and resources: