An Approach to Tight I/O Lower Bounds for Algorithms with Composite Procedures

DongFengZero/COCOON24

Due to space limitations, we provide the paper's appendix in this repository.

Some additional explanations on NAS task applications (skip if not interested)

Our exploratory experiments on NAS tasks build directly on Efficient Neural Architecture Search (ENAS), and we are grateful to its authors: https://github.com/carpedm20/ENAS-pytorch

For readers interested in introducing the I/O lower bound theorem into NAS tasks, we strongly recommend reading the appendix first; it explains the experimental setup and gives suggestions and future directions based on our experimental experience. If you have further suggestions or questions, you are welcome to discuss them with us by email.
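For readers encountering I/O lower bounds for the first time, here is a minimal, generic sketch of the classic Hong–Kung bound for dense matrix multiplication. This is a textbook result used purely for orientation, not the composite-procedure bound developed in the paper:

```python
import math

def matmul_io_lower_bound(n: int, M: int) -> float:
    """Classic Hong-Kung style I/O lower bound for n x n dense
    matrix multiplication with a fast memory (cache) of M words:
    Q = Omega(n^3 / sqrt(M)). A textbook illustration only, not
    the composite-procedure bound from the paper."""
    return n ** 3 / math.sqrt(M)

# For n = 1024 and a cache of M = 4096 words, the bound is
# 1024^3 / 64 = 16777216.0 word transfers.
print(matmul_io_lower_bound(1024, 4096))
```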

To account for the randomness of the experimental results, TEST.zip contains all neural network models generated across the 100 epochs (101 for CIFAR100) of the experiments that produced the data reported in the paper, so the results can be verified.

The full archive is available at https://pan.baidu.com/s/1Z0M8JnUaqcWcoA1FCvXpYA?pwd=6nc7 (extraction code: 6nc7).

Running the following scripts will reproduce the data in Table 4 of the paper.

| Case | Configuration printing | Accuracy test | Structure diagram |
| --- | --- | --- | --- |
| CIFAR10_NO_IOLB | testcnn_cifar10_no_p.py | testcnn_cifar10_no.py | CIFAR10_2024-03-29_15-11-05_NO\networks\099-010000-73.9943-best |
| CIFAR10_IOLB | testcnn_cifar10_p.py | testcnn_cifar10.py | CIFAR10_2024-03-29_15-11-35\networks\099-010000-35.8451-best |
| CIFAR100_NO_IOLB | testcnn_cifar100_no_p.py | testcnn_cifar100_no.py | CIFAR100_2024-04-04_18-00-10_No\networks\100-010100-14.7985-best |
| CIFAR100_IOLB | testcnn_cifar100_p.py | testcnn_cifar100.py | CIFAR100_2024-04-04_18-00-14\networks\100-010100-0.1335-best |

Note:

  1. We provide the model weights from the training runs that produced the corresponding results. Re-running from these weights may produce similar results, but identical results are not guaranteed; for exact reproduction, refer to the neural network structure diagrams we provide for all 100 epochs (101 for CIFAR100). These files serve as evidence that we carried out the experiments and are for researchers' reference only. The provided weights cover epochs 99-100 for CIFAR10 and epochs 108-109 for CIFAR100: unfortunately, the CIFAR100 weights for the 101st epoch were automatically cleared and not saved, so we instead provide the weights from epochs 108-109, when training was later stopped, to demonstrate that this experiment was performed.
  2. Update: On January 25, 2025, we fixed an issue in the code: when counting I/Os under cache reuse, the previous version overestimated the number of I/O operations to a certain extent. Please note that the search process described in the paper used the original, unfixed code, and we provide the raw data from that search process as a reference. For more rigorous academic discussion, the repository includes both the original and the corrected versions of the code.
  3. Our subsequent work will focus on compilation optimization and code generation rather than NAS tasks. If you still need help with I/O lower bound theory, you are welcome to contact me by email.
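Note 2 above concerns counting I/Os in the presence of cache reuse. As a generic illustration of the distinction involved (this is a toy model, not the repository's actual I/O-counting code), the following sketch contrasts a reuse-blind count, where every access is charged as an I/O, with an LRU cache model in which accesses to blocks still resident in the cache cost nothing:

```python
from collections import OrderedDict

def count_ios(trace, cache_size):
    """Count I/O operations (cache misses) for an access trace
    under an LRU cache of `cache_size` blocks. Ignoring reuse
    would charge every access as an I/O; modeling reuse, repeated
    accesses to cached blocks are free. Generic illustration only,
    not the repository's actual counting code."""
    cache = OrderedDict()
    ios = 0
    for block in trace:
        if block in cache:
            cache.move_to_end(block)       # reuse: cache hit, no I/O
        else:
            ios += 1                       # miss: one I/O operation
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return ios

trace = ["A", "B", "A", "C", "B", "A"]
print(count_ios(trace, cache_size=2))  # 5 I/Os; a reuse-blind count gives 6
```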

If you have further questions, please contact [email protected]
