Torch save compression

At the moment I use torch.save(state_dict, path) to store my model weights. (The same pipeline loads JPEG images, and I noticed that most of the time is spent in image preprocessing, using weights = ResNet50_Weights….) I have tried to shrink a .pt file using pruning, quantization, and various other methods, but these attempts have doubled the file size: a 20 MB file became 40 MB. I would expect torch.save to compress the data, so depending on the algorithm actually used, changes in the data could result in different file sizes.

The basic call is torch.save(state_dict, path); the path argument can be a regular path on the local file system or a path to storage supported by fsspec.

Broadly speaking, one can say the file grows because "PyTorch needs to save the computation graph, which is needed to call backward", hence the additional memory usage.

For comparison, NumPy makes the distinction explicit: np.savez saves several arrays into a single uncompressed file, while np.savez_compressed writes a compressed archive.

NNCF takes a different route and compresses the model itself rather than the file:

    init_loader = DataLoader(representative_dataset)
    nncf_config = register_default_init_args(nncf_config, init_loader)
    # Apply the specified compression algorithms to the model
    compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)
    # Now use compressed_model as a usual torch.nn.Module

In the R torch package, serialization lives in R/serialization.R, which defines torch_save, torch_load, torch_serialize, load_state_dict, torch_load_module, torch_load_tensor, legacy_torch_load, legacy_torch_serialize, and related read/write helpers.

Finally, note that sys.getsizeof is not a useful proxy for saved size: sys.getsizeof on a module such as nn.Conv2d(3, 3, 1, 1, 0) measures only the Python object's own overhead, not the tensor storage it owns.
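To see how much (if anything) external compression buys on top of torch.save, one can serialize the state dict to an in-memory buffer and gzip the bytes. This is only a sketch with a hypothetical toy model (not the model from the original post); trained float32 weights are often close to incompressible, so the gain from gzip alone can be small.

```python
import gzip
import io
import os
import tempfile

import torch
import torch.nn as nn

# Toy model, purely for illustration (assumption, not from the original post).
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10))

with tempfile.TemporaryDirectory() as d:
    raw_path = os.path.join(d, "model.pt")
    gz_path = os.path.join(d, "model.pt.gz")

    # Plain torch.save to a regular local path.
    torch.save(model.state_dict(), raw_path)
    raw_size = os.path.getsize(raw_path)

    # Serialize to an in-memory buffer, then gzip the bytes ourselves.
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)
    with gzip.open(gz_path, "wb") as f:
        f.write(buf.getvalue())
    gz_size = os.path.getsize(gz_path)

print(raw_size, gz_size)
```

The same buffer trick also works in the other direction: gzip.open the file, read the bytes into a BytesIO, and hand that to torch.load.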
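The np.savez point can be demonstrated directly: savez writes an uncompressed .npz archive, while savez_compressed applies zlib compression. A minimal sketch with an artificially compressible array (all zeros), where the difference is dramatic:

```python
import os
import tempfile

import numpy as np

# 1000 x 1000 float32 zeros: ~4 MB raw, and highly compressible.
a = np.zeros((1000, 1000), dtype=np.float32)

with tempfile.TemporaryDirectory() as d:
    raw = os.path.join(d, "arrays.npz")
    comp = os.path.join(d, "arrays_c.npz")

    np.savez(raw, a=a)              # uncompressed archive
    np.savez_compressed(comp, a=a)  # zlib-compressed archive

    raw_size = os.path.getsize(raw)
    comp_size = os.path.getsize(comp)

print(raw_size, comp_size)
```

Real weight matrices sit between these extremes: the more structure (zeros, repeated values, low precision) the data has, the more a generic compressor can do with it.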
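One plausible explanation for the 20 MB → 40 MB jump, assuming the pruning was done with torch.nn.utils.prune (the original post does not say which tool was used), is that pruning re-parametrizes each pruned tensor into weight_orig plus weight_mask, and both end up in the saved state dict until prune.remove makes the pruning permanent. A sketch:

```python
import io

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def saved_size(module):
    """Bytes the module's state dict occupies when written by torch.save."""
    buf = io.BytesIO()
    torch.save(module.state_dict(), buf)
    return buf.getbuffer().nbytes

# Hypothetical layer for illustration only.
m = nn.Linear(512, 512)
base = saved_size(m)

# After pruning, the state dict holds weight_orig AND weight_mask,
# so the serialized size roughly doubles.
prune.l1_unstructured(m, name="weight", amount=0.5)
pruned = saved_size(m)

# prune.remove makes the pruning permanent and drops the extra tensors,
# bringing the size back down to roughly the original.
prune.remove(m, "weight")
final = saved_size(m)

print(base, pruned, final)
```

Note that even after prune.remove the zeros are stored as ordinary float32 values, so the file does not shrink below the dense size unless you additionally use sparse storage or compression.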