List of commits:
Subject Hash Author Date (UTC)
fix b2235d1abdb20c3e1bfcc0d42870dad4b706babc Thai Thien 2020-04-23 16:13:17
debug 6cda7a90f14c5768be5bfbb9d87a985cae845f4c Thai Thien 2020-04-23 11:27:53
best score checkpoint , timer bb26cec915aa04d68a0dd00911c273542f9b34b5 Thai Thien 2020-04-23 11:16:24
m4 t2 a02b9610e868f4ba5e64496dc0c861a269f4cb9f Thai Thien 2020-04-17 16:01:39
fix url 358f164d558dab393f65c0829d8d9c37b1437ff3 Thai Thien 2020-04-16 14:32:49
increase epoch 03be68a9e02df1ffa245394ea3096990e8f9d44b Thai Thien 2020-04-16 14:30:15
add load model 044a398d62add2e854b79b0b3c48c961a4a20bb0 Thai Thien 2020-04-16 14:27:43
M4 c960a8e3ddbfb7fc57f3f843fa4184c063cf8cdb Thai Thien 2020-04-16 14:22:37
typo again 3dbe3ce4634b8d4ca30b012851c5b9690b1d88d7 Thai Thien 2020-04-13 15:49:23
typo 9b9a84ed5bfffc6e8979fe8b9aa2d6411bfd70c2 Thai Thien 2020-04-13 15:47:51
typo 69cdd6f3037ef0357783ad4c3f8cdf8de2258c3b Thai Thien 2020-04-13 15:38:14
small tall on split branch 36c80eee740df7449c112f4dd4925e0ffbc7ac5a Thai Thien 2020-04-13 15:23:33
fix load model 68c03563a5aa3acda165eae104ff4d2df83201b2 Thai Thien 2020-04-09 17:30:14
get lr and weight_decay b43666da2cb8bb5710f30d0f3bfd2c3b1e7b6473 Thai Thien 2020-04-09 17:26:13
1201 epoch (because cur epoch continue when load) ee303fe3945fcc6cc3c26601039de77a58fa3d60 Thai Thien 2020-04-09 17:23:18
shb load model 062126f959c021577dbf08224aeb442ca308587c Thai Thien 2020-04-09 17:20:27
4 2f49bfa380e997c177af1de34c8d6882ed7099e9 Thai Thien 2020-04-09 17:06:28
11 331644c623b1bf5a34c4db432e1526f6bc34a398 Thai Thien 2020-04-09 17:04:49
batchsize 24 f6aeba845ee1915cb0aeb8fce298e56a6ba40b3a Thai Thien 2020-04-09 16:46:25
t10 91d9e83c80a6a533535fd91278b9175c839c9715 Thai Thien 2020-04-09 16:43:38
Commit b2235d1abdb20c3e1bfcc0d42870dad4b706babc - fix
Author: Thai Thien
Author date (UTC): 2020-04-23 16:13
Committer name: Thai Thien
Committer date (UTC): 2020-04-23 16:13
Parent(s): 6cda7a90f14c5768be5bfbb9d87a985cae845f4c
Signing key:
Tree: 171caa1e281c205d7e3360db3d94c9b81499ade1
File Lines added Lines deleted
experiment_meow_main.py 16 9
local_train_script/M4_t2_shb.sh 2 2
logs/local_M4_t2_shb.log 94 0
logs/local_M4_t2_shb_2.log 59 0
logs/local_M4_t2_shb_3.log 233 0
saved_model_best/keep 1 0
saved_model_best/local_M4_t2_shb_2/local_M4_t2_shb_2_checkpoint_valid_mae=57.744759390625774.pth 0 0
saved_model_best/local_M4_t2_shb_2/local_M4_t2_shb_2_checkpoint_valid_mae=59.44025740442397.pth 0 0
saved_model_best/local_M4_t2_shb_2/local_M4_t2_shb_2_checkpoint_valid_mae=62.796104585068136.pth 0 0
saved_model_best/local_M4_t2_shb_2/local_M4_t2_shb_2_checkpoint_valid_mae=69.45232154749617.pth 0 0
saved_model_best/local_M4_t2_shb_3/local_M4_t2_shb_3_checkpoint_valid_mae=132.52883276154722.pth 0 0
saved_model_best/local_M4_t2_shb_3/local_M4_t2_shb_3_checkpoint_valid_mae=139.45122398907625.pth 0 0
saved_model_best/local_M4_t2_shb_3/local_M4_t2_shb_3_checkpoint_valid_mae=169.65900459772425.pth 0 0
saved_model_best/local_M4_t2_shb_3/local_M4_t2_shb_3_checkpoint_valid_mae=191.4196609183203.pth 0 0
saved_model_best/local_M4_t2_shb_3/local_M4_t2_shb_3_checkpoint_valid_mae=200.82528000843675.pth 0 0
train_script/meow_one/M4_t2_sha.sh 1 1
train_script/meow_one/M4_t2_sha_c.sh 5 4
train_script/meow_one/M4_t2_sha_shb.sh 5 4
train_script/meow_one/M4_t3_shb.sh 2 2
File experiment_meow_main.py changed (mode: 100644) (index 0bad71e..bed194c)
... ... if __name__ == "__main__":
93 93 weight_decay=args.decay) weight_decay=args.decay)
94 94
95 95 trainer = create_supervised_trainer(model, optimizer, loss_fn, device=device) trainer = create_supervised_trainer(model, optimizer, loss_fn, device=device)
96 evaluator = create_supervised_evaluator(model,
96 evaluator_train = create_supervised_evaluator(model,
97 metrics={
98 'mae': CrowdCountingMeanAbsoluteError(),
99 'mse': CrowdCountingMeanSquaredError(),
100 'loss': Loss(loss_fn)
101 }, device=device)
102
103 evaluator_validate = create_supervised_evaluator(model,
97 104 metrics={ metrics={
98 105 'mae': CrowdCountingMeanAbsoluteError(), 'mae': CrowdCountingMeanAbsoluteError(),
99 106 'mse': CrowdCountingMeanSquaredError(), 'mse': CrowdCountingMeanSquaredError(),
 
... ... if __name__ == "__main__":
105 112
106 113
107 114 # timer # timer
108 train_timer = Timer() # time to train whole epoch
115 train_timer = Timer(average=True) # time to train whole epoch
109 116 batch_timer = Timer(average=True) # every batch batch_timer = Timer(average=True) # every batch
110 evaluate_timer = Timer()
117 evaluate_timer = Timer(average=True)
111 118
112 119 batch_timer.attach(trainer, batch_timer.attach(trainer,
113 120 start =Events.EPOCH_STARTED, start =Events.EPOCH_STARTED,
 
... ... if __name__ == "__main__":
143 150
144 151 @trainer.on(Events.EPOCH_COMPLETED) @trainer.on(Events.EPOCH_COMPLETED)
145 152 def log_training_results(trainer): def log_training_results(trainer):
146 evaluator.run(train_loader)
147 metrics = evaluator.state.metrics
153 evaluator_train.run(train_loader)
154 metrics = evaluator_train.state.metrics
148 155 timestamp = get_readable_time() timestamp = get_readable_time()
149 156 print(timestamp + " Training set Results - Epoch: {} Avg mae: {:.2f} Avg mse: {:.2f} Avg loss: {:.2f}" print(timestamp + " Training set Results - Epoch: {} Avg mae: {:.2f} Avg mse: {:.2f} Avg loss: {:.2f}"
150 157 .format(trainer.state.epoch, metrics['mae'], metrics['mse'], metrics['loss'])) .format(trainer.state.epoch, metrics['mae'], metrics['mse'], metrics['loss']))
 
... ... if __name__ == "__main__":
163 170 @trainer.on(Events.EPOCH_COMPLETED) @trainer.on(Events.EPOCH_COMPLETED)
164 171 def log_validation_results(trainer): def log_validation_results(trainer):
165 172 evaluate_timer.resume() evaluate_timer.resume()
166 evaluator.run(test_loader)
173 evaluator_validate.run(test_loader)
167 174 evaluate_timer.pause() evaluate_timer.pause()
168 175 evaluate_timer.step() evaluate_timer.step()
169 176
170 metrics = evaluator.state.metrics
177 metrics = evaluator_validate.state.metrics
171 178 timestamp = get_readable_time() timestamp = get_readable_time()
172 179 print(timestamp + " Validation set Results - Epoch: {} Avg mae: {:.2f} Avg mse: {:.2f} Avg loss: {:.2f}" print(timestamp + " Validation set Results - Epoch: {} Avg mae: {:.2f} Avg mse: {:.2f} Avg loss: {:.2f}"
173 180 .format(trainer.state.epoch, metrics['mae'], metrics['mse'], metrics['loss'])) .format(trainer.state.epoch, metrics['mae'], metrics['mse'], metrics['loss']))
 
... ... if __name__ == "__main__":
180 187 print("evaluate_timer ", evaluate_timer.value()) print("evaluate_timer ", evaluate_timer.value())
181 188
182 189 def checkpoint_valid_mae_score_function(engine): def checkpoint_valid_mae_score_function(engine):
183 score = engine.state.metrics['valid_mae']
190 score = engine.state.metrics['mae']
184 191 return score return score
185 192
186 193
 
... ... if __name__ == "__main__":
195 202 n_saved=5) n_saved=5)
196 203
197 204 trainer.add_event_handler(Events.EPOCH_COMPLETED(every=5), save_handler) trainer.add_event_handler(Events.EPOCH_COMPLETED(every=5), save_handler)
198 trainer.add_event_handler(Events.EPOCH_COMPLETED(every=1), save_handler_best)
205 evaluator_validate.add_event_handler(Events.EPOCH_COMPLETED(every=1), save_handler_best)
199 206
200 207
201 208 trainer.run(train_loader, max_epochs=args.epochs) trainer.run(train_loader, max_epochs=args.epochs)
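The diff above is the actual fix for the crash recorded in logs/local_M4_t2_shb.log: the checkpoint score function read `engine.state.metrics['valid_mae']`, but an ignite evaluator stores its metrics under the keys given at construction time (`'mae'`, `'mse'`, `'loss'`), so the lookup raised `KeyError: 'valid_mae'`. A minimal sketch of the before/after lookup, using hypothetical stand-in `_Engine`/`_State` classes so the snippet is self-contained (the real code uses pytorch-ignite engines):

```python
# Stand-in objects mimicking the shape of an ignite evaluator's state.
# These are hypothetical placeholders, not the project's real classes.
class _State:
    def __init__(self, metrics):
        self.metrics = metrics

class _Engine:
    def __init__(self, metrics):
        self.state = _State(metrics)

# Metrics dict as the evaluator actually registers it (keys from the diff).
evaluator_validate = _Engine({'mae': 58.16, 'mse': 77.97, 'loss': 40.49})

def checkpoint_valid_mae_score_function(engine):
    # Fixed version from the diff: use the key the evaluator registered.
    return engine.state.metrics['mae']

# The pre-fix lookup fails exactly as in the traceback in the log:
try:
    evaluator_validate.state.metrics['valid_mae']
except KeyError as e:
    print("old lookup fails with KeyError:", e)

print(checkpoint_valid_mae_score_function(evaluator_validate))  # 58.16
```

The companion change of attaching `save_handler_best` to `evaluator_validate` instead of `trainer` follows the same logic: the score function must run on the engine whose `state.metrics` actually contains the validation MAE.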
File local_train_script/M4_t2_shb.sh changed (mode: 100644) (index d2853c8..05c6148)
1 task="local_M4_t2_shb"
1 task="local_M4_t2_shb_3"
2 2
3 3 nohup python experiment_meow_main.py \ nohup python experiment_meow_main.py \
4 4 --task_id $task \ --task_id $task \
 
... ... nohup python experiment_meow_main.py \
7 7 --input /data/ShanghaiTech/part_B \ --input /data/ShanghaiTech/part_B \
8 8 --lr 1e-4 \ --lr 1e-4 \
9 9 --decay 1e-4 \ --decay 1e-4 \
10 --batch_size 5 \
10 --batch_size 6 \
11 11 --datasetname shanghaitech_rnd \ --datasetname shanghaitech_rnd \
12 12 --epochs 301 > logs/$task.log & --epochs 301 > logs/$task.log &
13 13
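The batch-size bump from 5 to 6 in this script is consistent with the train-loader lengths recorded in the logs (320 batches at batch_size=5 in local_M4_t2_shb.log, 267 at batch_size=6 in local_M4_t2_shb_3.log), assuming roughly 1600 training samples and a loader that keeps the final partial batch. A quick arithmetic check:

```python
import math

# Inferred sample count: 320 batches of size 5 (an assumption based on the
# "len train_loader 320" line in the batch_size=5 log).
n_samples = 320 * 5

# With batch_size=6 and the last partial batch kept (drop_last=False):
batches_at_6 = math.ceil(n_samples / 6)
print(batches_at_6)  # 267, matching "len train_loader 267" in the _3 log
```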
File logs/local_M4_t2_shb.log added (mode: 100644) (index 0000000..8eeb12a)
1 COMET INFO: old comet version (3.1.2) detected. current: 3.1.6 please update your comet lib with command: `pip install --no-cache-dir --upgrade comet_ml`
2 COMET INFO: Experiment is live on comet.ml https://www.comet.ml/ttpro1995/crowd-counting-debug/296c15f703d944abbc899509217a2948
3
4 cuda
5 Namespace(batch_size=5, datasetname='shanghaitech_rnd', decay=0.0001, epochs=301, input='/data/ShanghaiTech/part_B', load_model='', lr=0.0001, model='M4', momentum=0.9, note='M4 shanghaitech_rnd', task_id='local_M4_t2_shb', test=False)
6 cannot detect dataset_name
7 current dataset_name is shanghaitech_rnd
8 in ListDataset dataset_name is |shanghaitech_rnd|
9 in ListDataset dataset_name is |shanghaitech_rnd|
10 len train_loader 320
11 M4(
12 (front_cnn_1): Conv2d(3, 20, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
13 (front_cnn_2): Conv2d(20, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
14 (front_cnn_3): Conv2d(16, 14, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
15 (front_cnn_4): Conv2d(14, 10, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
16 (max_pooling): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
17 (c0): Conv2d(40, 60, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
18 (c1): Conv2d(60, 60, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
19 (c2): Conv2d(60, 60, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
20 (c3): Conv2d(60, 30, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
21 (c4): Conv2d(30, 15, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
22 (c5): Conv2d(15, 10, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
23 (output): Conv2d(10, 1, kernel_size=(1, 1), stride=(1, 1))
24 )
25 Namespace(batch_size=5, datasetname='shanghaitech_rnd', decay=0.0001, epochs=301, input='/data/ShanghaiTech/part_B', load_model='', lr=0.0001, model='M4', momentum=0.9, note='M4 shanghaitech_rnd', task_id='local_M4_t2_shb', test=False)
26 do not load, keep training
27 2020-04-23 18:29 Epoch[1] Loss: 11.75
28 2020-04-23 18:31 Epoch[1] Loss: 1.50
29 2020-04-23 18:33 Epoch[1] Loss: 186.33
30 2020-04-23 18:34 Training set Results - Epoch: 1 Avg mae: 10.82 Avg mse: 28.86 Avg loss: 43.24
31 batch_timer 1.1728258961000022
32 train_timer 375.97995851999985
33 2020-04-23 18:35 Validation set Results - Epoch: 1 Avg mae: 58.16 Avg mse: 77.97 Avg loss: 40.49
34 evaluate_timer 474.39532556299946
35 Engine run is terminating due to exception: 'valid_mae'.
36 Traceback (most recent call last):
37 File "experiment_meow_main.py", line 201, in <module>
38 trainer.run(train_loader, max_epochs=args.epochs)
39 File "/home/tt/miniconda3/envs/pytorch_gpu/lib/python3.7/site-packages/ignite/engine/engine.py", line 850, in run
40 return self._internal_run()
41 File "/home/tt/miniconda3/envs/pytorch_gpu/lib/python3.7/site-packages/ignite/engine/engine.py", line 952, in _internal_run
42 self._handle_exception(e)
43 File "/home/tt/miniconda3/envs/pytorch_gpu/lib/python3.7/site-packages/ignite/engine/engine.py", line 716, in _handle_exception
44 raise e
45 File "/home/tt/miniconda3/envs/pytorch_gpu/lib/python3.7/site-packages/ignite/engine/engine.py", line 942, in _internal_run
46 self._fire_event(Events.EPOCH_COMPLETED)
47 File "/home/tt/miniconda3/envs/pytorch_gpu/lib/python3.7/site-packages/ignite/engine/engine.py", line 607, in _fire_event
48 func(self, *(event_args + args), **kwargs)
49 File "/home/tt/miniconda3/envs/pytorch_gpu/lib/python3.7/site-packages/ignite/handlers/checkpoint.py", line 171, in __call__
50 priority = self._score_function(engine)
51 File "experiment_meow_main.py", line 183, in checkpoint_valid_mae_score_function
52 score = engine.state.metrics['valid_mae']
53 KeyError: 'valid_mae'
54 COMET INFO: ----------------------------
55 COMET INFO: Comet.ml Experiment Summary:
56 COMET INFO: Data:
57 COMET INFO: url: https://www.comet.ml/ttpro1995/crowd-counting-debug/296c15f703d944abbc899509217a2948
58 COMET INFO: Metrics [count] (min, max):
59 COMET INFO: batch_timer : (1.1728258961000022, 1.1728258961000022)
60 COMET INFO: epoch : (1.0, 1.0)
61 COMET INFO: evaluate_timer : (474.39532556299946, 474.39532556299946)
62 COMET INFO: loss [32] : (1.0665087699890137, 585.623291015625)
63 COMET INFO: lr : (0.0001, 0.0001)
64 COMET INFO: sys.cpu.percent.01 [8] : (6.0, 59.9)
65 COMET INFO: sys.cpu.percent.02 [8] : (4.7, 51.6)
66 COMET INFO: sys.cpu.percent.03 [8] : (2.5, 39.7)
67 COMET INFO: sys.cpu.percent.04 [8] : (5.0, 74.2)
68 COMET INFO: sys.cpu.percent.05 [8] : (2.3, 27.1)
69 COMET INFO: sys.cpu.percent.06 [8] : (4.1, 27.6)
70 COMET INFO: sys.cpu.percent.avg [8] : (19.55, 27.066666666666666)
71 COMET INFO: sys.gpu.0.free_memory [8] : (1940848640.0, 3300524032.0)
72 COMET INFO: sys.gpu.0.gpu_utilization [8]: (4.0, 100.0)
73 COMET INFO: sys.gpu.0.total_memory : (4234936320.0, 4234936320.0)
74 COMET INFO: sys.gpu.0.used_memory [8] : (934412288.0, 2294087680.0)
75 COMET INFO: sys.load.avg [8] : (1.07, 1.63)
76 COMET INFO: sys.ram.total [8] : (33607774208.0, 33607774208.0)
77 COMET INFO: sys.ram.used [8] : (6373793792.0, 7998341120.0)
78 COMET INFO: train_loss : (43.24176017493009, 43.24176017493009)
79 COMET INFO: train_mae : (10.818375327587127, 10.818375327587127)
80 COMET INFO: train_mse : (28.85955970076625, 28.85955970076625)
81 COMET INFO: train_timer : (375.97995851999985, 375.97995851999985)
82 COMET INFO: valid_loss : (40.486990099277676, 40.486990099277676)
83 COMET INFO: valid_mae : (58.15887939477269, 58.15887939477269)
84 COMET INFO: valid_mse : (77.97296361611996, 77.97296361611996)
85 COMET INFO: Other [count]:
86 COMET INFO: Name : local_M4_t2_shb
87 COMET INFO: model : M4
88 COMET INFO: model_note: We replace 5x5 7x7 9x9 with 3x3, no batchnorm yet, change tail to dilated max 60 with dilated 2
89 COMET INFO: n_param : 115002
90 COMET INFO: Uploads:
91 COMET INFO: git-patch : 1
92 COMET INFO: text-sample: 1
93 COMET INFO: ----------------------------
94 COMET INFO: Uploading stats to Comet before program termination (may take several seconds)
File logs/local_M4_t2_shb_2.log added (mode: 100644) (index 0000000..c8d356a)
1 COMET INFO: old comet version (3.1.2) detected. current: 3.1.6 please update your comet lib with command: `pip install --no-cache-dir --upgrade comet_ml`
2 COMET INFO: Experiment is live on comet.ml https://www.comet.ml/ttpro1995/crowd-counting-debug/9812f8e91306454bb2c86c7c75833e2e
3
4 cuda
5 Namespace(batch_size=5, datasetname='shanghaitech_rnd', decay=0.0001, epochs=301, input='/data/ShanghaiTech/part_B', load_model='', lr=0.0001, model='M4', momentum=0.9, note='M4 shanghaitech_rnd', task_id='local_M4_t2_shb_2', test=False)
6 cannot detect dataset_name
7 current dataset_name is shanghaitech_rnd
8 in ListDataset dataset_name is |shanghaitech_rnd|
9 in ListDataset dataset_name is |shanghaitech_rnd|
10 len train_loader 320
11 M4(
12 (front_cnn_1): Conv2d(3, 20, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
13 (front_cnn_2): Conv2d(20, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
14 (front_cnn_3): Conv2d(16, 14, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
15 (front_cnn_4): Conv2d(14, 10, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
16 (max_pooling): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
17 (c0): Conv2d(40, 60, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
18 (c1): Conv2d(60, 60, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
19 (c2): Conv2d(60, 60, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
20 (c3): Conv2d(60, 30, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
21 (c4): Conv2d(30, 15, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
22 (c5): Conv2d(15, 10, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
23 (output): Conv2d(10, 1, kernel_size=(1, 1), stride=(1, 1))
24 )
25 Namespace(batch_size=5, datasetname='shanghaitech_rnd', decay=0.0001, epochs=301, input='/data/ShanghaiTech/part_B', load_model='', lr=0.0001, model='M4', momentum=0.9, note='M4 shanghaitech_rnd', task_id='local_M4_t2_shb_2', test=False)
26 do not load, keep training
27 2020-04-23 18:47 Epoch[1] Loss: 274.38
28 2020-04-23 18:50 Epoch[1] Loss: 10.19
29 2020-04-23 18:52 Epoch[1] Loss: 14.75
30 2020-04-23 18:53 Training set Results - Epoch: 1 Avg mae: 13.02 Avg mse: 41.18 Avg loss: 46.16
31 batch_timer 1.23345003553124
32 train_timer 395.3803381779999
33 2020-04-23 18:54 Validation set Results - Epoch: 1 Avg mae: 69.45 Avg mse: 104.63 Avg loss: 41.68
34 evaluate_timer 494.94025821000014
35 2020-04-23 18:55 Epoch[2] Loss: 156.00
36 2020-04-23 18:58 Epoch[2] Loss: 3.47
37 2020-04-23 19:00 Epoch[2] Loss: 35.67
38 2020-04-23 19:02 Training set Results - Epoch: 2 Avg mae: 9.75 Avg mse: 32.70 Avg loss: 44.95
39 batch_timer 1.2935090125187485
40 train_timer 414.5965156809998
41 2020-04-23 19:02 Validation set Results - Epoch: 2 Avg mae: 57.74 Avg mse: 91.63 Avg loss: 40.40
42 evaluate_timer 539.1798762900007
43 2020-04-23 19:04 Epoch[3] Loss: 18.57
44 2020-04-23 19:06 Epoch[3] Loss: 45.77
45 2020-04-23 19:08 Epoch[3] Loss: 14.50
46 2020-04-23 19:10 Training set Results - Epoch: 3 Avg mae: 11.34 Avg mse: 38.19 Avg loss: 45.53
47 batch_timer 1.3080307883156195
48 train_timer 419.2482117620002
49 2020-04-23 19:11 Validation set Results - Epoch: 3 Avg mae: 59.44 Avg mse: 95.48 Avg loss: 41.33
50 evaluate_timer 583.7960872350013
51 2020-04-23 19:12 Epoch[4] Loss: 12.30
52 2020-04-23 19:14 Epoch[4] Loss: 240.78
53 2020-04-23 19:16 Epoch[4] Loss: 147.18
54 2020-04-23 19:19 Training set Results - Epoch: 4 Avg mae: 12.01 Avg mse: 37.14 Avg loss: 43.37
55 batch_timer 1.3212819507000149
56 train_timer 423.4915877759995
57 2020-04-23 19:20 Validation set Results - Epoch: 4 Avg mae: 62.80 Avg mse: 94.34 Avg loss: 38.80
58 evaluate_timer 628.1580848920012
59 2020-04-23 19:20 Epoch[5] Loss: 11.43
File logs/local_M4_t2_shb_3.log added (mode: 100644) (index 0000000..8b6c2cc)
1 COMET INFO: old comet version (3.1.2) detected. current: 3.1.6 please update your comet lib with command: `pip install --no-cache-dir --upgrade comet_ml`
2 COMET INFO: Experiment is live on comet.ml https://www.comet.ml/ttpro1995/crowd-counting-debug/52632ec152104228b2343616446d410b
3
4 cuda
5 Namespace(batch_size=6, datasetname='shanghaitech_rnd', decay=0.0001, epochs=301, input='/data/ShanghaiTech/part_B', load_model='', lr=0.0001, model='M4', momentum=0.9, note='M4 shanghaitech_rnd', task_id='local_M4_t2_shb_3', test=False)
6 cannot detect dataset_name
7 current dataset_name is shanghaitech_rnd
8 in ListDataset dataset_name is |shanghaitech_rnd|
9 in ListDataset dataset_name is |shanghaitech_rnd|
10 len train_loader 267
11 M4(
12 (front_cnn_1): Conv2d(3, 20, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
13 (front_cnn_2): Conv2d(20, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
14 (front_cnn_3): Conv2d(16, 14, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
15 (front_cnn_4): Conv2d(14, 10, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
16 (max_pooling): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
17 (c0): Conv2d(40, 60, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
18 (c1): Conv2d(60, 60, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
19 (c2): Conv2d(60, 60, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
20 (c3): Conv2d(60, 30, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
21 (c4): Conv2d(30, 15, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
22 (c5): Conv2d(15, 10, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2))
23 (output): Conv2d(10, 1, kernel_size=(1, 1), stride=(1, 1))
24 )
25 Namespace(batch_size=6, datasetname='shanghaitech_rnd', decay=0.0001, epochs=301, input='/data/ShanghaiTech/part_B', load_model='', lr=0.0001, model='M4', momentum=0.9, note='M4 shanghaitech_rnd', task_id='local_M4_t2_shb_3', test=False)
26 do not load, keep training
27 2020-04-23 19:25 Epoch[1] Loss: 157.76
28 2020-04-23 19:28 Epoch[1] Loss: 40.31
29 2020-04-23 19:31 Training set Results - Epoch: 1 Avg mae: 25.26 Avg mse: 68.05 Avg loss: 63.05
30 batch_timer 1.6027500071872778
31 train_timer 428.5975191990001
32 2020-04-23 19:32 Validation set Results - Epoch: 1 Avg mae: 139.45 Avg mse: 156.09 Avg loss: 46.80
33 evaluate_timer 532.3665182590003
34 2020-04-23 19:32 Epoch[2] Loss: 21.56
35 2020-04-23 19:35 Epoch[2] Loss: 8.69
36 2020-04-23 19:38 Epoch[2] Loss: 5.47
37 2020-04-23 19:40 Training set Results - Epoch: 2 Avg mae: 9.33 Avg mse: 29.47 Avg loss: 54.34
38 batch_timer 1.6025197360037389
39 train_timer 428.50685849599995
40 2020-04-23 19:40 Validation set Results - Epoch: 2 Avg mae: 62.14 Avg mse: 91.67 Avg loss: 42.43
41 evaluate_timer 288.88640787399936
42 2020-04-23 19:42 Epoch[3] Loss: 27.98
43 2020-04-23 19:45 Epoch[3] Loss: 45.41
44 2020-04-23 19:47 Epoch[3] Loss: 59.12
45 2020-04-23 19:48 Training set Results - Epoch: 3 Avg mae: 23.31 Avg mse: 64.04 Avg loss: 54.42
46 batch_timer 1.5609535033856465
47 train_timer 417.40394999399905
48 2020-04-23 19:49 Validation set Results - Epoch: 3 Avg mae: 115.29 Avg mse: 145.45 Avg loss: 42.98
49 evaluate_timer 206.72797289633277
50 2020-04-23 19:52 Epoch[4] Loss: 38.42
51 2020-04-23 19:54 Epoch[4] Loss: 105.40
52 2020-04-23 19:57 Training set Results - Epoch: 4 Avg mae: 41.71 Avg mse: 106.21 Avg loss: 56.26
53 batch_timer 1.516032846382014
54 train_timer 405.41096944499986
55 2020-04-23 19:57 Validation set Results - Epoch: 4 Avg mae: 191.42 Avg mse: 210.89 Avg loss: 44.67
56 evaluate_timer 165.64150846399957
57 2020-04-23 19:58 Epoch[5] Loss: 2.71
58 2020-04-23 20:01 Epoch[5] Loss: 197.20
59 2020-04-23 20:03 Epoch[5] Loss: 5.83
60 2020-04-23 20:05 Training set Results - Epoch: 5 Avg mae: 35.60 Avg mse: 91.66 Avg loss: 54.85
61 batch_timer 1.514120053531807
62 train_timer 404.8973960450003
63 2020-04-23 20:06 Validation set Results - Epoch: 5 Avg mae: 169.66 Avg mse: 189.90 Avg loss: 43.60
64 evaluate_timer 140.9829996239996
65 2020-04-23 20:07 Epoch[6] Loss: 10.17
66 2020-04-23 20:10 Epoch[6] Loss: 35.72
67 2020-04-23 20:12 Epoch[6] Loss: 10.66
68 2020-04-23 20:13 Training set Results - Epoch: 6 Avg mae: 8.65 Avg mse: 27.33 Avg loss: 51.90
69 batch_timer 1.5127704124007095
70 train_timer 404.5389165050001
71 2020-04-23 20:14 Validation set Results - Epoch: 6 Avg mae: 58.83 Avg mse: 89.87 Avg loss: 40.90
72 evaluate_timer 124.53610803233293
73 2020-04-23 20:17 Epoch[7] Loss: 3.90
74 2020-04-23 20:19 Epoch[7] Loss: 22.24
75 2020-04-23 20:22 Training set Results - Epoch: 7 Avg mae: 10.08 Avg mse: 29.83 Avg loss: 51.61
76 batch_timer 1.510363992580539
77 train_timer 403.89823696299936
78 2020-04-23 20:22 Validation set Results - Epoch: 7 Avg mae: 57.48 Avg mse: 81.85 Avg loss: 40.66
79 evaluate_timer 112.79780120142836
80 2020-04-23 20:23 Epoch[8] Loss: 5.81
81 2020-04-23 20:26 Epoch[8] Loss: 16.46
82 2020-04-23 20:28 Epoch[8] Loss: 23.18
83 2020-04-23 20:30 Training set Results - Epoch: 8 Avg mae: 12.32 Avg mse: 38.93 Avg loss: 51.69
84 batch_timer 1.5113861737753065
85 train_timer 404.16859211799965
86 2020-04-23 20:31 Validation set Results - Epoch: 8 Avg mae: 76.20 Avg mse: 108.27 Avg loss: 40.73
87 evaluate_timer 103.99689667949986
88 2020-04-23 20:32 Epoch[9] Loss: 17.89
89 2020-04-23 20:35 Epoch[9] Loss: 59.44
90 2020-04-23 20:38 Epoch[9] Loss: 124.24
91 2020-04-23 20:38 Training set Results - Epoch: 9 Avg mae: 12.22 Avg mse: 38.42 Avg loss: 51.38
92 batch_timer 1.5097762611049539
93 train_timer 403.74319395400016
94 2020-04-23 20:39 Validation set Results - Epoch: 9 Avg mae: 77.93 Avg mse: 108.94 Avg loss: 40.39
95 evaluate_timer 97.13890678133319
96 2020-04-23 20:42 Epoch[10] Loss: 78.78
97 2020-04-23 20:44 Epoch[10] Loss: 5.98
98 2020-04-23 20:47 Training set Results - Epoch: 10 Avg mae: 11.58 Avg mse: 32.89 Avg loss: 50.37
99 batch_timer 1.509434576812702
100 train_timer 403.6505684540007
101 2020-04-23 20:48 Validation set Results - Epoch: 10 Avg mae: 58.96 Avg mse: 79.14 Avg loss: 39.58
102 evaluate_timer 91.6523462709999
103 2020-04-23 20:48 Epoch[11] Loss: 2.38
104 2020-04-23 20:51 Epoch[11] Loss: 42.88
105 2020-04-23 20:53 Epoch[11] Loss: 62.60
106 2020-04-23 20:55 Training set Results - Epoch: 11 Avg mae: 19.52 Avg mse: 51.88 Avg loss: 50.17
107 batch_timer 1.5108423394756736
108 train_timer 404.0260751580008
109 2020-04-23 20:56 Validation set Results - Epoch: 11 Avg mae: 80.62 Avg mse: 94.73 Avg loss: 39.34
110 evaluate_timer 87.16732233890906
111 2020-04-23 20:57 Epoch[12] Loss: 2.89
112 2020-04-23 21:00 Epoch[12] Loss: 21.69
113 2020-04-23 21:03 Epoch[12] Loss: 19.22
114 2020-04-23 21:04 Training set Results - Epoch: 12 Avg mae: 26.26 Avg mse: 69.29 Avg loss: 50.30
115 batch_timer 1.5111950944120076
116 train_timer 404.1229257729992
117 2020-04-23 21:04 Validation set Results - Epoch: 12 Avg mae: 120.18 Avg mse: 142.85 Avg loss: 39.93
118 evaluate_timer 83.41845450158333
119 2020-04-23 21:07 Epoch[13] Loss: 43.01
120 2020-04-23 21:09 Epoch[13] Loss: 89.63
121 2020-04-23 21:12 Training set Results - Epoch: 13 Avg mae: 11.75 Avg mse: 32.99 Avg loss: 48.79
122 batch_timer 1.507054667876416
123 train_timer 403.01198405300056
124 2020-04-23 21:13 Validation set Results - Epoch: 13 Avg mae: 56.59 Avg mse: 73.64 Avg loss: 37.76
125 evaluate_timer 80.24973219507683
126 2020-04-23 21:13 Epoch[14] Loss: 4.90
127 2020-04-23 21:16 Epoch[14] Loss: 1.77
128 2020-04-23 21:18 Epoch[14] Loss: 2.80
129 2020-04-23 21:20 Training set Results - Epoch: 14 Avg mae: 16.21 Avg mse: 44.64 Avg loss: 49.65
130 batch_timer 1.5054719848314344
131 train_timer 402.59181541700127
132 2020-04-23 21:21 Validation set Results - Epoch: 14 Avg mae: 66.64 Avg mse: 80.88 Avg loss: 37.96
133 evaluate_timer 77.53475438057136
134 2020-04-23 21:22 Epoch[15] Loss: 5.92
135 2020-04-23 21:25 Epoch[15] Loss: 123.88
136 2020-04-23 21:28 Epoch[15] Loss: 39.55
137 2020-04-23 21:29 Training set Results - Epoch: 15 Avg mae: 14.73 Avg mse: 40.01 Avg loss: 47.69
138 batch_timer 1.5052845093596534
139 train_timer 402.54458960900047
140 2020-04-23 21:29 Validation set Results - Epoch: 15 Avg mae: 67.93 Avg mse: 82.52 Avg loss: 37.44
141 evaluate_timer 75.17388044700002
142 2020-04-23 21:32 Epoch[16] Loss: 357.07
143 2020-04-23 21:34 Epoch[16] Loss: 3.89
144 2020-04-23 21:37 Training set Results - Epoch: 16 Avg mae: 17.02 Avg mse: 45.19 Avg loss: 46.72
145 batch_timer 1.5042562783519622
146 train_timer 402.26867924
147 2020-04-23 21:38 Validation set Results - Epoch: 16 Avg mae: 71.12 Avg mse: 83.18 Avg loss: 36.37
148 evaluate_timer 73.10339040662501
149 2020-04-23 21:38 Epoch[17] Loss: 18.66
150 2020-04-23 21:41 Epoch[17] Loss: 10.53
151 2020-04-23 21:43 Epoch[17] Loss: 8.12
152 2020-04-23 21:45 Training set Results - Epoch: 17 Avg mae: 14.81 Avg mse: 43.63 Avg loss: 47.25
153 batch_timer 1.5014096361497882
154 train_timer 401.5089577210001
155 2020-04-23 21:46 Validation set Results - Epoch: 17 Avg mae: 72.58 Avg mse: 100.21 Avg loss: 37.29
156 evaluate_timer 71.27870769105876
157 2020-04-23 21:47 Epoch[18] Loss: 60.89
158 2020-04-23 21:50 Epoch[18] Loss: 15.95
159 2020-04-23 21:52 Epoch[18] Loss: 11.37
160 2020-04-23 21:53 Training set Results - Epoch: 18 Avg mae: 11.68 Avg mse: 32.95 Avg loss: 50.38
161 batch_timer 1.5010375493521484
162 train_timer 401.40834010800063
163 2020-04-23 21:54 Validation set Results - Epoch: 18 Avg mae: 61.62 Avg mse: 79.94 Avg loss: 40.23
164 evaluate_timer 69.64817162199977
165 2020-04-23 21:57 Epoch[19] Loss: 11.10
166 2020-04-23 21:59 Epoch[19] Loss: 5.64
167 2020-04-23 22:02 Training set Results - Epoch: 19 Avg mae: 9.71 Avg mse: 27.28 Avg loss: 44.59
168 batch_timer 1.5014901278951542
169 train_timer 401.5331730129983
170 2020-04-23 22:02 Validation set Results - Epoch: 19 Avg mae: 47.83 Avg mse: 62.23 Avg loss: 34.58
171 evaluate_timer 68.19908461989445
172 2020-04-23 22:03 Epoch[20] Loss: 4.24
173 2020-04-23 22:06 Epoch[20] Loss: 4.98
174 2020-04-23 22:08 Epoch[20] Loss: 12.58
175 2020-04-23 22:10 Training set Results - Epoch: 20 Avg mae: 7.17 Avg mse: 23.41 Avg loss: 45.63
176 batch_timer 1.5019663932621727
177 train_timer 401.66113662299904
178 2020-04-23 22:11 Validation set Results - Epoch: 20 Avg mae: 44.14 Avg mse: 66.38 Avg loss: 36.00
179 evaluate_timer 66.89079298074962
180 2020-04-23 22:12 Epoch[21] Loss: 46.10
181 2020-04-23 22:15 Epoch[21] Loss: 1.63
182 2020-04-23 22:17 Epoch[21] Loss: 69.28
183 2020-04-23 22:18 Training set Results - Epoch: 21 Avg mae: 20.70 Avg mse: 53.60 Avg loss: 44.60
184 batch_timer 1.5005766239251415
185 train_timer 401.2885265520017
186 2020-04-23 22:19 Validation set Results - Epoch: 21 Avg mae: 87.68 Avg mse: 95.28 Avg loss: 34.93
187 evaluate_timer 65.704032795571
188 2020-04-23 22:21 Epoch[22] Loss: 14.10
189 2020-04-23 22:24 Epoch[22] Loss: 1.55
190 2020-04-23 22:27 Training set Results - Epoch: 22 Avg mae: 8.69 Avg mse: 24.62 Avg loss: 43.32
191 batch_timer 1.4999934545505196
192 train_timer 401.1275074130026
193 2020-04-23 22:27 Validation set Results - Epoch: 22 Avg mae: 42.49 Avg mse: 54.77 Avg loss: 33.46
194 evaluate_timer 64.63013197804507
195 2020-04-23 22:28 Epoch[23] Loss: 151.10
196 2020-04-23 22:31 Epoch[23] Loss: 232.67
197 2020-04-23 22:33 Epoch[23] Loss: 60.74
198 2020-04-23 22:35 Training set Results - Epoch: 23 Avg mae: 10.74 Avg mse: 30.91 Avg loss: 43.83
199 batch_timer 1.4999699035091607
200 train_timer 401.1245063899987
201 2020-04-23 22:36 Validation set Results - Epoch: 23 Avg mae: 54.61 Avg mse: 70.14 Avg loss: 34.02
202 evaluate_timer 63.654571201608356
203 2020-04-23 22:37 Epoch[24] Loss: 235.90
204 2020-04-23 22:40 Epoch[24] Loss: 5.35
205 2020-04-23 22:42 Epoch[24] Loss: 21.96
206 2020-04-23 22:44 Training set Results - Epoch: 24 Avg mae: 51.36 Avg mse: 126.97 Avg loss: 47.62
207 batch_timer 1.5584932232882425
208 train_timer 416.762719467999
209 2020-04-23 22:44 Validation set Results - Epoch: 24 Avg mae: 200.83 Avg mse: 206.89 Avg loss: 36.55
210 evaluate_timer 62.85229290791634
211 2020-04-23 22:47 Epoch[25] Loss: 2.61
212 2020-04-23 22:49 Epoch[25] Loss: 12.53
213 2020-04-23 22:52 Training set Results - Epoch: 25 Avg mae: 7.23 Avg mse: 23.68 Avg loss: 42.18
214 batch_timer 1.59536279685012
215 train_timer 426.6178994440015
216 2020-04-23 22:53 Validation set Results - Epoch: 25 Avg mae: 41.98 Avg mse: 61.75 Avg loss: 32.98
217 evaluate_timer 62.14902110495972
218 2020-04-23 22:54 Epoch[26] Loss: 8.89
219 2020-04-23 22:56 Epoch[26] Loss: 26.58
220 2020-04-23 22:59 Epoch[26] Loss: 233.90
221 2020-04-23 23:01 Training set Results - Epoch: 26 Avg mae: 33.57 Avg mse: 84.02 Avg loss: 43.95
222 batch_timer 1.5795833819401868
223 train_timer 422.40113607900275
224 2020-04-23 23:02 Validation set Results - Epoch: 26 Avg mae: 132.53 Avg mse: 138.88 Avg loss: 33.93
225 evaluate_timer 61.47791835126894
226 2020-04-23 23:03 Epoch[27] Loss: 4.53
227 2020-04-23 23:06 Epoch[27] Loss: 12.28
228 2020-04-23 23:09 Epoch[27] Loss: 1.33
229 2020-04-23 23:10 Training set Results - Epoch: 27 Avg mae: 11.91 Avg mse: 33.52 Avg loss: 42.49
230 batch_timer 1.5722735658576206
231 train_timer 420.44970723000006
232 2020-04-23 23:11 Validation set Results - Epoch: 27 Avg mae: 59.38 Avg mse: 73.66 Avg loss: 33.14
233 evaluate_timer 60.897926688888624
File saved_model_best/keep added (mode: 100644) (index 0000000..469894b)
1 meow,text here to keep folder
File saved_model_best/local_M4_t2_shb_2/local_M4_t2_shb_2_checkpoint_valid_mae=57.744759390625774.pth added (mode: 100644) (index 0000000..e933b2b)
File saved_model_best/local_M4_t2_shb_2/local_M4_t2_shb_2_checkpoint_valid_mae=59.44025740442397.pth added (mode: 100644) (index 0000000..16b198a)
File saved_model_best/local_M4_t2_shb_2/local_M4_t2_shb_2_checkpoint_valid_mae=62.796104585068136.pth added (mode: 100644) (index 0000000..d715e51)
File saved_model_best/local_M4_t2_shb_2/local_M4_t2_shb_2_checkpoint_valid_mae=69.45232154749617.pth added (mode: 100644) (index 0000000..bf44b5a)
File saved_model_best/local_M4_t2_shb_3/local_M4_t2_shb_3_checkpoint_valid_mae=132.52883276154722.pth added (mode: 100644) (index 0000000..5b63e3a)
File saved_model_best/local_M4_t2_shb_3/local_M4_t2_shb_3_checkpoint_valid_mae=139.45122398907625.pth added (mode: 100644) (index 0000000..967ca8a)
File saved_model_best/local_M4_t2_shb_3/local_M4_t2_shb_3_checkpoint_valid_mae=169.65900459772425.pth added (mode: 100644) (index 0000000..7efb4e7)
File saved_model_best/local_M4_t2_shb_3/local_M4_t2_shb_3_checkpoint_valid_mae=191.4196609183203.pth added (mode: 100644) (index 0000000..f75067d)
File saved_model_best/local_M4_t2_shb_3/local_M4_t2_shb_3_checkpoint_valid_mae=200.82528000843675.pth added (mode: 100644) (index 0000000..4b712eb)
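The files added under `saved_model_best/` embed the validation MAE directly in the checkpoint filename (e.g. `..._checkpoint_valid_mae=57.744759390625774.pth`), and only a handful of best checkpoints are kept per task. A hedged sketch of such a "keep the N best" helper — function and parameter names are hypothetical, and the file write stands in for `torch.save`:

```python
import os

def save_best_checkpoint(task_id, valid_mae, save_dir, state_bytes, keep=5):
    """Save a checkpoint named after its validation MAE, then prune the
    worst-scoring files so at most `keep` remain. Illustrative only;
    real code would call torch.save(model.state_dict(), path)."""
    os.makedirs(save_dir, exist_ok=True)
    path = os.path.join(save_dir, f"{task_id}_checkpoint_valid_mae={valid_mae}.pth")
    with open(path, "wb") as f:
        f.write(state_bytes)

    def mae_of(name):
        # recover the MAE encoded in the filename
        return float(name.rsplit("valid_mae=", 1)[1].removesuffix(".pth"))

    # sort checkpoints best (lowest MAE) first and delete the excess
    files = sorted(
        (n for n in os.listdir(save_dir) if "valid_mae=" in n),
        key=mae_of,
    )
    for worst in files[keep:]:
        os.remove(os.path.join(save_dir, worst))
    return path
```

Encoding the metric in the filename makes the best run visible at a glance in a directory listing, exactly as in the `local_M4_t2_shb_2` and `local_M4_t2_shb_3` folders above.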
File train_script/meow_one/M4_t2_sha.sh changed (mode: 100644) (index ad6bdd4..56fcf9a)
1 1 task="M4_t2_sha" task="M4_t2_sha"
2 2
3 CUDA_VISIBLE_DEVICES=3 HTTPS_PROXY="http://10.60.28.99:86" nohup python experiment_meow_main.py \
3 CUDA_VISIBLE_DEVICES=2 HTTPS_PROXY="http://10.60.28.99:86" nohup python experiment_meow_main.py \
4 4 --task_id $task \ --task_id $task \
5 5 --note "M4 shanghaitech_rnd" \ --note "M4 shanghaitech_rnd" \
6 6 --model "M4" \ --model "M4" \
File train_script/meow_one/M4_t2_sha_c.sh copied from file train_script/meow_one/big_tail/bigtail3_t1_sha.sh (similarity 58%) (mode: 100644) (index fb030f2..c62911d)
1 task="bigtail3_t1_sha"
1 task="M4_t2_sha_coun"
2 2
3 3 CUDA_VISIBLE_DEVICES=2 HTTPS_PROXY="http://10.60.28.99:86" nohup python experiment_meow_main.py \ CUDA_VISIBLE_DEVICES=2 HTTPS_PROXY="http://10.60.28.99:86" nohup python experiment_meow_main.py \
4 4 --task_id $task \ --task_id $task \
5 --note "bigtail3 shanghaitech_rnd" \
6 --model "BigTail3" \
5 --note "M4 20 percentage aug, continue train to 800 epochs" \
6 --model "M4" \
7 7 --input /data/rnd/thient/thient_data/ShanghaiTech/part_A \ --input /data/rnd/thient/thient_data/ShanghaiTech/part_A \
8 8 --lr 1e-4 \ --lr 1e-4 \
9 9 --decay 1e-4 \ --decay 1e-4 \
10 --load_model saved_model/M4_t2_sha/M4_t2_sha_checkpoint_360000.pth \
10 11 --datasetname shanghaitech_20p \ --datasetname shanghaitech_20p \
11 --epochs 602 > logs/$task.log &
12 --epochs 801 > logs/$task.log &
12 13
13 14 echo logs/$task.log # for convenience echo logs/$task.log # for convenience
File train_script/meow_one/M4_t2_sha_shb.sh copied from file train_script/meow_one/M4_t2_sha.sh (similarity 55%) (mode: 100644) (index ad6bdd4..769c8f6)
1 task="M4_t2_sha"
1 task="M4_t2_sha_shb"
2 2
3 3 CUDA_VISIBLE_DEVICES=3 HTTPS_PROXY="http://10.60.28.99:86" nohup python experiment_meow_main.py \ CUDA_VISIBLE_DEVICES=3 HTTPS_PROXY="http://10.60.28.99:86" nohup python experiment_meow_main.py \
4 4 --task_id $task \ --task_id $task \
5 --note "M4 shanghaitech_rnd" \
5 --note "M4 train sha 20p then shb random crop 1/4 " \
6 6 --model "M4" \ --model "M4" \
7 7 --input /data/rnd/thient/thient_data/ShanghaiTech/part_A \ --input /data/rnd/thient/thient_data/ShanghaiTech/part_A \
8 8 --lr 1e-4 \ --lr 1e-4 \
9 9 --decay 1e-4 \ --decay 1e-4 \
10 --datasetname shanghaitech_20p \
11 --epochs 301 > logs/$task.log &
10 --load_model saved_model/M4_t2_sha/M4_t2_sha_checkpoint_360000.pth \
11 --datasetname shanghaitech_rnd \
12 --epochs 1000 > logs/$task.log &
12 13
13 14 echo logs/$task.log # for convenience echo logs/$task.log # for convenience
File train_script/meow_one/M4_t3_shb.sh copied from file train_script/meow_one/M4_t2_shb.sh (similarity 83%) (mode: 100644) (index 6f9663d..824850f)
1 task="M4_t2_shb"
1 task="M4_t3_shb"
2 2
3 3 CUDA_VISIBLE_DEVICES=4 HTTPS_PROXY="http://10.60.28.99:86" nohup python experiment_meow_main.py \ CUDA_VISIBLE_DEVICES=4 HTTPS_PROXY="http://10.60.28.99:86" nohup python experiment_meow_main.py \
4 4 --task_id $task \ --task_id $task \
5 --note "M4 shanghaitech_rnd" \
5 --note "M4 return M4 t2 because it does not log" \
6 6 --model "M4" \ --model "M4" \
7 7 --input /data/rnd/thient/thient_data/ShanghaiTech/part_B \ --input /data/rnd/thient/thient_data/ShanghaiTech/part_B \
8 8 --lr 1e-4 \ --lr 1e-4 \
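All of the train scripts above invoke `experiment_meow_main.py` with the same flag set (`--task_id`, `--note`, `--model`, `--input`, `--lr`, `--decay`, `--load_model`, `--datasetname`, `--epochs`). A sketch of an `argparse` parser matching those invocations — this only mirrors the shell scripts; the real `experiment_meow_main.py` may define more options or different defaults:

```python
import argparse

def build_parser():
    """Parser for the flags the train scripts pass (assumed interface)."""
    p = argparse.ArgumentParser(description="crowd counting training")
    p.add_argument("--task_id", required=True, help="run name, used for logs/checkpoints")
    p.add_argument("--note", default="", help="free-text note for the run")
    p.add_argument("--model", default="M4", help="model name, e.g. M4, BigTail3")
    p.add_argument("--input", help="dataset root, e.g. .../ShanghaiTech/part_A")
    p.add_argument("--lr", type=float, default=1e-4, help="learning rate")
    p.add_argument("--decay", type=float, default=1e-4, help="weight decay")
    p.add_argument("--load_model", default="", help="checkpoint .pth to resume from")
    p.add_argument("--datasetname", default="shanghaitech_20p",
                   help="e.g. shanghaitech_20p, shanghaitech_rnd")
    p.add_argument("--epochs", type=int, default=301)
    return p
```

Note that when `--load_model` resumes a run, the epoch counter continues from the checkpoint, which is why the scripts bump `--epochs` (301 → 801 → 1000) rather than restarting the count.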