List of commits:
Subject Hash Author Date (UTC)
eval context aware network on ShanghaiTechB can run e8c454d2b6d287c830c1286c9a37884b3cfc615f Thai Thien 2020-02-02 04:09:14
import ShanghaiTechDataPath in data_util e81eb56315d44375ff5c0e747d61456601492f8f Thai Thien 2020-02-02 04:04:36
add model_context_aware_network.py 2a36025c001d85afc064c090f4d22987b328977b Thai Thien 2020-02-02 03:46:38
PACNN (TODO: test this) 44d5ae7ec57c760fb4f105dd3e3492148a0cc075 Thai Thien 2020-02-02 03:40:26
add data path 80134de767d0137a663f343e4606bafc57a1bc1f Thai Thien 2020-02-02 03:38:21
test if ShanghaiTech datapath is correct 97ee84944a4393ec3732879b24f614826f8e7798 Thai Thien 2020-02-01 03:57:31
refactor and test ShanghaiTech datapath 9542ebc00f257edc38690180b7a4353794be4019 Thai Thien 2020-02-01 03:53:49
fix the unzip flow b53c5989935335377eb6a88c942713d3eccc5df7 Thai Thien 2020-02-01 03:53:13
data_script run seem ok 67420c08fc1c10a66404d3698994865726a106cd Thai Thien 2020-02-01 03:33:18
add perspective 642d6fff8c9f31e510fda85a7fb631fb855d8a6d Thai Thien 2019-10-06 16:54:44
fix padding with p 86c2fa07822d956a34b3b37e14da485a4249f01b Thai Thien 2019-10-06 02:52:58
pacnn perspective loss fb673e38a5f24ae9004fe2b7b93c88991e0c2304 Thai Thien 2019-10-06 01:38:28
data_flow shanghaitech_pacnn_with_perspective seem working 91d350a06f358e03223966297d124daee94123d0 Thai Thien 2019-10-06 01:31:11
multiscale loss and final loss only mode c65dd0e74ad28503821e5c8651a3b47b4a0c7c64 Thai Thien 2019-10-05 15:58:19
wip : perspective map eac63f2671dc5b064753acc4f40bf0f9f216ad2a Thai Thien 2019-10-04 16:26:56
shell script f2106e700b6f6174d4dd276f25ec6f3d9ff239bb thient 2019-10-04 07:42:51
WIP 42c7c8e1d772fbbda61a4bdf9e329f74e1efb600 tthien 2019-10-03 17:52:47
add readme 580cf43d1edddd67b1f6a2c57fdd5cee3dba925c Thai Thien 2019-10-02 17:44:49
update script, debug ddb68b95389be1c1d398118677dd227a8bb2b70b Thai Thien 2019-10-02 15:52:31
add d (output density map) to loss function) a0c71bf4bf2ab7393d60b06a84db8dfbbfb1a6c2 tthien 2019-09-30 16:32:39
Commit e8c454d2b6d287c830c1286c9a37884b3cfc615f - eval context aware network on ShanghaiTechB can run
Author: Thai Thien
Author date (UTC): 2020-02-02 04:09
Committer name: Thai Thien
Committer date (UTC): 2020-02-02 04:09
Parent(s): 39b10e744904df9ab7dfe0ce77aa271cd37b3881
Signer:
Signing key:
Signing status: N
Tree: 8807269793f18b44254c1c7cae00747ea4fe05c5
File Lines added Lines deleted
eval_context_aware_network.py 81 0
models/context_aware_network.py 0 0
File eval_context_aware_network.py added (mode: 100644) (index 0000000..760e6bd)
import glob
import os

import h5py
import numpy as np
import PIL.Image as Image
import torch
from sklearn.metrics import mean_squared_error, mean_absolute_error
from torchvision import transforms

from models.context_aware_network import CANNet
from data_util import ShanghaiTechDataPath
from hard_code_variable import HardCodeVariable

_description = """
This file runs prediction.
Data path  = /home/tt/project/ShanghaiTechCAN/part_B/test_data/images
Model path = /home/tt/project/MODEL/Context-aware/part_B_pre.pth.tar
"""

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# The folder that contains all the test images.
hard_code = HardCodeVariable()
shanghaitech_data = ShanghaiTechDataPath(root=hard_code.SHANGHAITECH_PATH)
img_folder = shanghaitech_data.get_b().get_test().get_images()
print("image folder = " + str(img_folder))

img_paths = glob.glob(os.path.join(img_folder, '*.jpg'))
# img_paths = img_paths[:10]  # uncomment to smoke-test on 10 images

model = CANNet()
model = model.cuda()

checkpoint = torch.load('/home/tt/project/MODEL/Context-aware/part_B_pre.pth.tar')
model.load_state_dict(checkpoint['state_dict'])
model.eval()

pred = []
gt = []

with torch.no_grad():  # no gradients needed during evaluation
    for i, img_path in enumerate(img_paths):
        img = transform(Image.open(img_path).convert('RGB')).cuda()
        img = img.unsqueeze(0)
        h, w = img.shape[2:4]
        h_d = h // 2
        w_d = w // 2
        # Split the image into four quadrants to keep GPU memory low;
        # the predicted count is the sum over all four density maps.
        img_1 = img[:, :, :h_d, :w_d]
        img_2 = img[:, :, :h_d, w_d:]
        img_3 = img[:, :, h_d:, :w_d]
        img_4 = img[:, :, h_d:, w_d:]
        density_1 = model(img_1).cpu().numpy()
        density_2 = model(img_2).cpu().numpy()
        density_3 = model(img_3).cpu().numpy()
        density_4 = model(img_4).cpu().numpy()

        # Ground-truth density map is stored next to the image in ground-truth-h5/.
        gt_file = h5py.File(img_path.replace('.jpg', '.h5').replace('images', 'ground-truth-h5'), 'r')
        groundtruth = np.asarray(gt_file['density'])
        pred_sum = density_1.sum() + density_2.sum() + density_3.sum() + density_4.sum()
        pred.append(pred_sum)
        gt.append(np.sum(groundtruth))
        print("done ", i, " pred ", pred_sum, " gt ", np.sum(groundtruth))

print(len(pred))
print(len(gt))
mae = mean_absolute_error(pred, gt)
rmse = np.sqrt(mean_squared_error(pred, gt))

print('MAE: ', mae)
print('RMSE: ', rmse)
File models/context_aware_network.py renamed from models/model_context_aware_network.py (similarity 100%)
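The evaluation loop above rests on a simple identity: summing a density map over four disjoint quadrants gives exactly the same count as summing the whole map, which is why the script can process memory-friendly tiles without changing the predicted count. A minimal sketch of that identity and of the image-level MAE/RMSE metrics computed at the end of the script (the synthetic arrays and count values below are illustrative stand-ins, not data from the repository):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
density = rng.random((240, 320))  # stand-in for a predicted density map

# Split into four quadrants, as the evaluation script does on the input image.
h_d, w_d = density.shape[0] // 2, density.shape[1] // 2
quadrants = [density[:h_d, :w_d], density[:h_d, w_d:],
             density[h_d:, :w_d], density[h_d:, w_d:]]

# The per-quadrant sums add up to the full-image sum exactly.
tiled_count = sum(q.sum() for q in quadrants)
assert np.isclose(tiled_count, density.sum())

# Metrics over image-level counts, as at the end of the script.
pred = [105.2, 98.7, 120.1]  # hypothetical predicted counts
gt = [100.0, 101.0, 118.0]   # hypothetical ground-truth counts
mae = mean_absolute_error(gt, pred)
rmse = np.sqrt(mean_squared_error(gt, pred))
print("MAE:", mae, "RMSE:", rmse)
```

Note that only the image-level count sums enter the metrics; the spatial layout of the density maps is not scored here.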
Clone this repository using HTTP(S):
git clone https://rocketgit.com/user/hahattpro/crowd_counting_framework