File README.md changed (mode: 100644) (index cbf05bc..87e1df6)

...
  Preparation

  Check your flatfile version; the current version of this program works with the apparel flatfile of 2019.

  Make sure to prepare your PlentyMarkets account; you need the following list of data formats:
- - Item upload:
-   Erstellung der Parentartikel, der Artikel agiert dabei als eine Art Hülle zum eigentlichen Parent.
-   Die Grundversion nutzt eine Liste interner Nummern, um die ItemID zu bestimmen. Dies ist aber
-   nicht notwendig.
-   Creation of the parent article; the article acts as a shell for the actual parent variation.
-   The basic version of the program uses an internal number list to fill in the ItemID. But this is no
+
+   de = [Erstellung der Parentartikel, der Artikel agiert dabei als eine Art Hülle zum eigentlichen Parent.
+   Die Grundversion nutzt eine Liste interner Nummern, um die ItemID zu bestimmen. Dies ist aber
+   nicht notwendig.]
+   en = [Creation of the parent article; the article acts as a shell for the actual parent variation.
+   The basic version of the program uses an internal number list to fill in the ItemID. But this is not
+   necessary.]
  - Item Upload
  {
  [Typ: Item]
- CategoryLevel1Name: IMPORT
- CategoryLevel2Name: IMPORT
- CategoryLevel3Name: IMPORT
- CategoryLevel4Name: IMPORT
- CategoryLevel5Name: IMPORT
- CategoryLevel6Name: IMPORT
- ItemID: IMPORT
- PrimaryVariationCustomNumber: IMPORT
- PrimaryVariationLengthMM: IMPORT
- PrimaryVariationWidthMM: IMPORT
- PrimaryVariationHeightMM: IMPORT
- PrimaryVariationWeightG: IMPORT
- PrimaryVariationName: IMPORT
- PrimaryVariationPurchasePrice: IMPORT
- PrimaryVariationMainWarehouse: IMPORT
- ItemOriginCountry: IMPORT
- ItemProducer: IMPORT
- ItemProducerID: IMPORT
- ItemTextName: IMPORT
- ItemTextDescription: IMPORT
+ ItemID: IMPORT, [1]
+ PrimaryVariationCustomNumber: IMPORT, [2]
+ PrimaryVariationLengthMM: IMPORT, [3]
+ PrimaryVariationWidthMM: IMPORT, [4]
+ PrimaryVariationHeightMM: IMPORT, [5]
+ PrimaryVariationWeightG: IMPORT, [6]
+ PrimaryVariationName: IMPORT, [7]
+ PrimaryVariationMainWarehouse: IMPORT, [8]
+ PrimaryVariationPurchasePrice: IMPORT, [9]
+ ItemOriginCountry: IMPORT, [10]
+ ItemProducer: IMPORT, [11]
+ ItemProducerID: IMPORT, [12]
+ ItemProductType: IMPORT, [13]
+ ItemTextName: IMPORT, [14]
+ ItemTextDescription: IMPORT, [15]
+ ItemTextKeywords: IMPORT, [16]
+ ItemTextLang: IMPORT, [17]
+ PrimaryVariationExternalID: IMPORT, [18]
+ PrimaryVariationActive: IMPORT, [19]
+ PrimaryVariationAutoStockInvisible: IMPORT, [20]
+ PrimaryVariationAutoStockNoPositiveStockIcon: IMPORT, [21]
+ PrimaryVariationAutoStockPositiveStockIcon: IMPORT, [22]
+ PrimaryVariationAutoStockVisible: IMPORT, [23]
+ PrimaryVariationAvailability: IMPORT, [24]
+ ItemMarking1: IMPORT, [25]
+ ItemMarking2: IMPORT [26]
  }
  - Variation Upload:
  Erstellung der Variationen, diese werden dem im Itemupload hochgeladenen Parent zugewiesen.

... Make sure to prepare your PlentyMarkets Account you need the following list of d

  MainWarehouse: IMPORT
  Availability: IMPORT
  AutoStockVisible: IMPORT
+ ExternalID: Import
  }
  - Attribute Upload
  Erstellung der in der Liste genutzten Farben, Größen und Materialien im Plentymarkets-System.
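The Item block above maps each PlentyMarkets column to an IMPORT marker, with the bracketed number giving the column's position in the upload file. As a rough illustration (not part of the repository), the scripts below build exactly such semicolon-delimited rows:

```python
import csv
import io

# A minimal sketch: one upload row following the "Item" data format listed
# above (only the first four columns shown), written semicolon-delimited
# the way the upload scripts do.
column_names = ['ItemID', 'PrimaryVariationCustomNumber',
                'PrimaryVariationLengthMM', 'PrimaryVariationWidthMM']

values = ['1234', 'SKU-001', '300', '200']

row = dict(zip(column_names, values))

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=column_names, delimiter=';',
                        lineterminator='\n')
writer.writeheader()
writer.writerow(row)

print(buffer.getvalue())
```

The SKU and dimension values here are made up; the real rows are filled from the flatfile as shown in the diffs below.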
File packages/amazon_data_upload.py changed (mode: 100644) (index 5fc97a7..1c6e169)

  import csv
  from os.path import isfile
  import sys
- from variation_upload import writeCSV
+ from packages import variation_upload
  try:
      from sortedcontainers import SortedDict
  except ImportError:

... def amazonSkuUpload(flatfile, export):

          Data[row['item_sku']]['SKU'] = row['item_sku']
          Data[row['item_sku']]['ParentSKU'] = row['parent_sku']

-     output_path = writeCSV(Data, 'sku_amazon', column_names)
+     output_path = variation_upload.writeCSV(Data, 'sku_amazon', column_names)


  def amazonDataUpload(flatfile, export):

-     column_names = [
-         'ItemAmazonProductType', 'ItemProductType', 'bullet_point1'
-         , 'bullet_point2', 'bullet_point3', 'bullet_point4'
-         , 'bullet_point5', 'fit_type'
-         , 'lifestyle', 'batteries_required'
-         , 'supplier_declared_dg_hz_regulation1'
-         , 'supplier_declared_dg_hz_regulation2'
-         , 'supplier_declared_dg_hz_regulation3'
-         , 'supplier_declared_dg_hz_regulation4'
-         , 'supplier_declared_dg_hz_regulation5', 'ItemID']
+     column_names = ['ItemAmazonProductType', 'ItemProductType', 'bullet_point1',
+                     'bullet_point2', 'bullet_point3', 'bullet_point4',
+                     'bullet_point5', 'fit_type',
+                     'lifestyle', 'batteries_required',
+                     'supplier_declared_dg_hz_regulation1',
+                     'supplier_declared_dg_hz_regulation2',
+                     'supplier_declared_dg_hz_regulation3',
+                     'supplier_declared_dg_hz_regulation4',
+                     'supplier_declared_dg_hz_regulation5', 'ItemID']

      Data = SortedDict()

... def amazonDataUpload(flatfile, export):

      for row in reader:
          if(row['parent_child'] == 'parent'):
-             values = [row['product_type'], row['product_type']
-                       , row['bullet_point1'], row['bullet_point2']
-                       , row['bullet_point3'], row['bullet_point4']
-                       , row['bullet_point5'], row['fit_type']
-                       , row['lifestyle'], row['batteries_required']
-                       , row['supplier_declared_dg_hz_regulation1']
-                       , row['supplier_declared_dg_hz_regulation2']
-                       , row['supplier_declared_dg_hz_regulation3']
-                       , row['supplier_declared_dg_hz_regulation4']
-                       , row['supplier_declared_dg_hz_regulation5']
-                       , '']
+             values = [row['feed_product_type'], row['feed_product_type'],
+                       row['bullet_point1'], row['bullet_point2'],
+                       row['bullet_point3'], row['bullet_point4'],
+                       row['bullet_point5'], row['fit_type'],
+                       row['lifestyle'], row['batteries_required'],
+                       row['supplier_declared_dg_hz_regulation1'],
+                       row['supplier_declared_dg_hz_regulation2'],
+                       row['supplier_declared_dg_hz_regulation3'],
+                       row['supplier_declared_dg_hz_regulation4'],
+                       row['supplier_declared_dg_hz_regulation5'],
+                       '']
          Data[row['item_sku']] = SortedDict(zip(column_names, values))

      with open(export, mode='r') as item:

... def amazonDataUpload(flatfile, export):

              if(row['VariationNumber'] in [*Data]):
                  Data[row['VariationNumber']]['ItemID'] = row['ItemID']

-     writeCSV(dataobject=Data, name='amazon_data', columns=column_names)
+     variation_upload.writeCSV(dataobject=Data, name='amazon_data', columns=column_names)
+
+
+ def asinUpload(export, stock):
+
+     column_names = ['ASIN', 'MarketplaceCountry', 'Position', 'VariationID']
+
+     Data = {}
+
+     with open(export, mode='r') as item:
+         reader = csv.DictReader(item, delimiter=';')
+
+         for row in reader:
+             if row['VariationID']:
+                 values = ['', '1', '', row['VariationID']]
+
+                 Data[row['VariationNumber']] = dict(zip(column_names, values))
+
+     with open(stock, mode='r') as item:
+         reader = csv.DictReader(item, delimiter=';')
+
+         for row in reader:
+             if row['MASTER'] in [*Data]:
+                 Data[row['MASTER']]['ASIN'] = row['asin']
+
+     variation_upload.writeCSV(dataobject=Data, name='asin', columns=column_names)
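Every function in this commit funnels its result through `variation_upload.writeCSV(dataobject, name, columns)`; the signature is visible in the variation_upload.py hunk further down, but the body is not. A hedged sketch of what such a helper could look like — the file-name scheme, output location, and sort order are assumptions, not the repository's actual code:

```python
import csv

def writeCSV(dataobject, name, columns):
    # Hypothetical reconstruction from the call sites: write each entry of
    # the (sorted) data dict as one semicolon-delimited row and return the
    # path of the file that was written.
    output_path = name + '_upload.csv'  # assumed naming scheme
    with open(output_path, mode='w', newline='') as out:
        writer = csv.DictWriter(out, fieldnames=columns, delimiter=';',
                                lineterminator='\n')
        writer.writeheader()
        for key in sorted(dataobject):
            writer.writerow(dataobject[key])
    return output_path
```

This matches how the callers use it: they pass a dict of row dicts keyed by SKU and rely on the returned path.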
|
File packages/item_upload.py changed (mode: 100644) (index e63a688..98f0978)

  import csv
+ import re
  from os.path import isfile
  from sys import exit
- from variation_upload import writeCSV
+ from packages import variation_upload


  try:

... except ImportError:

      raise ImportError


- def itemUpload(filepath, intern_number):
-     # The column headers for the output file as expected from the plentymarkets dataformat
-     column_names_output = ['CategoryLevel1Name', 'CategoryLevel2Name',
-                            'CategoryLevel3Name', 'CategoryLevel4Name',
-                            'CategoryLevel5Name', 'CategoryLevel6Name',
-                            'ItemID', 'PrimaryVariationCustomNumber',
-                            'PrimaryVariationLengthMM',
-                            'PrimaryVariationWidthMM',
-                            'PrimaryVariationHeightMM',
-                            'PrimaryVariationWeightG',
-                            'PrimaryVariationName',
-                            'PrimaryVariationPurchasePrice', 'ItemImageURL',
-                            'PrimaryVariationMainWarehouse',
-                            'ItemOriginCountry', 'ItemProducer',
-                            'ItemProducerID', 'ItemTextName',
-                            'ItemTextDescription']
-
-     # default values: CategoryLevel5Name : '' , CategoryLevel6Name : '', ItemOriginCountry : '62' , ItemProducer : 'PANASIAM', ItemProducerID : '3'
+ def itemUpload(flatfile, intern):
+     # The column headers for the output file as expected from the
+     # plentymarkets dataformat
+     column_names = ['ItemID', 'PrimaryVariationCustomNumber',
+                     'PrimaryVariationLengthMM', 'PrimaryVariationWidthMM',
+                     'PrimaryVariationHeightMM', 'PrimaryVariationWeightG',
+                     'PrimaryVariationName', 'PrimaryVariationMainWarehouse',
+                     'PrimaryVariationPurchasePrice', 'ItemOriginCountry',
+                     'ItemProducer', 'ItemProducerID', 'ItemProductType',
+                     'ItemTextName', 'ItemTextDescription', 'ItemTextKeywords',
+                     'ItemTextLang', 'PrimaryVariationExternalID',
+                     'PrimaryVariationActive',
+                     'PrimaryVariationAutoStockInvisible',
+                     'PrimaryVariationAutoStockNoPositiveStockIcon',
+                     'PrimaryVariationAutoStockPositiveStockIcon',
+                     'PrimaryVariationAutoStockVisible',
+                     'PrimaryVariationAvailability',
+                     'ItemMarking1', 'ItemMarking2']
+
+     # default values: CategoryLevel5Name : '' , CategoryLevel6Name : '',
+     # ItemOriginCountry : '62' , ItemProducer : 'PANASIAM',
+     # ItemProducerID : '3'

      # Unpack File and scrap data
      # INPUT
      # --------------------------------------------------------------
      Data = SortedDict()

-     with open(filepath, mode='r') as item:
+     with open(flatfile, mode='r') as item:
          reader = csv.DictReader(item, delimiter=";")
+
+         relationship = ['parent_child', 'Variantenbestandteil']
          for row in reader:
-             # if the item is a parent scrap the name and the desc from the
-             # flatfile
-             if(row['parent_child'] == 'parent'):
+             try:
+                 if(row[relationship[0]]):
+                     relationcolum = relationship[0]
+             except KeyError:
+                 try:
+                     if(row[relationship[1]]):
+                         relationcolum = relationship[1]
+                 except KeyError as err:
+                     print(err)
+                     print("There seems to be a new Flatfile, please check column for parent\n",
+                           " & child relationship for the headername and enter it within the\n",
+                           " first with open(flatfile....)")
+                     exit(1)
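The new try/except cascade probes which of the two known header names the flatfile uses for the parent/child relationship. The same detection can be sketched without exception handling (a hypothetical alternative, not the commit's code):

```python
def find_relation_column(row, candidates=('parent_child', 'Variantenbestandteil')):
    # Return the first candidate header present in this flatfile row;
    # raise if a new flatfile revision has renamed the column again.
    for name in candidates:
        if name in row:
            return name
    raise KeyError('unknown parent/child relationship column')

print(find_relation_column({'Variantenbestandteil': 'parent'}))
```

Checking `name in row` avoids the pitfall in the diff's version, where the second `except KeyError` clause on the same `try` is unreachable.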
|
+             # transform the text format to integer in order to adjust the
+             # height, width, length numbers from centimeter to millimeter
+
+             if(row[relationcolum]):
                  try:
-                     if(row['package_height'] and row['package_length'] and row['package_width']):
+                     if(row['package_height'] and
+                        row['package_length'] and
+                        row['package_width']):
+
                          row['package_height'] = int(row['package_height'])
                          row['package_length'] = int(row['package_length'])
                          row['package_width'] = int(row['package_width'])
+
+                 # if the number is a floating point number it has to be
+                 # transformed into a float first before the integer conversion
                  except ValueError as err:
                      row['package_height'] = int(float(row['package_height']))
                      row['package_length'] = int(float(row['package_length']))
                      row['package_width'] = int(float(row['package_width']))
+
                  except ValueError as err:
                      print(err)
-                     print(
-                         "/nPlease copy the values for height, length, width and weight\nfrom the children to the parent variation in the flatfile.\n")
+                     print("\nPlease copy the values for height, length, width",
+                           "and weight\nfrom the children to the parent",
+                           "variation in the flatfile.\n")
                      exit()
+
+             # get the keywords from the flatfile if it is an old flatfile:
+             # combine the keyword columns into a single one, then check the
+             # size of the keywords because the maximum for amazon is 250 byte
+             if(row['generic_keywords1']):
+                 keywords = ''
+                 try:
+                     keywords = str(row['generic_keywords1'] + '' +
+                                    row['generic_keywords2'] + '' +
+                                    row['generic_keywords3'] + '' +
+                                    row['generic_keywords4'] + '' +
+                                    row['generic_keywords5'])
+                 except Exception as err:
+                     print(err)
+                     print("The combination of the keywords failed!")
+             elif(row['generic_keywords']):
+                 keywords = row['generic_keywords']
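The comments above mention Amazon's 250-byte cap on keywords, which the added code does not enforce yet. A sketch of the combination with a byte-size guard (the separator and truncation strategy are assumptions, not the repository's code):

```python
def combine_keywords(row, limit=250):
    # Old flatfiles split keywords over generic_keywords1..5; newer ones use
    # a single generic_keywords column. Join whatever is present, then trim
    # until the UTF-8 encoding fits within the assumed Amazon byte limit.
    parts = [row.get('generic_keywords%d' % i, '') for i in range(1, 6)]
    if not any(parts):
        parts = [row.get('generic_keywords', '')]
    keywords = ' '.join(p for p in parts if p)
    while len(keywords.encode('utf-8')) > limit:
        keywords = keywords[:-1]
    return keywords
```

Measuring `len(keywords.encode('utf-8'))` rather than `len(keywords)` matters because umlauts in German keywords take two bytes each.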
              try:
-                 values = ['', '', '', '', '', '', '', row['item_sku'],
-                           row['package_length'] * 10,
+                 values = ['', row['item_sku'], row['package_length'] * 10,
                            row['package_width'] * 10,
                            row['package_height'] * 10,
-                           row['package_weight'],
-                           row['item_name'],
-                           row['standard_price'],
-                           '', 'Badel', '62', 'PANASIAM', '3',
-                           '', row['product_description']]
+                           row['package_weight'], row['item_name'],
+                           '104', '', '62', row['brand_name'].upper(), '3',
+                           row['feed_product_type'], '',
+                           row['product_description'], keywords, 'de',
+                           '', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 9, 1]
+
              except Exception as err:
                  print(err)
-             Data[row['item_sku']] = SortedDict(
-                 zip(column_names_output, values))
+             Data[row['item_sku']] = SortedDict(zip(column_names, values))

      # open the intern number csv to get the item ID
-     with open(intern_number, mode='r') as item:
+     with open(intern, mode='r') as item:
          reader = csv.DictReader(item, delimiter=";")
          for row in reader:
              if(row['amazon_sku'] in [*Data]):
                  Data[row['amazon_sku']]['ItemID'] = row['article_id']
+                 Data[row['amazon_sku']]['PrimaryVariationExternalID'] = row['full_number']

      # Write Data into new CSV for Upload
      # OUTPUT
      # --------------------------------------------------------------

-     writeCSV(Data, "item", column_names_output)
+     variation_upload.writeCSV(Data, "item", column_names)


  def itemPropertyUpload(flatfile, export):

... def itemPropertyUpload(flatfile, export):

          reader = csv.DictReader(item, delimiter=';', lineterminator='\n')

      material = {}
+     value = {}
      # search for a material name and assign a number that correlates to it
      for row in reader:
          if(row['parent_child'] == 'parent'):
              if(re.search(r'(cotton|baumwolle)',
                           row['outer_material_type'].lower())):
+
                  material[row['item_sku']] = 4
-             if(re.search(r'(hemp|hanf)',
-                          row['outer_material_type'].lower())):
-                 material[row['item_sku']] = 5
-             if(re.search(r'(viskose|viscose)',
-                          row['outer_material_type'].lower())):
-                 material[row['item_sku']] = 6
+                 value[row['item_sku']] = "Baumwolle"
+             if(re.search(r'(hemp|hanf)',
+                          row['outer_material_type'].lower())):
+
+                 material[row['item_sku']] = 5
+                 value[row['item_sku']] = "Hanf"
+             if(re.search(r'(viskose|viscose)',
+                          row['outer_material_type'].lower())):
+
+                 material[row['item_sku']] = 6
+                 value[row['item_sku']] = "Viskose"
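The chain of `re.search` checks above assigns both a PropertyItemID (`material`) and, new in this commit, a display value. The same lookup in table-driven form (a sketch; the IDs and names are copied from the diff):

```python
import re

# Material patterns mapped to (PropertyItemID, display value), as used in
# itemPropertyUpload above.
MATERIALS = [
    (r'(cotton|baumwolle)', 4, 'Baumwolle'),
    (r'(hemp|hanf)', 5, 'Hanf'),
    (r'(viskose|viscose)', 6, 'Viskose'),
]

def detect_material(text):
    # Return the first matching (id, name) pair, or None if no pattern hits.
    for pattern, property_id, name in MATERIALS:
        if re.search(pattern, text.lower()):
            return property_id, name
    return None

print(detect_material('100% Cotton'))  # (4, 'Baumwolle')
```

Returning on the first match also avoids the quirk in the original, where a description mentioning two materials would silently keep only the last one.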
      with open(export, mode='r') as item:
          reader = csv.DictReader(item, delimiter=';', lineterminator='\n')

-         column_names = ['PropertyItemID', 'ItemID',
-                         'PrimaryVariationCustomNumber']
+         column_names = ['PropertyItemID', 'ItemID', 'PrimaryVariationCustomNumber',
+                         'Lang', 'Value']

          Data = {}
          for row in reader:
              if(row['AttributeValueSetID'] == ''):
                  values = ['3',
                            row['ItemID'],
-                           row['VariationName']]
+                           row['VariationName'],
+                           'de',
+                           'PANASIAM']

                  Data[row['VariationNumber'] + '1'] = dict(zip(column_names,
                                                                values))
                  values = [material[row['VariationNumber']],
                            row['ItemID'],
-                           row['VariationName']]
+                           row['VariationName'],
+                           'de',
+                           value[row['VariationNumber']]]

                  Data[row['VariationNumber'] + '2'] = dict(zip(column_names,
                                                                values))
-     writeCSV(Data, "property", column_names)
+     variation_upload.writeCSV(Data, "property", column_names)
File packages/variation_upload.py changed (mode: 100644) (index 455a31f..c48af03)

... def writeCSV(dataobject, name, columns):

  def variationUpload(flatfile, intern_number):

      # The column header names
-     names = ['ItemID', 'VariationID', 'VariationNumber', 'VariationName', 'Position', 'LengthMM', 'WidthMM', 'HeightMM',
-              'WeightG', 'VariationAttributes', 'PurchasePrice', 'MainWarehouse', 'Availability', 'AutoStockVisible']
+     names = ['ItemID', 'VariationID', 'VariationNumber', 'VariationName', 'Position',
+              'LengthMM', 'WidthMM', 'HeightMM', 'WeightG', 'VariationAttributes',
+              'PurchasePrice', 'MainWarehouse', 'Availability', 'AutoStockVisible',
+              'ExternalID']

      # create a Data Dictionary and fill it with the necessary values from the flatfile
      Data = SortedDict()

      with open(flatfile, mode='r') as item:
          reader = DictReader(item, delimiter=";")
+
+         relationship = ['parent_child', 'Variantenbestandteil']
          for row in reader:
-             if(row['parent_child'] == 'parent'):
+             try:
+                 if(row[relationship[0]]):
+                     relationcolum = relationship[0]
+             except KeyError:
+                 try:
+                     if(row[relationship[1]]):
+                         relationcolum = relationship[1]
+                 except KeyError as err:
+                     print(err)
+                     print("There seems to be a new Flatfile, please check column for parent\n",
+                           " & child relationship for the headername and enter it within the\n",
+                           " first with open(flatfile....")
+             if(row[relationcolum] == 'parent'):
                  item_name = row['item_name']
-             if(row['parent_child'] == 'child'):
+             if(row[relationcolum] == 'child'):
                  try:
-                     if(row['package_height'] and row['package_length'] and row['package_width']):
+                     if(row['package_height'] and
+                        row['package_length'] and
+                        row['package_width']):
+
                          row['package_height'] = int(row['package_height'])
                          row['package_length'] = int(row['package_length'])
                          row['package_width'] = int(row['package_width'])

... def variationUpload(flatfile, intern_number):

                  except ValueError as err:
                      print(err)
-                     print(
-                         "/nPlease copy the values for height, length, width and weight\nfrom the children to the parent variation in the flatfile.\n")
+                     print(
+                         '\nPlease copy the values for height, length, width and weight\n',
+                         'from the children to the parent variation in the flatfile.\n')
                      exit()
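itemUpload and variationUpload both repeat this centimeter-to-millimeter conversion, including the fallback for sizes that arrive as floating-point strings (a plain `int('12.5')` raises ValueError). Condensed into one hypothetical helper:

```python
def to_millimeter(value):
    # Flatfile sizes are centimeter strings; PlentyMarkets expects millimeters.
    try:
        return int(value) * 10
    except ValueError:
        # e.g. '12.5' cannot go straight to int(), so go through float first
        return int(float(value)) * 10

print(to_millimeter('12'))    # 120
print(to_millimeter('12.5'))  # 120
```

Note that the float path truncates toward zero, so fractional millimeters from the flatfile are dropped, just as in the code above.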
      if(row['color_name']):

... def variationUpload(flatfile, intern_number):

              if(row['size_name']):
                  attributes += ';size_name:' + row['size_name']
              try:
-                 values = ['', '', row['item_sku'], item_name, '', int(row['package_length']) * 10, int(row['package_width']) * 10, int(
-                     row['package_height']) * 10, row['package_weight'], attributes, row['standard_price'], 'Badel', 'Y', 'Y']
+                 values = ['', '', row['item_sku'], item_name, '',
+                           int(row['package_length']) * 10,
+                           int(row['package_width']) * 10,
+                           int(row['package_height']) * 10,
+                           row['package_weight'], attributes,
+                           row['standard_price'], 'Badel', 'Y', 'Y', '']
              except Exception as err:
                  print(err)
                  exit()
              Data[row['item_sku']] = SortedDict(zip(names, values))

-     # open the intern numbers csv and fill in the remaining missing fields by using the item_sku as dict key
+     # open the intern numbers csv and fill in the remaining missing fields by using the
+     # item_sku as dict key
      with open(intern_number, mode='r') as item:
          reader = DictReader(item, delimiter=';')
          for row in reader:

... def variationUpload(flatfile, intern_number):

              Data[row['amazon_sku']]['ItemID'] = row['article_id']
              if(not(row['position'] == 0)):
                  Data[row['amazon_sku']]['Position'] = row['position']
+             Data[row['amazon_sku']]['ExternalID'] = row['full_number']

      output_path = writeCSV(Data, 'variation', names)

... def variationUpload(flatfile, intern_number):


  def setActive(flatfile, export):
-     # because of a regulation of the plentyMarkets system the active status has to be delivered as an extra upload
+     # because of a regulation of the plentyMarkets system the active status has to be
+     # delivered as an extra upload
      column_names = ['Active', 'VariationID']
      Data = {}
      # open the flatfile to get the sku names

... def setActive(flatfile, export):


  def EANUpload(flatfile, export):
-     # open the flatfile get the ean for an sku and save it into a dictionary with columnheaders of the plentymarket dataformat
+     # open the flatfile get the ean for an sku and save it into a dictionary with
+     # columnheaders of the plentymarket dataformat

      column_names = ['BarcodeID', 'BarcodeName', 'BarcodeType',
                      'Code', 'VariationID', 'VariationNumber']