initBasti / Amazon2PlentySync (public) (License: GPLv3) (since 2019-01-27) (hash sha1)
Transfer your data from your Amazon Flatfile spreadsheet to the Plentymarkets system. A how-to is included in the README.
List of commits:
Subject Hash Author Date (UTC)
Added FNSKU code and removed freetext fields to replace them with properties 4f77496d60be1afbd90a7279b14643267b4430cc Sebastian Fricke 2019-03-27 11:09:55
Update to property Upload because it isn't necessary to have a property for each material 37c01ce472dc85a9f7a39bd164a2ea53c28b4955 Sebastian Fricke 2019-03-18 16:08:14
added a filter for items with only 1 size that currently works with single parent child combinations fd3bf2b659614d5518884eb3da77b564cd0018eb Sebastian Fricke 2019-02-28 16:08:57
Market connection adjusted to AmazonFBA 1feb4b2e96c6a55ad0494696fd18fd6fb42babb0 Sebastian Fricke 2019-02-25 12:21:05
current version Feb 2019 00b24836dd378f21942ed323c2b66f928b9fb4c4 Sebastian Fricke 2019-02-25 09:00:00
Changes to fit to the new flatfile format 91cd339571f607e88f6e922f1a47630c4c8d62a7 Sebastian Fricke 2019-02-08 13:28:02
Small removal of redundant code b271de0b1be1d83be088b00a35b5618af088b58a Sebastian Fricke 2019-01-30 18:08:15
General improvements and property upload bb48084db4359210eb892a04f1322f6fda822bef Sebastian Fricke 2019-01-30 17:43:32
Fixed scripts according to dataformat changes + readme dec28d9e6ff5c5c903d5ca01a969e661d43b66c6 Sebastian Fricke 2019-01-29 21:08:04
Working Checkboxes and file import 25378c68a6220c1c6570642920e6150a50415153 Sebastian Fricke 2019-01-29 21:03:23
Added checkboxes, descriptions, import and runbutton 2021f0960e70c8c229ec08488165dc01b998a6e0 Sebastian Fricke 2019-01-27 22:19:18
Added market connection, cosmetics in product import c9a771d5e7a3a80adc650e773c568e00dd8e2aea Sebastian Fricke 2019-01-23 15:01:47
Amazon Data Upload 33dbd0ed6945c01d8917ceae3cf3964f051a2288 Sebastian Fricke 2019-01-22 14:43:39
Readme started, amazon sku upload, vari upload, images f43a9e83598c3e4623bcb08667e2b4e649b2cdea Sebastian Fricke 2019-01-22 10:44:40
Amazon SKU Upload 8586da2ae91d49c81a0d9b6ff220c8a1b1b011a6 Sebastian Fricke 2019-01-16 18:36:54
Inital Commit with current working version of the CLI Tool and the work in progress of the GUI. 207fef4277f7c169aa79eb39ec1aaaab258b888c Sebastian Fricke 2019-01-16 09:47:43
Initial commit ba965ee75fe09437fb08da5edd25b20e39e17eff Sebastian Fricke 2019-01-16 09:42:30
Commit 4f77496d60be1afbd90a7279b14643267b4430cc - Added FNSKU code and removed freetext fields to replace them with properties
Author: Sebastian Fricke
Author date (UTC): 2019-03-27 11:09
Committer name: Sebastian Fricke
Committer date (UTC): 2019-03-27 11:09
Parent(s): 37c01ce472dc85a9f7a39bd164a2ea53c28b4955
Signing key:
Tree: 48d4950646f684c3a3c1c2c222f4575cc9c2051c
File Lines added Lines deleted
packages/amazon_data_upload.py 1 19
packages/item_upload.py 57 25
packages/stock_upload.py 4 2
packages/variation_upload.py 38 7
product_import.py 2 2
File packages/amazon_data_upload.py changed (mode: 100644) (index 0e82e4b..e08b996)
@@ ... @@ def amazonDataUpload(flatfile, export):
 
     column_names = [
         'ItemAmazonProductType', 'ItemAmazonFBA',
-        'bullet_point1','bullet_point2', 'bullet_point3',
-        'bullet_point4', 'bullet_point5',
-        'fit_type', 'lifestyle', 'batteries_required',
-        'supplier_declared_dg_hz_regulation1',
-        'department_name', 'variation_theme', 'collection_name',
-        'material_composition', 'size_map', 'size_name',
-        'color_map', 'ItemID','ItemShippingWithAmazonFBA'
+        'ItemID','ItemShippingWithAmazonFBA'
     ]
 
     Data = SortedDict()
@@ ... @@ def amazonDataUpload(flatfile, export):
             product_type = type_id[key]
 
             values = [product_type, '1',
-                      row['bullet_point1'], row['bullet_point2'],
-                      row['bullet_point3'], row['bullet_point4'],
-                      row['bullet_point5'], row['fit_type'],
-                      row['lifestyle'], row['batteries_required'],
-                      row['supplier_declared_dg_hz_regulation1'],
-                      row['department_name'],
-                      row['variation_theme'],
-                      row['collection_name'],
-                      row['material_composition'],
-                      row['size_map'],
-                      row['size_name'],
-                      row['color_map'],
                       '0','1']
 
             Data[row['item_sku']] = SortedDict(zip(column_names, values))
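The diff above removes the freetext columns and keeps only the fixed Amazon/FBA fields, while retaining the project's core pattern: zip a fixed list of Plentymarkets column headers with per-row values and key the result by the Amazon SKU. A minimal sketch of that pattern, using a plain `dict` in place of the project's `SortedDict` and hypothetical flatfile rows:

```python
# Column headers for the Plentymarkets upload, as in the trimmed-down diff.
column_names = ['ItemAmazonProductType', 'ItemAmazonFBA',
                'ItemID', 'ItemShippingWithAmazonFBA']

# Hypothetical flatfile rows (in the real script these come from csv.DictReader).
rows = [{'item_sku': 'SHIRT-001', 'product_type': '13'}]

data = {}
for row in rows:
    # One value per column header, in matching order.
    values = [row['product_type'], '1', '0', '1']
    data[row['item_sku']] = dict(zip(column_names, values))

print(data['SHIRT-001']['ItemAmazonProductType'])  # → 13
```

Because `zip` pairs positionally, the `values` list must stay in the exact order of `column_names`; reordering one without the other silently scrambles the upload columns.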
File packages/item_upload.py changed (mode: 100644) (index 09aff3f..b839ec6)
@@ ... @@ def itemPropertyUpload(flatfile, export):
     with open(flatfile, mode='r') as item:
         reader = csv.DictReader(item, delimiter=';', lineterminator='\n')
 
-        material = {}
-        value = {}
-        # search for a material name and assign a number that correlates to it
+        # define the names of the property fields within the flatfile
+        property_names = ['bullet_point1', 'bullet_point2'
+                          , 'bullet_point3', 'bullet_point4'
+                          , 'bullet_point5', 'fit_type'
+                          , 'lifestyle', 'batteries_required'
+                          , 'supplier_declared_dg_hz_regulation'
+                          , 'department_name', 'variation_theme'
+                          , 'collection_name', 'material_composition'
+                          , 'outer_material_type']
+
+        # Assign the Plentymarkets property ID to the property_names
+        property_id = dict()
+
+        id_values = ['15', '16'
+                     , '17', '24'
+                     , '19', '20'
+                     , '9', '10'
+                     , '14'
+                     , '13', '12'
+                     , '11', '8'
+                     , '7']
+
+        property_id = dict( zip(property_names, id_values) )
+
+        properties = dict()
+
         for row in reader:
             if(row['parent_child'] == 'parent'):
-                if(re.search(r'(cotton|baumwolle)',
-                             row['outer_material_type'].lower())):
-
-                    material[row['item_sku']] = 7
-                    value[row['item_sku']] = "Baumwolle"
-                if(re.search(r'(hemp|hanf)',
-                             row['outer_material_type'].lower())):
+                try:
+                    values = [row[property_names[0]], row[property_names[1]]
+                              , row[property_names[2]], row[property_names[3]]
+                              , row[property_names[4]], row[property_names[5]]
+                              , row[property_names[6]], row[property_names[7]]
+                              , row[property_names[8] + '1']
+                              , row[property_names[9]], row[property_names[10]]
+                              , row[property_names[11]], row[property_names[12]]
+                              , row[property_names[13]]
+                              ]
+                except ValueError as err:
+                    print("In property Upload: One of the values wasn't found : ", err)
 
-                    material[row['item_sku']] = 7
-                    value[row['item_sku']] = "Hanf"
-                if(re.search(r'(viskose|viscose)',
-                             row['outer_material_type'].lower())):
+                # Check for empty values
+                #for index, item in enumerate( values ):
+                #    if(not(item)):
+                #        print(row['item_sku'], " has no value on ", property_names[index], " !")
 
-                    material[row['item_sku']] = 7
-                    value[row['item_sku']] = "Viskose"
+                properties[row['item_sku']] = dict(zip(property_names, values))
 
     with open(export, mode='r') as item:
         reader = csv.DictReader(item, delimiter=';', lineterminator='\n')
@@ ... @@ def itemPropertyUpload(flatfile, export):
         Data = {}
         for row in reader:
             if(row['AttributeValueSetID'] == ''):
-                if(row['VariationNumber'] in [*material]):
-                    values = [material[row['VariationNumber']],
-                              row['ItemID'],
-                              row['VariationName'],
-                              'de',
-                              value[row['VariationNumber']]]
-
-                    Data[row['VariationNumber'] + '1'] = dict(zip(column_names,
-                                                                  values))
+                if(row['VariationNumber'] in [*properties]):
+                    for number, key in enumerate( properties[row['VariationNumber']] ):
+                        if(properties[row['VariationNumber']][key]):
+                            values = [
+                                property_id[key],
+                                row['ItemID'],
+                                row['VariationName'],
+                                'de',
+                                properties[row['VariationNumber']][key]
+                            ]
+
+                            Data[row['VariationNumber'] + str( number )] = dict(zip(column_names,
+                                                                                    values))
         variation_upload.writeCSV(Data, "property", column_names)
+
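This change replaces the hard-coded material regexes with a general mapping from flatfile column names to Plentymarkets property IDs, emitting one upload row per non-empty property value. A short sketch of that mapping, using a subset of the names and IDs from the diff and a hypothetical parent row:

```python
# Subset of the flatfile property columns and their Plentymarkets IDs
# from the diff; the full lists cover fourteen properties.
property_names = ['bullet_point1', 'bullet_point2', 'fit_type']
id_values = ['15', '16', '20']
property_id = dict(zip(property_names, id_values))

# Hypothetical parent row from the flatfile; bullet_point2 is empty.
row = {'bullet_point1': '100% cotton', 'bullet_point2': '', 'fit_type': 'regular'}

upload_rows = []
for name in property_names:
    if row[name]:  # skip empty cells so no blank properties are uploaded
        upload_rows.append({'PropertyID': property_id[name], 'Value': row[name]})

print(len(upload_rows))  # → 2
```

Filtering on truthiness before appending mirrors the diff's `if(properties[row['VariationNumber']][key])` guard: empty flatfile cells produce no property row at all rather than an empty one.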
File packages/stock_upload.py changed (mode: 100644) (index 7046116..1b60548)
@@ ... @@ def priceUpload(flatfile, export):
     # The column header names
     column_names = ['VariationID', 'IsNet', 'VariationPrice', 'SalesPriceID']
 
+    price_id = ['1', '4']
     # create a Data Dictionary and fill it with the necessary values from the
     # flatfile
     Data = SortedDict()
@@ ... @@ def priceUpload(flatfile, export):
         reader = DictReader(item, delimiter=";")
         for row in reader:
             if(row['external_product_id']):
-                values = ['', 0, row['standard_price'], 1]
-                Data[row['item_sku']] = SortedDict(zip(column_names, values))
+                for ident in price_id:
+                    values = ['', 0, row['standard_price'], ident]
+                    Data[row['item_sku']] = SortedDict(zip(column_names, values))
 
     with open(export, mode='r') as item:
         reader = DictReader(item, delimiter=";")
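One caveat in the new price loop is worth noting: both iterations store under the same SKU key, so the entry for price ID '1' is overwritten by the entry for ID '4' and only one price row per SKU survives. A sketch demonstrating the behavior with a plain `dict` and a hypothetical row:

```python
column_names = ['VariationID', 'IsNet', 'VariationPrice', 'SalesPriceID']
price_id = ['1', '4']

# Hypothetical flatfile row.
row = {'item_sku': 'SHIRT-001', 'standard_price': '19.99'}

data = {}
for ident in price_id:
    values = ['', 0, row['standard_price'], ident]
    # Same key on every iteration: the previous entry is replaced.
    data[row['item_sku']] = dict(zip(column_names, values))

print(len(data), data['SHIRT-001']['SalesPriceID'])  # → 1 4
```

A compound key such as `row['item_sku'] + ident`, analogous to the `sku + barcode` keys EANUpload uses in variation_upload.py, would keep one entry per price ID.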
File packages/variation_upload.py changed (mode: 100644) (index c1b4b5a..63ed635)
@@ ... @@ def setActive(flatfile, export):
     output_path = writeCSV(Data, 'active', column_names)
 
 
-def EANUpload(flatfile, export):
+def EANUpload(flatfile, export, stocklist):
     # open the flatfile get the ean for an sku and save it into a dictionary with
     # columnheaders of the plentymarket dataformat
 
     column_names = ['BarcodeID', 'BarcodeName', 'BarcodeType',
                     'Code', 'VariationID', 'VariationNumber']
+
+    barcode_types = {'EAN' : {'id' : 3, 'name' : 'UPC', 'type' : 'UPC'},
+                     'FNSKU' : {'id' : 5, 'name' : 'FNSKU', 'type' : 'EAN_13'}}
     Data = {}
     with open(flatfile, mode='r') as item:
         reader = csv.DictReader(item, delimiter=";")
 
         for row in reader:
-            values = ['3', 'UPC', 'UPC',
-                      row['external_product_id'], '', row['item_sku']]
-            Data[row['item_sku']] = dict(zip(column_names, values))
+            if(row['parent_child'] == 'child'):
+                for barcode in barcode_types:
+                    # Set code to an empty String if the barcode type matches EAN set it to to
+                    # the external_product_id
+                    code = ''
+                    if(barcode == 'EAN'):
+                        code = row['external_product_id']
+
+                    values = [
+                        barcode_types[barcode]['id'], barcode_types[barcode]['name'],
+                        barcode_types[barcode]['type'], code,
+                        '', row['item_sku']
+                    ]
+                    Data[row['item_sku'] + barcode] = dict(zip(column_names, values))
 
     # open the exported file to get the variation id
     with open(export, mode='r') as item:
         reader = csv.DictReader(item, delimiter=";")
 
         for row in reader:
-            if(row['VariationNumber'] in [*Data]):
-                Data[row['VariationNumber']]['VariationID'] = row['VariationID']
+            for barcode in barcode_types:
+                if(row['VariationNumber'] + barcode in [*Data]):
+                    Data[row['VariationNumber'] + barcode]['VariationID'] = row['VariationID']
+
+    with open(stocklist, mode='r') as item:
+        reader = csv.DictReader(item, delimiter=";")
 
-    output_path = writeCSV(Data, 'EAN', column_names)
+        for row in reader:
+            print(row['fnsku'])
+            for barcode in barcode_types:
+                if(row['MASTER'] + barcode in [*Data]):
+                    # Set code to an empty String if the barcode type matches FNSKU set it to to
+                    # the external_product_id
+                    code = ''
+                    if(barcode == 'FNSKU'):
+                        code = row['fnsku']
+
+                    if(code):
+                        Data[row['MASTER'] + barcode]['Code'] = code
+
+    output_path = writeCSV(Data, 'Barcode', column_names)
 
 
 def marketConnection(export, ebay=0, amazon=0):
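The key change in EANUpload is the compound dictionary key: one `Data` entry per SKU *and* barcode type, so EAN and FNSKU rows for the same variation can coexist instead of overwriting each other. A sketch of that pattern with the `barcode_types` mapping from the diff and a hypothetical child row:

```python
# Barcode type mapping as in the diff (note the EAN entry is labeled 'UPC'
# in the source; reproduced as-is).
barcode_types = {'EAN': {'id': 3, 'name': 'UPC', 'type': 'UPC'},
                 'FNSKU': {'id': 5, 'name': 'FNSKU', 'type': 'EAN_13'}}

# Hypothetical flatfile child row.
row = {'item_sku': 'SHIRT-001-M', 'parent_child': 'child',
       'external_product_id': '4251234567890'}

data = {}
if row['parent_child'] == 'child':
    for barcode in barcode_types:
        # Only the EAN code is known from the flatfile; the FNSKU code is
        # filled in later from the stocklist.
        code = row['external_product_id'] if barcode == 'EAN' else ''
        # Compound key: SKU plus barcode type keeps both entries distinct.
        data[row['item_sku'] + barcode] = {
            'BarcodeID': barcode_types[barcode]['id'],
            'Code': code,
        }

print(sorted(data))  # → ['SHIRT-001-MEAN', 'SHIRT-001-MFNSKU']
```

The same compound key is then reused when matching variation IDs from the export and FNSKU codes from the stocklist, which is why the lookup loops iterate over `barcode_types` before testing membership in `Data`.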
File product_import.py changed (mode: 100644) (index bc61aa0..857c562)
@@ ... @@ def main():
         print("Something went wrong at the Export file import!")
         print("spreadsheet csv containing the export : ", export)
     try:
-        print("EAN, Active, Merkmale & Price Upload")
-        EANUpload(sheet, export)
+        print("Active, Merkmale & Price Upload")
         setActive(sheet, export)
         itemPropertyUpload(sheet, export)
         priceUpload(sheet, export)
@@ ... @@ def main():
     stocklist = askopenfilename()
     print("spreadsheet csv containing the current stock : ", stocklist)
 
+    EANUpload(sheet, export, stocklist)
     stockUpload(sheet, stocklist)
 
     print("\nCreate a upload file for the SKU and Parent_SKU\nto connect existing items from amazon to plentyMarkets.\n")
Hints:
Before your first commit, do not forget to set up your git environment:
git config --global user.name "your_name_here"
git config --global user.email "your@email_here"

Clone this repository using HTTP(S):
git clone https://rocketgit.com/user/initBasti/Amazon2PlentySync

Clone this repository using ssh (do not forget to upload a key first):
git clone ssh://rocketgit@ssh.rocketgit.com/user/initBasti/Amazon2PlentySync

Clone this repository using git:
git clone git://git.rocketgit.com/user/initBasti/Amazon2PlentySync

You are allowed to anonymously push to this repository.
This means that your pushed commits will automatically be transformed into a merge request:
... clone the repository ...
... make some changes and some commits ...
git push origin main