initBasti / Amazon2PlentySync (public) (License: GPLv3) (since 2019-01-27) (hash sha1)
Transfer your data from your Amazon flatfile spreadsheet over to the Plentymarkets system. A how-to is included in the README.
List of commits:
Subject Hash Author Date (UTC)
added home product properties, improved dictionary iteration, fixed a key error in get_attributes 30d4aed4403c39a6865e30c0384c3360d375cbb6 Sebastian Fricke 2020-01-14 10:21:56
removed warning for missing flatfile columns as requested bfe6e22f7acb282a3af8423c81ceacf9fcf21ef4 Sebastian Fricke 2020-01-13 15:05:27
added initialization for the position variable 8331f92d62deb9ba7be7e97201c7c6afa7cf732a Sebastian Fricke 2020-01-13 14:47:57
improved code style and fixed problem where the dictionary containing the path was given to functions instead of the path itself 1a5bf99751b599f48d4687a9a6cbd55ffe213f5a Sebastian Fricke 2020-01-13 14:47:13
removed Barcode missing warning on parents b592779c6cc1588e2ae40394cab53d0d047746e7 Sebastian Fricke 2020-01-13 14:46:16
Added support for the amazon product types furnitureanddecor, bedandbath, swimwear b56708e55c3283a6cc2d3803b2abbb99bb125928 Sebastian Fricke 2020-01-13 14:16:40
fix failing attribute sync 87ea4bce17eba6c9c9842eaf9eb26249bf8d7da5 Sebastian Fricke 2020-01-13 12:15:35
new config handling d65bdfae89eceab6b1319d01373cf70ac7d8b63e Sebastian Fricke 2019-11-13 08:57:14
Fixed a problem that caused Data to not pass sorting; fixed error handling with the product type; updated category IDs 9a62d369fb24bc80765cd19e31fb255398fb8ed5 Sebastian Fricke 2019-09-12 09:27:54
fixed a merge conflict bug e6b4d9613237009d980cdbfc7ec65c3383a3495a Sebastian Fricke 2019-08-16 11:31:02
current status 15.8 94db3a5c98c596b24f00624fa4b772b9fd830b03 Sebastian Fricke 2019-08-15 14:26:42
Added manual file choosing in case of empty config 2df178528d70be15bfb2e1c9058f69e128236622 Sebastian Fricke 2019-08-15 10:11:41
Added Markdown choosing, fixed various bugs 991ed44df370cf80fc2e2c51d7427d63e221888f Sebastian Fricke 2019-08-15 09:30:55
Changed the item upload format to fix errors in the sync; moved the active-state upload to properties because it has to be done separately from the creation process 3b466364e3dcdf14b4cef5b8649ec9573c992324 Sebastian Fricke 2019-06-17 14:09:23
Removed the image upload from the item upload and added an export file together with functions to get the variation ID for the image upload; the image upload is now a single process 6349c4a7177345c25aa6d8ecd03740a75fa2520f Sebastian Fricke 2019-06-13 12:58:36
Updated the feature list to the current active list b5d1675bcb56c38a97c928d7800b6a29c2dea116 LagerBadel PC:Magdalena 2019-06-11 12:11:06
fixed a bug with the encoding of very large datasets where 10000 letters were not enough; increased waiting time but removed some mistakes that way 8f431d6a68fb21699950b1ca48a1592976789c74 LagerBadel PC:Magdalena 2019-06-06 13:41:52
small debugging improvements in writeCSV and missing colors 88db9e1362a4178805671f443554a7f0d3db9e69 LagerBadel PC:Magdalena 2019-06-06 11:52:31
Major update: added a GUI category chooser and attribute ID connection, and updated the whole script to work on Elastic Sync a8a6156e26e2783a695c87bda35aba725fd77023 Sebastian Fricke 2019-06-05 12:45:29
fixed a bug with the encoding function 0c5b9dd0414037743bf39fdc3420d55035bffa61 Sebastian Fricke 2019-05-14 15:10:17
Commit 30d4aed4403c39a6865e30c0384c3360d375cbb6 - added home product properties, improved dictionary iteration, fixed a key error in get_attributes
Author: Sebastian Fricke
Author date (UTC): 2020-01-14 10:21
Committer name: Sebastian Fricke
Committer date (UTC): 2020-01-14 10:21
Parent(s): bfe6e22f7acb282a3af8423c81ceacf9fcf21ef4
Signing key:
Tree: 37a85572a01b4b259b2223b2aac8da0e84badcc9
File Lines added Lines deleted
packages/item_upload.py 44 39
File packages/item_upload.py changed (mode: 100644) (index 7b74184..730dbcf)
@@ def itemPropertyUpload(flatfile, folder, filename):
         reader = csv.DictReader(item, delimiter=';', lineterminator='\n')

         # define the names of the property fields within the flatfile
-        property_names = ['bullet_point1', 'bullet_point2'
-                          , 'bullet_point3', 'bullet_point4'
-                          , 'bullet_point5', 'fit_type'
-                          , 'lifestyle', 'batteries_required'
-                          , 'supplier_declared_dg_hz_regulation'
-                          , 'department_name', 'variation_theme'
-                          , 'seasons', 'material_composition'
-                          , 'outer_material_type', 'collar_style'
-                          , 'neck_size', 'pattern_type'
-                          , 'sleeve_type']
+        property_names = ['bullet_point1', 'bullet_point2',
+                          'bullet_point3', 'bullet_point4',
+                          'bullet_point5', 'fit_type',
+                          'lifestyle', 'batteries_required',
+                          'supplier_declared_dg_hz_regulation1',
+                          'department_name', 'variation_theme',
+                          'seasons', 'material_composition',
+                          'outer_material_type', 'collar_style',
+                          'neck_size', 'pattern_type',
+                          'sleeve_type', 'installation_type',
+                          'finish_type', 'seasons1', 'paint_type1',
+                          'theme']

         # Assign the Plentymarkets property ID to the property_names
         property_id = dict()

-        id_values = ['15', '16'
-                     , '17', '24'
-                     , '19', '20'
-                     , '9', '10'
-                     , '14'
-                     , '13', '12'
-                     , '11', '8'
-                     , '7', '25'
-                     , '26', '28'
-                     , '29']
-
+        use_names = []
+        id_values = ['15', '16',
+                     '17', '24',
+                     '19', '20',
+                     '9', '10',
+                     '14',
+                     '13', '12',
+                     '11', '8',
+                     '7', '25',
+                     '26', '28',
+                     '29', '45',
+                     '46', '47',
+                     '48', '49']
         property_id = dict( zip(property_names, id_values) )

         properties = dict()
 
@@ def itemPropertyUpload(flatfile, folder, filename):
         for row in reader:
             if(row['parent_child'] == 'parent'):
                 try:
-                    values = [row[property_names[0]], row[property_names[1]]
-                              , row[property_names[2]], row[property_names[3]]
-                              , row[property_names[4]], row[property_names[5]]
-                              , row[property_names[6]], row[property_names[7]]
-                              , row[property_names[8] + '1']
-                              , row[property_names[9]], row[property_names[10]]
-                              , row[property_names[11]], row[property_names[12]]
-                              , row[property_names[13]], row[property_names[14]]
-                              , row[property_names[15]], row[property_names[16]]
-                              , row[property_names[17]]
-                              ]
+                    use_names = [i for i in property_names if i in [*row]]
+                    values = [row[i] for i in use_names]
                 except ValueError as err:
                     print("In property Upload: One of the values wasn't found : ", err)

                 # Check for empty values
-                properties[row['item_sku']] = dict(zip(property_names, values))
+                properties[row['item_sku']] = dict(zip(use_names, values))

         column_names = ['SKU', 'ID-property', 'Value', 'Lang', 'Active']
         Data = {}
         for index, row in enumerate( properties ):
-            for prop in property_id:
-                values = [row, property_id[prop], properties[row][prop], 'DE', 1]
+            for prop in use_names:
+                try:
+                    values = [row, property_id[prop],
+                              properties[row][prop], 'DE', 1]

-                Data[row + prop] = dict(zip(column_names, values))
+                    Data[row + prop] = dict(zip(column_names, values))
+                except KeyError as kerr:
+                    print("ERROR: Key {0} was not found in the flatfile"
+                          .format(kerr))


         barcode.writeCSV(Data, "Item_Merkmale", column_names, folder, filename)
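The core of this change can be sketched in isolation: instead of indexing every expected flatfile column (which raised a KeyError when a column was absent), the rework filters property_names down to the columns actually present in the row before zipping. The property names, IDs, and sample row below are illustrative, not the full 23-entry mapping from the script:

```python
import csv
import io

# Hypothetical subset of the flatfile property columns and their
# Plentymarkets property IDs (the real script maps many more).
property_names = ['bullet_point1', 'bullet_point2', 'fit_type']
id_values = ['15', '16', '20']
property_id = dict(zip(property_names, id_values))

# A sample flatfile whose header lacks the 'fit_type' column entirely.
flatfile = ("item_sku;parent_child;bullet_point1;bullet_point2\n"
            "SKU-1;parent;soft;washable\n")
reader = csv.DictReader(io.StringIO(flatfile), delimiter=';')

properties = {}
for row in reader:
    if row['parent_child'] == 'parent':
        # Keep only the property columns that exist in this flatfile,
        # instead of indexing every expected name and risking a KeyError.
        use_names = [name for name in property_names if name in row]
        values = [row[name] for name in use_names]
        properties[row['item_sku']] = dict(zip(use_names, values))

# Build one upload row per (SKU, property) pair, as the script does
# before handing the data to its CSV writer.
upload = {}
for sku in properties:
    for prop in properties[sku]:
        upload[sku + prop] = {'SKU': sku, 'ID-property': property_id[prop],
                              'Value': properties[sku][prop],
                              'Lang': 'DE', 'Active': 1}

print(upload)
```

Because `zip` truncates to the shorter of its arguments, pairing `use_names` with the filtered `values` keeps names and values aligned even when columns are missing.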
 
@@ def get_attributes(dataset, sets):

         output_string = ''
         try:
-            if(len(sets[dataset['parent_sku']]['color']) > 1):
-                output_string = 'color_name:' + dataset['color_name']
+            if(dataset['parent_sku'] in [*sets]):
+                if(len(sets[dataset['parent_sku']]['color']) > 1):
+                    output_string = 'color_name:' + dataset['color_name']
+            else:
+                print("{0} not found in {1}".format(
+                    dataset['parent_sku'], ','.join([*sets])
+                ))
         except Exception as err:
             print("Error @ adding color to string (get_attributes)\nerr:{0}"
                   .format(err))
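The pattern introduced here, checking a key's membership before the nested lookup rather than letting the exception handler absorb a missing parent SKU, can be sketched standalone. The function name and sample data are made up for illustration:

```python
# Hypothetical parent/attribute sets, shaped like the ones
# find_similar_attr builds: one color set and one size set per parent.
sets = {'P-100': {'color': {'red', 'blue'}, 'size': set()}}

def color_attribute(dataset, sets):
    # Guard the nested lookup: only index sets[...] after confirming
    # the parent SKU is actually a key, as the fixed get_attributes does.
    output_string = ''
    if dataset['parent_sku'] in sets:
        if len(sets[dataset['parent_sku']]['color']) > 1:
            output_string = 'color_name:' + dataset['color_name']
    else:
        print("{0} not found in {1}".format(
            dataset['parent_sku'], ','.join(sets)))
    return output_string

print(color_attribute({'parent_sku': 'P-100', 'color_name': 'red'}, sets))
```

An explicit membership test keeps the "parent SKU missing" case distinct from genuine errors, so the broad `except` in the original no longer masks it.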
 
@@ def find_similar_attr(flatfile):

         for row in reader:
             # If it is a parent create a new dictionary with 2 sets for color and size
-            if(row['parent_child'] == 'parent'):
+            if(row['parent_child'].lower() == 'parent'):
                 color = set()
                 size = set()
                 Data[row['item_sku']] = {'color':color, 'size':size}
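A minimal sketch of this data collection step, with the case-insensitive `'parent'` check the commit introduces. The column names come from the diff; the sample rows and the child-aggregation branch are assumptions, since the diff only shows the parent branch:

```python
import csv
import io

# Made-up flatfile rows; note the parent row uses 'Parent' with a
# capital P, which the old exact comparison would have missed.
flatfile = (
    "item_sku;parent_child;parent_sku;color_name;size_name\n"
    "P-1;Parent;;;\n"
    "C-1;child;P-1;red;M\n"
    "C-2;child;P-1;blue;L\n"
)
reader = csv.DictReader(io.StringIO(flatfile), delimiter=';')

data = {}
for row in reader:
    # .lower() accepts 'parent', 'Parent', 'PARENT', ...
    if row['parent_child'].lower() == 'parent':
        data[row['item_sku']] = {'color': set(), 'size': set()}
    elif row['parent_sku'] in data:
        # Assumed follow-up step: collect each child's attributes
        # into its parent's sets.
        data[row['parent_sku']]['color'].add(row['color_name'])
        data[row['parent_sku']]['size'].add(row['size_name'])

print(data)
```

Using sets here deduplicates attribute values for free, which is what makes the later `len(... ['color']) > 1` test in get_attributes meaningful.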
Hints:
Before your first commit, do not forget to set up your git environment:
git config --global user.name "your_name_here"
git config --global user.email "your@email_here"

Clone this repository using HTTP(S):
git clone https://rocketgit.com/user/initBasti/Amazon2PlentySync

Clone this repository using ssh (do not forget to upload a key first):
git clone ssh://rocketgit@ssh.rocketgit.com/user/initBasti/Amazon2PlentySync

Clone this repository using git:
git clone git://git.rocketgit.com/user/initBasti/Amazon2PlentySync

You are allowed to anonymously push to this repository.
This means that your pushed commits will automatically be transformed into a merge request:
... clone the repository ...
... make some changes and some commits ...
git push origin main