initBasti / Amazon2PlentySync (public) (License: GPLv3) (since 2019-01-27) (hash sha1)
Transfer your data from your Amazon Flatfile spreadsheet over to the Plentymarkets system. Instructions are included in the README.
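The scripts in this repository work on semicolon-delimited CSV files: the Amazon flatfile is read with csv.DictReader, its columns are mapped onto the Plentymarkets import format, and a new upload CSV is written out. A minimal sketch of that round trip (file names and the column subset are illustrative only; the real scripts take their paths from the GUI and config):

import csv

# Illustrative paths, not repository defaults
flatfile_path = "amazon_flatfile.csv"
upload_path = "upload.csv"

with open(flatfile_path, mode='r') as source:
    rows = list(csv.DictReader(source, delimiter=";"))

with open(upload_path, mode='w', newline='') as target:
    writer = csv.DictWriter(target, delimiter=";",
                            fieldnames=['VariationNumber', 'VariationName'])
    writer.writeheader()
    for row in rows:
        # Map Amazon flatfile columns onto Plentymarkets column names
        writer.writerow({'VariationNumber': row['item_sku'],
                         'VariationName': row['item_name']})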
List of commits:
Subject Hash Author Date (UTC)
current status 15.8 94db3a5c98c596b24f00624fa4b772b9fd830b03 Sebastian Fricke 2019-08-15 14:26:42
Added manual file choosing in case of empty config 2df178528d70be15bfb2e1c9058f69e128236622 Sebastian Fricke 2019-08-15 10:11:41
Changed the item upload format to fix errors in the sync, moved active upload to properties because it has to be done seperatly to the creation process 3b466364e3dcdf14b4cef5b8649ec9573c992324 Sebastian Fricke 2019-06-17 14:09:23
Removed the image upload from item upload and added a exportfile together with functions to get the variation id for the image upload, the image upload is now a single process 6349c4a7177345c25aa6d8ecd03740a75fa2520f Sebastian Fricke 2019-06-13 12:58:36
Updated the feature list to the current active list b5d1675bcb56c38a97c928d7800b6a29c2dea116 LagerBadel PC:Magdalena 2019-06-11 12:11:06
fixed a bug with the encoding of very large datasets were the 10000 letters were not enough, increased waiting time but removed some mistakes that way 8f431d6a68fb21699950b1ca48a1592976789c74 LagerBadel PC:Magdalena 2019-06-06 13:41:52
small debugging improvements in writeCSV and missing colors 88db9e1362a4178805671f443554a7f0d3db9e69 LagerBadel PC:Magdalena 2019-06-06 11:52:31
Major Update added a gui category chooser, attribute id connection and updated the whole Script to work on Elastic Sync a8a6156e26e2783a695c87bda35aba725fd77023 Sebastian Fricke 2019-06-05 12:45:29
fixed a bug with the encoding function 0c5b9dd0414037743bf39fdc3420d55035bffa61 Sebastian Fricke 2019-05-14 15:10:17
Major Update added a config file for better useability and added a gui to enter the category and the name of the product further work towards the rework from dynamic import to elastic sync e4356af15a4b8f7393f85bd51c16b330bc3555af Sebastian Fricke 2019-05-14 14:43:03
Changed the price upload to identify items that are not in plentymarkets and added a webshop price 4ab9bcd988f9eb26647748a8f80f25c8c5b7f2e2 Sebastian Fricke 2019-05-03 09:18:35
added Webshop to marketconnections 84f93694fe0c67972ad951649d9f6f0d577d3e29 Sebastian Fricke 2019-05-01 14:12:00
Added the modelnumber feature and removed the creation of empty features ea98391f2dbdf8fb8e601153b4f6ebfca504929c Sebastian Fricke 2019-05-01 12:31:19
Changed the feature upload into a loop for more overview 0a1bee82659a576c6fb4f2641aa3990d8d686b3c Sebastian Fricke 2019-05-01 10:04:20
Added a few new instructions to the Instructions file b4878c59958f89a02937de1dfc7aabbd23e71061 LagerBadel PC:Magdalena 2019-04-18 09:41:10
Made some fields not required but added Warnings for the log file, additionally some new amazon features were added. 6392338b7e9968be3bc4da9031144c3cc2cfae48 Sebastian Fricke 2019-04-18 09:37:51
Added an error log system and improved overall workflow 2e3763e436899466db9f03f70ea926869afd3219 Sebastian Fricke 2019-04-18 08:12:27
Added additional feature uploads 528cad4899d3e3adca5098c1a0ce92c2a6b8a853 Sebastian Fricke 2019-04-16 10:25:49
Added an optimization for the initial directory for Linux 58b340605cba0603520ada8a184cc9fba5f8c3b8 Sebastian Fricke 2019-04-16 10:22:18
Fixed a typo in the build script f7943d8b2c33b89b083380902f1b1281366a12b2 Sebastian Fricke 2019-04-16 08:13:51
Commit 94db3a5c98c596b24f00624fa4b772b9fd830b03 - current status 15.8
Author: Sebastian Fricke
Author date (UTC): 2019-08-15 14:26
Committer name: Sebastian Fricke
Committer date (UTC): 2019-08-15 14:26
Parent(s): 2df178528d70be15bfb2e1c9058f69e128236622
Signing key:
Tree: eabf457e7a54661a1020eca6d10b17afc267fc6f
File Lines added Lines deleted
packages/gui/category_chooser.py 44 4
packages/item_upload.py 18 12
packages/stock_upload.py 85 0
packages/variation_upload.py 141 0
File packages/gui/category_chooser.py changed (mode: 100644) (index 1631dae..d69be15)
... ... from tkinter import messagebox as tmb
7 7 from packages import color as clr
8 8 from packages import item_upload
9 9
10 class MarkingDropdown(tkinter.Frame):
11 def __init__(self, master, *args, **kwargs):
12 tkinter.Frame.__init__(self, master, *args, **kwargs)
13 self.master = master
14 self.args = args
15 self.kwargs = kwargs
16 self.initialize()
17
18 def initialize(self):
19 self.grid()
20
21 self.optionvar = tkinter.StringVar(self)
22 self.resultvar = tkinter.StringVar(self)
23
24 self.options = {
25 'Neu':'14',
26 'Alt':'9',
27 'Item im Upload Prozess':'28'
28 }
29
30 self.optionvar.set('marking')
31
32 self.dropdown_menu = tkinter.OptionMenu(self, self.optionvar, *[ *self.options ])
33 self.dropdown_menu.grid(row=1, column=0, sticky="EW", padx=50)
34
35 self.optionvar.trace('w', self.change_dropdown)
36
37 def change_dropdown(self, *args):
38 if(self.optionvar.get() and not( self.optionvar.get() == 'marking' )):
39 self.resultvar = self.options[ self.optionvar.get() ]
40
10 41 class LabelBox(tkinter.Frame):
11 42 def __init__(self, master, similar_names):
12 43 tkinter.Frame.__init__(self, master)
 
... ... class CategoryChooser(tkinter.Tk):
290 321 self.atrpath = atrpath
291 322 self.atrdate = atrdate
292 323 self.newpath = {'upload-path':'', 'attribute-path':''}
293 self.data = {'name':'', 'categories':''}
324 self.data = {'name':'', 'categories':'', 'marking':''}
294 325 self.protocol("WM_WINDOW_DELETE", self.close_app)
295 326 self.missingcolors = {}
296 327 # Window position properties
 
... ... class CategoryChooser(tkinter.Tk):
340 371 self.namechooser = tkinter.Entry(self, width=50, bg="white")
341 372 self.namechooser.grid(row=6, columnspan=3, pady=10, padx=10)
342 373
374 self.markingdesc = DescBox(master=self, desctext="Choose a marking for the product")
375 self.markingdesc.grid(row=7, columnspan=3, pady=10, padx=10)
376
377 self.markingchooser = MarkingDropdown(master=self)
378 self.markingchooser.grid(row=8, columnspan=3, pady=10, padx=10)
379
343 380 self.accept = tkinter.Button(self, text="Accept",
344 command=lambda: self.get_input(self.dropdown.resultbox.get(), self.namechooser.get()))
345 self.accept.grid(row=7, column=3, pady=10, padx=10)
381 command=lambda: self.get_input(self.dropdown.resultbox.get(),
382 self.namechooser.get(),
383 self.markingchooser.resultvar))
384 self.accept.grid(row=9, column=3, pady=10, padx=10)
346 385
347 def get_input(self, categories, name):
386 def get_input(self, categories, name, marking):
348 387 self.data['name'] = name
349 388 self.data['categories'] = categories
389 self.data['marking'] = marking
350 390 # Close the gui after accepting the input to stop the mainloop
351 391 self.close_app()
352 392
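The new MarkingDropdown widget added in this hunk can also be exercised on its own; a minimal sketch, assuming the package layout shown above (packages/gui/category_chooser.py) is importable from the project root:

import tkinter
from packages.gui.category_chooser import MarkingDropdown

root = tkinter.Tk()
# The widget grids itself inside initialize(), so no extra layout calls are needed
dropdown = MarkingDropdown(master=root)
root.mainloop()

# After a selection, resultvar holds the marking ID ('14', '9' or '28');
# change_dropdown rebinds it from a tkinter.StringVar to a plain string.
print(dropdown.resultvar)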
File packages/item_upload.py changed (mode: 100644) (index 59433c1..3ba75b9)
... ... def itemUpload(flatfile, intern, stocklist, attributefile, folder, input_data):
132 132 if(row['parent_child'] == 'child'):
133 133 attributes = get_attributes(dataset=row, sets=color_size_sets)
134 134
135 if(row['parent_child'] == 'parent'):
136 item_flag = 21
137
138 135 try:
139 136 values = [
140 137 row['parent_sku'], row['item_sku'],
 
... ... def itemUpload(flatfile, intern, stocklist, attributefile, folder, input_data):
166 163 print('Error at the Values')
167 164 Data[row['item_sku']] = SortedDict(zip(column_names, values))
168 165 except KeyError as err:
169 print("Error at : 'if(row['parent_child'] == 'parent'):'")
166 print("Error inside parent_child == parent\nline:{0}err:{1}"
167 .format(sys.exc_info[2].tb_lineno, err))
170 168 return row['item_sku']
171 169
172 170 # open the intern number csv to get the item ID
 
... ... def get_properties(flatfile):
324 322 'width':0,
325 323 'height':0,
326 324 'weight':0,
327 '1'}
325 }
328 326
329 327 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
330 328 reader = csv.DictReader(item, delimiter=";")
 
... ... def get_properties(flatfile):
365 363 def get_attributes(dataset, sets):
366 364
367 365 output_string = ''
368 if(len(sets[dataset['parent_sku']]['color']) > 1):
369 output_string = 'color_name:' + dataset['color_name']
370 if(len(sets[dataset['parent_sku']]['size']) > 1):
371 if(not(output_string)):
372 output_string = 'size_name:' + dataset['size_name']
373 else:
374 output_string = output_string + ';size_name:' + dataset['size_name']
366 try:
367 if(len(sets[dataset['parent_sku']]['color']) > 1):
368 output_string = 'color_name:' + dataset['color_name']
369 except Exception as err:
370 print("Error @ adding color to string (get_attributes)\nerr:{0}"
371 .format(err))
372 try:
373 if(len(sets[dataset['parent_sku']]['size']) > 1):
374 if(not(output_string)):
375 output_string = 'size_name:' + dataset['size_name']
376 else:
377 output_string = output_string + ';size_name:' + dataset['size_name']
378 except Exception as err:
379 print("Error @ adding size to string\nerr:{0}"
380 .format(err))
375 381 return output_string
376 382
377 383 def find_similar_attr(flatfile):
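The reworked get_attributes only emits an attribute when it actually varies within a parent's set of children. A small illustration with made-up data, assuming sets maps each parent SKU to the collections of colors and sizes found under it (as the indexing in the function implies) and that the repository's packages directory is on the Python path:

from packages.item_upload import get_attributes

# Illustrative data only; the real sets are built elsewhere in item_upload.py
color_size_sets = {
    'PARENT-1': {'color': {'black', 'blue'}, 'size': {'M'}},
}
row = {'parent_sku': 'PARENT-1', 'color_name': 'black', 'size_name': 'M'}

# Two colors but a single size, so only the color attribute is emitted
print(get_attributes(dataset=row, sets=color_size_sets))  # -> 'color_name:black'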
File packages/stock_upload.py added (mode: 100644) (index 0000000..6f2268f)
1 from csv import DictReader, DictWriter
2 from os.path import isfile
3 try:
4 from sortedcontainers import SortedDict
5 except ImportError:
6 print("the sortedcontainers module is required to run this program.")
7 raise ImportError
8
9
10 def writeCSV(dataobject, name, columns):
11 '''Write Data into new CSV for Upload
12 OUTPUT
13 '''
14
15 output_path_number = 1
16 datatype = ".csv"
17 output_path = "Upload/" + name + "_upload_" + str(output_path_number) + datatype
18
19 while(isfile(output_path)):
20 output_path_number = int(output_path_number) + 1
21 output_path = "Upload/" + name + "_upload_" + str(output_path_number) + datatype
22
23 with open(output_path, mode='a') as item:
24 writer = DictWriter(item, delimiter=";", fieldnames=columns)
25 writer.writeheader()
26 for row in dataobject:
27 writer.writerow(dataobject[row])
28
29 if(isfile(output_path)):
30 print("Upload file successfully created under {0}".format(output_path))
31
32 return output_path
33
34
35 def stockUpload(flatfile, export, stocklist):
36
37 # The column header names
38 column_names = ['Barcode','LocationID','LocationName','Reordered','ReservedStock','Stock','WarehouseID','VariationID','VariationNo']
39
40 # create a Data Dictionary and fill it with the necessary values from the flatfile
41 Data = SortedDict()
42
43 with open(flatfile, mode='r') as item:
44 reader = DictReader(item, delimiter=";")
45 for row in reader:
46 if(row['external_product_id']):
47 values = [row['external_product_id'],0,'Standard-Lagerort','','','','104','',row['item_sku']]
48 Data[row['item_sku']] = SortedDict(zip(column_names, values))
49
50 with open(export, mode='r') as item:
51 reader = DictReader(item, delimiter=";")
52 for row in reader:
53 if(row['VariationNumber'] in [*Data]):
54 Data[row['VariationNumber']]['VariationID'] = row['VariationID']
55
56 with open(stocklist, mode='r') as item:
57 reader = DictReader(item, delimiter=";")
58 for row in reader:
59 if(row['MASTER'] and row['MASTER'] in [*Data]):
60 Data[row['MASTER']]['Stock'] = row['BADEL 26.12.16']
61
62 output_path = writeCSV(Data, 'stock', column_names)
63
64
65 def priceUpload(flatfile, export):
66 # The column header names
67 column_names = ['VariationID','IsNet','VariationPrice','SalesPriceID']
68
69 # create a Data Dictionary and fill it with the necessary values from the flatfile
70 Data = SortedDict()
71
72 with open(flatfile, mode='r') as item:
73 reader = DictReader(item, delimiter=";")
74 for row in reader:
75 if(row['external_product_id']):
76 values = ['',0,row['standard_price'],1]
77 Data[row['item_sku']] = SortedDict(zip(column_names, values))
78
79 with open(export, mode='r') as item:
80 reader = DictReader(item, delimiter=";")
81 for row in reader:
82 if(row['VariationNumber'] in [*Data]):
83 Data[row['VariationNumber']]['VariationID'] = row['VariationID']
84
85 output_path = writeCSV(Data, 'price', column_names)
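The new stock_upload module works on plain file paths and writes its results into an Upload/ directory relative to the working directory (writeCSV hard-codes the "Upload/" prefix, so the folder has to exist). A minimal driver sketch, with placeholder file names that are not repository defaults:

import os
from packages.stock_upload import stockUpload, priceUpload

# writeCSV prefixes every output with "Upload/", so create the folder first
os.makedirs("Upload", exist_ok=True)

# Placeholder paths: Amazon flatfile, Plentymarkets variation export and
# stock list, all expected as semicolon-delimited CSV files
flatfile = "amazon_flatfile.csv"
export = "variation_export.csv"
stocklist = "stocklist.csv"

stockUpload(flatfile, export, stocklist)  # writes Upload/stock_upload_<n>.csv
priceUpload(flatfile, export)             # writes Upload/price_upload_<n>.csv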
File packages/variation_upload.py added (mode: 100644) (index 0000000..8247872)
1 from csv import DictReader, DictWriter
2 from os.path import isfile
3 try:
4 from sortedcontainers import SortedDict
5 except ImportError:
6 print("the sortedcontainers module is required to run this program.")
7 raise ImportError
8
9
10 def writeCSV(dataobject, name, columns):
11 '''Write Data into new CSV for Upload
12 OUTPUT
13 '''
14
15 output_path_number = 1
16 datatype = ".csv"
17 output_path = "Upload/" + name + "_upload_" + \
18 str(output_path_number) + datatype
19
20 while(isfile(output_path)):
21 output_path_number = int(output_path_number) + 1
22 output_path = "Upload/" + name + "_upload_" + \
23 str(output_path_number) + datatype
24
25 with open(output_path, mode='a') as item:
26 writer = DictWriter(item, delimiter=";", fieldnames=columns)
27 writer.writeheader()
28 for row in dataobject:
29 writer.writerow(dataobject[row])
30
31 if(isfile(output_path)):
32 print("Upload file successfully created under {0}".format(output_path))
33
34 return output_path
35
36
37 def variationUpload(flatfile, intern_number):
38
39 # The column header names
40 names = ['ItemID', 'VariationID', 'VariationNumber', 'VariationName', 'Position', 'LengthMM', 'WidthMM', 'HeightMM',
41 'WeightG', 'VariationAttributes', 'PurchasePrice', 'MainWarehouse', 'Availability', 'AutoStockVisible']
42
43 # create a Data Dictionary and fill it with the necessary values from the flatfile
44 Data = SortedDict()
45
46 with open(flatfile, mode='r') as item:
47 reader = DictReader(item, delimiter=";")
48 for row in reader:
49 if(row['parent_child'] == 'parent'):
50 item_name = row['item_name']
51 if(row['parent_child'] == 'child'):
52 try:
53 if(row['package_height'] and row['package_length'] and row['package_width']):
54 row['package_height'] = int(row['package_height'])
55 row['package_length'] = int(row['package_length'])
56 row['package_width'] = int(row['package_width'])
57 except ValueError as err:
58 row['package_height'] = int(float(row['package_height']))
59 row['package_length'] = int(float(row['package_length']))
60 row['package_width'] = int(float(row['package_width']))
61 except ValueError as err:
62 print(err)
63 print(
64 "/nPlease copy the values for height, length, width and weight\nfrom the children to the parent variation in the flatfile.\n")
65 exit()
66
67 if(row['color_name']):
68 attributes = 'color_name:' + row['color_name']
69 if(row['size_name']):
70 attributes += ';size_name:' + row['size_name']
71 if(row['outer_material_type']):
72 attributes += ';material_name:' + \
73 row['outer_material_type']
74 if('pattern' in [*row] and row['pattern']):
75 attributes += ';pattern:' + row['pattern']
76 try:
77 values = ['', '', row['item_sku'], item_name, '', int(row['package_length']) * 10, int(row['package_width']) * 10, int(
78 row['package_height']) * 10, row['package_weight'], attributes, row['standard_price'], 'Badel', 'Y', 'Y']
79 except Exception as err:
80 print(err)
81 exit()
82 Data[row['item_sku']] = SortedDict(zip(names, values))
83
84 # open the intern numbers csv and fill in the remaining missing fields by using the item_sku as dict key
85 with open(intern_number, mode='r') as item:
86 reader = DictReader(item, delimiter=';')
87 for row in reader:
88 # check if the sku is within the keys of the Data Dictionary
89 if(row['amazon_sku'] in [*Data]):
90 Data[row['amazon_sku']]['ItemID'] = row['article_id']
91 if(not(row['position'] == 0)):
92 Data[row['amazon_sku']]['Position'] = row['position']
93
94 output_path = writeCSV(Data, 'variation', names)
95
96 return output_path
97
98
99 def setActive(flatfile, export):
100 # because of a regulation of the plentyMarkets system the active status has to be delivered as an extra upload
101 column_names = ['Active', 'ItemID', 'VariationID', 'VariationNumber']
102 Data = {}
103 # open the flatfile to get the sku names
104 with open(flatfile, mode='r') as item:
105 reader = DictReader(item, delimiter=';')
106
107 for row in reader:
108 values = ['Y', '', '', row['item_sku']]
109 Data[row['item_sku']] = dict(zip(column_names, values))
110
111 with open(export, mode='r') as item:
112 reader = DictReader(item, delimiter=';')
113 for row in reader:
114 if(row['VariationNumber'] in [*Data]):
115 Data[row['VariationNumber']]['ItemID'] = row['ItemID']
116 Data[row['VariationNumber']]['VariationID'] = row['VariationID']
117 output_path = writeCSV(Data, 'active', column_names)
118
119
120 def EANUpload(flatfile, export):
121 # open the flatfile get the ean for an sku and save it into a dictionary with columnheaders of the plentymarket dataformat
122 column_names = ['BarcodeID', 'BarcodeName', 'BarcodeType',
123 'Code', 'VariationID', 'VariationNumber']
124 Data = {}
125 with open(flatfile, mode='r') as item:
126 reader = DictReader(item, delimiter=";")
127
128 for row in reader:
129 values = ['3', 'UPC', 'UPC',
130 row['external_product_id'], '', row['item_sku']]
131 Data[row['item_sku']] = dict(zip(column_names, values))
132
133 # open the exported file to get the variation id
134 with open(export, mode='r') as item:
135 reader = DictReader(item, delimiter=";")
136
137 for row in reader:
138 if(row['VariationNumber'] in [*Data]):
139 Data[row['VariationNumber']]['VariationID'] = row['VariationID']
140
141 output_path = writeCSV(Data, 'EAN', column_names)
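The new variation_upload module follows the same pattern: variationUpload returns the path of the upload file it wrote, while setActive and EANUpload each build a separate upload from the flatfile plus a Plentymarkets export. A sketch with placeholder paths, again assuming the Upload/ folder exists:

from packages.variation_upload import variationUpload, setActive, EANUpload

# Placeholder paths, all semicolon-delimited CSV files
flatfile = "amazon_flatfile.csv"
intern_number = "intern_numbers.csv"
export = "variation_export.csv"

variation_file = variationUpload(flatfile, intern_number)
print("variation upload written to", variation_file)

# Active state and barcodes have to be delivered as separate uploads
setActive(flatfile, export)
EANUpload(flatfile, export)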
Hints:
Before your first commit, do not forget to set up your git environment:
git config --global user.name "your_name_here"
git config --global user.email "your@email_here"

Clone this repository using HTTP(S):
git clone https://rocketgit.com/user/initBasti/Amazon2PlentySync

Clone this repository using ssh (do not forget to upload a key first):
git clone ssh://rocketgit@ssh.rocketgit.com/user/initBasti/Amazon2PlentySync

Clone this repository using git:
git clone git://git.rocketgit.com/user/initBasti/Amazon2PlentySync

You are allowed to anonymously push to this repository.
This means that your pushed commits will automatically be transformed into a merge request:
... clone the repository ...
... make some changes and some commits ...
git push origin main