initBasti / Amazon2PlentySync (public) (License: GPLv3) (since 2019-01-27) (hash sha1)
Transfer your data from your Amazon Flatfile spreadsheet over to the Plentymarkets system. A how-to is included in the README.
List of commits:
Subject Hash Author Date (UTC)
Fixed a problem, that caused Data to not pass sorting; Fixed error handling with the product type; Updated category ids 9a62d369fb24bc80765cd19e31fb255398fb8ed5 Sebastian Fricke 2019-09-12 09:27:54
fixed a merge conflict bug e6b4d9613237009d980cdbfc7ec65c3383a3495a Sebastian Fricke 2019-08-16 11:31:02
current status 15.8 94db3a5c98c596b24f00624fa4b772b9fd830b03 Sebastian Fricke 2019-08-15 14:26:42
Added manual file choosing in case of empty config 2df178528d70be15bfb2e1c9058f69e128236622 Sebastian Fricke 2019-08-15 10:11:41
Added Markdown choosing, fixed various bugs 991ed44df370cf80fc2e2c51d7427d63e221888f Sebastian Fricke 2019-08-15 09:30:55
Changed the item upload format to fix errors in the sync, moved active upload to properties because it has to be done seperatly to the creation process 3b466364e3dcdf14b4cef5b8649ec9573c992324 Sebastian Fricke 2019-06-17 14:09:23
Removed the image upload from item upload and added a exportfile together with functions to get the variation id for the image upload, the image upload is now a single process 6349c4a7177345c25aa6d8ecd03740a75fa2520f Sebastian Fricke 2019-06-13 12:58:36
Updated the feature list to the current active list b5d1675bcb56c38a97c928d7800b6a29c2dea116 LagerBadel PC:Magdalena 2019-06-11 12:11:06
fixed a bug with the encoding of very large datasets were the 10000 letters were not enough, increased waiting time but removed some mistakes that way 8f431d6a68fb21699950b1ca48a1592976789c74 LagerBadel PC:Magdalena 2019-06-06 13:41:52
small debugging improvements in writeCSV and missing colors 88db9e1362a4178805671f443554a7f0d3db9e69 LagerBadel PC:Magdalena 2019-06-06 11:52:31
Major Update added a gui category chooser, attribute id connection and updated the whole Script to work on Elastic Sync a8a6156e26e2783a695c87bda35aba725fd77023 Sebastian Fricke 2019-06-05 12:45:29
fixed a bug with the encoding function 0c5b9dd0414037743bf39fdc3420d55035bffa61 Sebastian Fricke 2019-05-14 15:10:17
Major Update added a config file for better useability and added a gui to enter the category and the name of the product further work towards the rework from dynamic import to elastic sync e4356af15a4b8f7393f85bd51c16b330bc3555af Sebastian Fricke 2019-05-14 14:43:03
Changed the price upload to identify items that are not in plentymarkets and added a webshop price 4ab9bcd988f9eb26647748a8f80f25c8c5b7f2e2 Sebastian Fricke 2019-05-03 09:18:35
added Webshop to marketconnections 84f93694fe0c67972ad951649d9f6f0d577d3e29 Sebastian Fricke 2019-05-01 14:12:00
Added the modelnumber feature and removed the creation of empty features ea98391f2dbdf8fb8e601153b4f6ebfca504929c Sebastian Fricke 2019-05-01 12:31:19
Changed the feature upload into a loop for more overview 0a1bee82659a576c6fb4f2641aa3990d8d686b3c Sebastian Fricke 2019-05-01 10:04:20
Added a few new instructions to the Instructions file b4878c59958f89a02937de1dfc7aabbd23e71061 LagerBadel PC:Magdalena 2019-04-18 09:41:10
Made some fields not required but added Warnings for the log file, additionally some new amazon features were added. 6392338b7e9968be3bc4da9031144c3cc2cfae48 Sebastian Fricke 2019-04-18 09:37:51
Added an error log system and improved overall workflow 2e3763e436899466db9f03f70ea926869afd3219 Sebastian Fricke 2019-04-18 08:12:27
Commit 9a62d369fb24bc80765cd19e31fb255398fb8ed5 - Fixed a problem, that caused Data to not pass sorting; Fixed error handling with the product type; Updated category ids
Author: Sebastian Fricke
Author date (UTC): 2019-09-12 09:27
Committer name: Sebastian Fricke
Committer date (UTC): 2019-09-12 09:27
Parent(s): e6b4d9613237009d980cdbfc7ec65c3383a3495a
Signing key:
Tree: 2c58871470c6dd0fda704291a1f74da0b3ee988a
File                               Lines added   Lines deleted
packages/amazon_data_upload.py     30            18
packages/barcode.py                21            43
packages/gui/category_chooser.py    1             6
packages/image_upload.py            2             2
packages/item_upload.py            48            40
packages/price_upload.py            5             4
product_import.py                  31            16
File packages/amazon_data_upload.py changed (mode: 100644) (index e7ed4aa..d394c3e)
@@ ... @@
 import csv
 from os.path import isfile
-import sys
 from packages import barcode
 try:
     from sortedcontainers import SortedDict
@@ ... @@ except ImportError:
 
 def amazonSkuUpload(flatfile):
 
-    column_names = [ 'MarketID', 'MarketAccountID',
-                     'SKU', 'ParentSKU' ]
+    column_names = ['MarketID', 'MarketAccountID',
+                    'SKU', 'ParentSKU']
 
     # Define constant values
-    marketid = 104 # Amazon FBA Germany
-    accountid = 0 # bkkasia.germany@gmail.com
+    marketid = 104  # Amazon FBA Germany
+    accountid = 0  # bkkasia.germany@gmail.com
 
     Data = SortedDict()
 
     with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
         reader = csv.DictReader(item, delimiter=';')
         for row in reader:
-            values = [ marketid, accountid,
-                       row['item_sku'], row['parent_sku'] ]
+            values = [marketid, accountid,
+                      row['item_sku'], row['parent_sku']]
             Data[row['item_sku']] = SortedDict(zip(column_names, values))
 
     return Data
 
@@ ... @@ def amazonSkuUpload(flatfile):
 
 def amazonDataUpload(flatfile):
 
-    column_names = [ 'ItemAmazonProductType', 'ItemAmazonFBA', 'ItemShippingWithAmazonFBA' ]
+    column_names = ['ItemAmazonProductType',
+                    'ItemAmazonFBA',
+                    'ItemShippingWithAmazonFBA']
 
     Data = SortedDict()
 
@@ ... @@ def amazonDataUpload(flatfile):
         reader = csv.DictReader(item, delimiter=";")
 
         type_id = {
-            'accessory':28,
-            'shirt':13,
-            'pants':15,
-            'dress':18,
-            'outerwear':21,
-            'bags':27
+            'accessory': 28,
+            'shirt': 13,
+            'pants': 15,
+            'dress': 18,
+            'outerwear': 21,
+            'bags': 27
         }
 
         values = ''
@@ ... @@ def amazonDataUpload(flatfile):
             for key in type_id:
                 if(row['feed_product_type'].lower() == key):
                     product_type = type_id[key]
-            if(not(product_type)):
+                else:
+                    print("ERROR @ product type in AmazonData: {0} not in {1}"
+                          .format(row['feed_product_type'],
+                                  ",".join([*type_id])))
+
+            if(not(product_type) and not(row['feed_product_type'])):
                 raise barcode.EmptyFieldWarning('product_type')
 
             values = [product_type, '1', '1']
@@ ... @@ def amazonDataUpload(flatfile):
 
     return Data
 
-def featureUpload(flatfile, features, folder):
+
+def featureUpload(flatfile, features, folder, filename):
 
     column_names = [
         'Variation.number', 'VariationEigenschaften.id',
-        'VariationEigenschaften.cast', 'VariationEigenschaften.linked',
+        'VariationEigenschaften.cast',
+        'VariationEigenschaften.linked',
         'VariationEigenschaften.value'
     ]
 
@@ ... @@ def featureUpload(flatfile, features, folder):
             print("The feature:\t{0}\twas not found, in the flatfile!\n".format(feature))
 
     if(Data):
-        barcode.writeCSV(dataobject=Data, name="features".upper(), columns=column_names, upload_path=folder)
+        barcode.writeCSV(dataobject=Data,
+                         name="features".upper(),
+                         columns=column_names,
+                         upload_path=folder,
+                         item=filename)
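The reworked product-type handling maps `feed_product_type` onto Plentymarkets type ids and now reports unknown types instead of failing silently. A standalone sketch of that lookup (the function name `resolve_product_type` is illustrative, not from the repository; the ids are taken from the diff above):

```python
# Plentymarkets type ids as listed in the diff above.
TYPE_ID = {
    'accessory': 28, 'shirt': 13, 'pants': 15,
    'dress': 18, 'outerwear': 21, 'bags': 27
}


def resolve_product_type(feed_product_type):
    """Map an Amazon feed product type to a Plentymarkets type id.

    Prints a warning and returns None for unknown types; an empty
    field is treated as a hard error by the caller in the real code.
    """
    key = feed_product_type.lower()
    if key in TYPE_ID:
        return TYPE_ID[key]
    print("ERROR @ product type: {0} not in {1}"
          .format(feed_product_type, ",".join(TYPE_ID)))
    return None
```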
File packages/barcode.py changed (mode: 100644) (index 1b90556..1b9c047)
@@ ... @@
 import csv
-import re
 from os.path import isfile
 import sys
-from tkinter.filedialog import askdirectory
-import os
-try:
-    from sortedcontainers import SortedDict
-except ImportError:
-    print("the sortedcontainers module is required to run this program.")
-    raise ImportError
+
 
 class EmptyFieldWarning(Exception):
     def __init__(self, errorargs):
-        Exception.__init__(self, "Following field/s are empty {0}".format(errorargs))
+        Exception.__init__(self, "Following field/s are empty {0}"
+                           .format(errorargs))
         self.errorargs = errorargs
 
-def writeCSV(dataobject, name, columns, upload_path):
-    '''Write Data into new CSV for Upload
-        OUTPUT
-    '''
-    '''
-    uploadpath = os.getcwd() + '/Upload'
-    if not os.path.exists(uploadpath):
-        print("=#="*10 + '\n')
-        printf("Please choose a folder for the Upload files\n")
-        print("=#="*10 + '\n')
-        uploadpath = askdirectory(title="Choose a folder for the Upload files!")
-    '''
-
-    output_path_number = 1
+
+def writeCSV(dataobject, name, columns, upload_path, item):
     datatype = ".csv"
-    output_name = "/" + name + "_upload_" + str(output_path_number) + datatype
+    output_name = "/" + name + "_" + item + "_" + datatype
     output_path = upload_path + output_name
 
-    while(isfile(output_path)):
-        output_path_number = int(output_path_number) + 1
-        output_name = "/" + name + "_upload_" + str(output_path_number) + datatype
-        output_path = upload_path + output_name
-
-    with open(output_path, mode='a') as item:
-        writer = csv.DictWriter(item, delimiter=";", fieldnames=columns, lineterminator='\n')
+    with open(output_path, mode='w') as item:
+        writer = csv.DictWriter(item, delimiter=";",
+                                fieldnames=columns, lineterminator='\n')
         writer.writeheader()
         try:
             for row in dataobject:
                 writer.writerow(dataobject[row])
         except Exception as err:
-            print("ERROR @ writeCSV : line : {0}, Error: {1}".format(sys.exc_info()[2].tb_lineno, err))
+            print("ERROR @ writeCSV : line : {0}, Error: {1}"
+                  .format(sys.exc_info()[2].tb_lineno, err))
             print("Press ENTER to continue..")
             input()
             sys.exit()
@@ ... @@ def writeCSV(dataobject, name, columns, upload_path):
 
     return output_path
 
+
 def barcode_Upload(flatfile, stocklist):
-    # open the flatfile get the ean for an sku and save it into a dictionary with
-    # columnheaders of the plentymarket dataformat
+    # open the flatfile get the ean for an sku and save it into a
+    # dictionary with columnheaders of the plentymarket dataformat
 
-    column_names = [ 'EAN_Barcode', 'FNSKU_Barcode', 'SKU',
-                     'ASIN-countrycode', 'ASIN-type', 'ASIN-value' ]
+    column_names = ['EAN_Barcode', 'FNSKU_Barcode', 'SKU',
+                    'ASIN-countrycode', 'ASIN-type', 'ASIN-value']
 
     Data = {}
     with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
@@ ... @@ def barcode_Upload(flatfile, stocklist):
 
         for row in reader:
             if(row['parent_child'] == 'child'):
-                # Set code to an empty String if the barcode type matches EAN set it to to
-                # the external_product_id
+                # Set code to an empty String if the barcode type matches
+                # EAN set it to to the external_product_id
                 code = ''
                 code = row['external_product_id']
 
                 if(not(code)):
                     raise EmptyFieldWarning('barcode(EAN)')
-                values = [ code, '', row['item_sku'],
+                values = [code, '', row['item_sku'],
                           '1', 'ASIN', '']
 
                 Data[row['item_sku']] = dict(zip(column_names, values))
 
-
     with open(stocklist['path'], mode='r', encoding=stocklist['encoding']) as item:
         reader = csv.DictReader(item, delimiter=";")
 
         for row in reader:
             if(row['MASTER'] in [*Data]):
-                # Set code to an empty String if the barcode type matches FNSKU set it to to
-                # the external_product_id
+                # Set code to an empty String if the barcode type matches
+                # FNSKU set it to to the external_product_id
                 code = ''
                 code = row['fnsku']
 
@@ ... @@ def barcode_Upload(flatfile, stocklist):
                 Data[row['MASTER']]['ASIN-value'] = row['asin']
 
     return Data
-
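The rewritten `writeCSV` drops the incrementing `_upload_<n>` counter: the output name now embeds the user-chosen item name, and the file is opened in `'w'` mode, so a rerun overwrites the previous upload file instead of appending to it. A minimal sketch of the new naming behavior (the standalone name `write_csv` is illustrative):

```python
import csv


def write_csv(dataobject, name, columns, upload_path, item):
    # Naming scheme from this commit: NAME_<item>_.csv, written in
    # 'w' mode so reruns overwrite instead of appending duplicates.
    output_path = upload_path + "/" + name + "_" + item + "_" + ".csv"
    with open(output_path, mode='w') as out:
        writer = csv.DictWriter(out, delimiter=";",
                                fieldnames=columns, lineterminator='\n')
        writer.writeheader()
        for row in dataobject:
            writer.writerow(dataobject[row])
    return output_path
```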
File packages/gui/category_chooser.py changed (mode: 100644) (index d69be15..c5be3b4)
@@ ... @@ class DropdownChooser(tkinter.Frame):
             'Women.Hoodie' :'68',
             'Women.Sarong-O' :'69',
             'Women.Jacken' :'84',
-            'Accessoires.Taschen':'58',
-            'Accessoires.Patch':'76',
-            'Accessoires.Backdrop':'77',
-            'Accessoires.Kissen':'78',
-            'Accessoires.Suncatcher':'79',
-            'Accessoires.Halstuch':'90'
+            'Unisex.Bags' : '108'
         }
 
         self.activities = {
File packages/image_upload.py changed (mode: 100644) (index c416081..9f18cd2)
@@ ... @@ def getColorAttributeID(attributefile, product):
     return attributeid
 
 
-def imageUpload(flatfile, attributefile, exportfile, uploadfolder):
+def imageUpload(flatfile, attributefile, exportfile, uploadfolder, filename):
 
     try:
         Data = SortedDict()
@@ ... @@ def imageUpload(flatfile, attributefile, exportfile, uploadfolder):
     except Exception as err:
         print("Error @ imageupload line: {0} : {1}".format(sys.exc_info()[2].tb_lineno, err))
 
-    barcode.writeCSV(dataobject=Data, name='Image_', columns=column_names, upload_path=uploadfolder)
+    barcode.writeCSV(dataobject=Data, name='Image_', columns=column_names, upload_path=uploadfolder, item=filename)
     return Data
File packages/item_upload.py changed (mode: 100644) (index 1c70204..1d209c8)
@@ ... @@ import sys
 import re
 import chardet
 import collections
-from os.path import isfile
 from sys import exit
-from packages import barcode, amazon_data_upload, price_upload, image_upload
+from packages import barcode, amazon_data_upload, price_upload
+
 
 class WrongEncodingException(Exception):
     pass
 
-def itemUpload(flatfile, intern, stocklist, attributefile, folder, input_data):
+
+def itemUpload(flatfile, intern, stocklist, attributefile, folder, input_data, filename):
     # The column headers for the output file as expected from the
     # plentymarkets dataformat
     column_names = ['Parent-SKU', 'SKU',
+                    'isParent',
                     'Length', 'Width',
                     'Height', 'Weight',
                     'Name', 'MainWarehouse',
@@ ... @@ def itemUpload(flatfile, intern, stocklist, attributefile, folder, input_data):
                     'Item-Flag-1'
                     ]
 
-
     # Unpack File and scrap data
     # INPUT
     # --------------------------------------------------------------
@@ ... @@ def itemUpload(flatfile, intern, stocklist, attributefile, folder, input_data):
     sorted_Data = collections.OrderedDict()
     package_properties = {}
     barcode_data = {}
+    isParent = False
 
     # Get sets of all colors and sizes for each parent
     # to find if there are some with only one attribute value for all childs
@@ ... @@ def itemUpload(flatfile, intern, stocklist, attributefile, folder, input_data):
             # SET KEYWORDS
             keywords = ''
             if(row['generic_keywords']):
-                keywords = row[ 'generic_keywords' ]
+                keywords = row['generic_keywords']
 
             if(not(keywords)):
                 try:
@@ ... @@ def itemUpload(flatfile, intern, stocklist, attributefile, folder, input_data):
                     print("Generic Keywords are empty!")
 
             # SET ATTRIBUTES
-            attributes = ''
-            if(row['parent_child'] == 'parent'):
-                group_parent = row['item_sku']
-                position = 0
-            if(row['parent_child'] == 'child'):
-                attributes = get_attributes(dataset=row, sets=color_size_sets)
-                if(group_parent and row['parent_sku'] == group_parent):
-                    position += 1
+            try:
+                attributes = ''
+                if(row['parent_child'] == 'parent'):
+                    isParent = True
+                    group_parent = row['item_sku']
+                    position = 0
+                if(row['parent_child'] == 'child'):
+                    isParent = False
+                    attributes = get_attributes(dataset=row,
+                                                sets=color_size_sets)
+                    if(group_parent and row['parent_sku'] == group_parent):
+                        position += 1
+            except Exception as err:
+                print("Error @ attribute setting, line:{0}, err:{1}"
+                      .format(sys.exc_info()[2].tb_lineno, err))
             try:
                 values = [
                     row['parent_sku'], row['item_sku'],
-                    package_properties[ 'length' ] * 10, package_properties[ 'width' ] * 10,
-                    package_properties[ 'height' ] * 10, package_properties[ 'weight' ],
+                    isParent,
+                    package_properties['length'] * 10,
+                    package_properties['width'] * 10,
+                    package_properties['height'] * 10,
+                    package_properties['weight'],
                     row['item_name'], '104',
                     attributes, position,
                     '62', keywords,
@@ ... @@ def itemUpload(flatfile, intern, stocklist, attributefile, folder, input_data):
             except KeyError:
                 raise KeyError
                 print('Error at the Values')
+            except Exception as err:
+                print("Error @ setting values: line:{0}, err:{1}"
+                      .format(sys.exc_info()[2].tb_lineno, err))
             Data[row['item_sku']] = collections.OrderedDict(zip(column_names, values))
         except KeyError as err:
-            print("Error inside parent_child == parent\nline:{0}err:{1}"
-                  .format(sys.exc_info[2].tb_lineno, err))
+            print("Error reading file\nline:{0}err:{1}"
+                  .format(sys.exc_info()[2].tb_lineno, err))
             return row['item_sku']
 
     # open the intern number csv to get the item ID
@@ ... @@ def itemUpload(flatfile, intern, stocklist, attributefile, folder, input_data):
         # --------------------------------------------------------------
 
         # Sort the dictionary to make sure that the parents are the first variant of each item
-        print("Sort Products")
         sorted_Data = sort_Products(Data)
 
-        barcode.writeCSV(sorted_Data, "item", column_names, folder)
+        for index, row in enumerate( sorted_Data ):
+            print("DEBUG: sorted_Data index: {0} = {1}"
+                  .format(index, row))
+
+        barcode.writeCSV(sorted_Data, "item", column_names, folder, filename)
     except UnicodeDecodeError as err:
         print("Decode Error at line: {0}, err: {1}".format(sys.exc_info()[2].tb_lineno, err))
         print("press ENTER to continue..")
         input()
         sys.exit()
 
-def itemPropertyUpload(flatfile, folder):
+def itemPropertyUpload(flatfile, folder, filename):
 
     with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
         reader = csv.DictReader(item, delimiter=';', lineterminator='\n')
@@ ... @@ def itemPropertyUpload(flatfile, folder):
                 print("In property Upload: One of the values wasn't found : ", err)
 
             # Check for empty values
-            #for index, item in enumerate( values ):
-            #    if(not(item)):
-            #        print(row['item_sku'], " has no value on ", property_names[index], " !")
-
             properties[row['item_sku']] = dict(zip(property_names, values))
 
     column_names = ['SKU', 'ID-property', 'Value', 'Lang', 'Active']
@@ ... @@ def itemPropertyUpload(flatfile, folder):
 
             Data[row + prop] = dict(zip(column_names, values))
 
-    barcode.writeCSV(Data, "Item_Merkmale", column_names, folder)
+
+    barcode.writeCSV(Data, "Item_Merkmale", column_names, folder, filename)
 
 def get_properties(flatfile):
 
@@ ... @@ def get_properties(flatfile):
                     properties[ 'length' ] = int(float(row['package_length']))
                     properties[ 'width' ] = int(float(row['package_width']))
                     properties[ 'weight' ] = int(float(row['package_weight']))
+                except Exception as err:
+                    print("Error @ setting values: line:{0}, err:{1}"
+                          .format(sys.exc_info()[2].tb_lineno, err))
 
             except ValueError as err:
                 print(err)
@@ ... @@ def get_properties(flatfile):
                       "and weight\nfrom the children to the parent",
                       "variation in the flatfile.\n")
                 exit()
+            except Exception as err:
+                print("Error @ setting values: line:{0}, err:{1}"
+                      .format(sys.exc_info()[2].tb_lineno, err))
 
     return properties
 
@@ ... @@ def find_similar_attr(flatfile):
                 if(row['parent_sku'] == line):
                     Data[row['parent_sku']]['color'].add(row['color_name'])
                     Data[row['parent_sku']]['size'].add(row['size_name'])
-
     return Data
 
 def sort_Products(dataset):
@@ ... @@ def sort_Products(dataset):
 
     # Go through the items of the dataset
     for item in item_list:
-        # When there is no entry in 'Parent-SKU' the item has to be a parent
         if(not(item[0] in [* new_dict ])):
-            if(not(item[1]['Parent-SKU'])):
+            if(item[1]['isParent']):
                 # add the parent to the new dict
                 new_dict[item[0]] = item[1]
                 # get all the children and update the itemlist without them
-                child_dict = search_child(item_list, item[0])
+                child_dict = search_child(item_list=item_list, parent=item[0])
                 # add each child to the new dict after the parent
                 for child in child_dict:
                     new_dict[child] = child_dict[child]
@@ ... @@ def get_variationid(exportfile, sku):
 
     return variationid
 
-def get_attributes(dataset, sets):
-
-    output_string = ''
-    if(len(sets[dataset['parent_sku']]['color']) > 1):
-        output_string = 'color_name:' + dataset['color_name']
-    if(len(sets[dataset['parent_sku']]['size']) > 1):
-        if(not(output_string)):
-            output_string = 'size_name:' + dataset['size_name']
-        else:
-            output_string = output_string + ';size_name:' + dataset['size_name']
-    return output_string
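`sort_Products` now relies on the explicit `isParent` column added to the item upload instead of inferring parenthood from an empty `Parent-SKU`. A simplified sketch of the parent-first ordering (it omits the `search_child` bookkeeping of the real function; the name `sort_products` is illustrative):

```python
from collections import OrderedDict


def sort_products(dataset):
    # Place every parent first, immediately followed by its children,
    # keyed on the explicit 'isParent' flag introduced in this commit.
    new_dict = OrderedDict()
    for sku, row in dataset.items():
        if row['isParent']:
            new_dict[sku] = row
            for child_sku, child in dataset.items():
                if not child['isParent'] and child['Parent-SKU'] == sku:
                    new_dict[child_sku] = child
    return new_dict
```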
File packages/price_upload.py changed (mode: 100644) (index 1e2dba6..481553f)
@@ ... @@ except ImportError:
 
 
 def priceUpload(flatfile):
-    # The column header names
+    # The column header names for the output
    column_names = ['price', 'ebay', 'amazon', 'webshop', 'etsy']
 
     prices = {
@@ ... @@ def priceUpload(flatfile):
                 for scndrow in reader:
                     if(row['parent_child'] == 'parent'):
                         if(scndrow['parent_child'] == 'child' and scndrow['standard_price'] and row['item_sku'] == scndrow['parent_sku']):
+                            print("parent without price add:{0} from:{1} to:{2}"
+                                  .format(scndrow['standard_price'],
+                                          scndrow['item_sku'],
+                                          row['item_sku']))
                             standard_price = scndrow['standard_price']
-                            print("reach standard_price set parent standard_price : {0}".format(standard_price))
                             break
                     elif(row['parent_child'] == 'child'):
                         if(scndrow['parent_child'] == 'child' and scndrow['standard_price'] and row['parent_sku'] == scndrow['parent_sku']):
-                            print("reach standard_price set child")
                             standard_price = scndrow['standard_price']
                             break
 
@@ ... @@ def priceUpload(flatfile):
             values = [prices['price']['value'], prices['ebay']['value'],
                       prices['amazon']['value'], prices['webshop']['value'],
                       prices['etsy']['value']]
-
             Data[row['item_sku']] = SortedDict(zip(column_names, values))
 
     return Data
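The price upload's parent-price fallback scans the file a second time and copies `standard_price` from the first priced child, printing a diagnostic line as it does so. A compact sketch of that rule (the helper name `parent_price` is illustrative; row dicts use the flatfile column names):

```python
def parent_price(parent_row, all_rows):
    # A parent without its own price inherits standard_price from
    # the first priced child that points at it (as in the diff above).
    for scndrow in all_rows:
        if (scndrow['parent_child'] == 'child'
                and scndrow['standard_price']
                and parent_row['item_sku'] == scndrow['parent_sku']):
            print("parent without price add:{0} from:{1} to:{2}"
                  .format(scndrow['standard_price'],
                          scndrow['item_sku'],
                          parent_row['item_sku']))
            return scndrow['standard_price']
    return ''
```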
File product_import.py changed (mode: 100644) (index 6bdaf1b..a275f02)
@@ ... @@ def main():
     }
     # Check if the os is Linux, in that case the initial directory is Documents
     # Unless Documents is not available in which case it is ~
-    initial_directory = '../'
-
     if(platform.system() == 'Linux'):
         if(os.path.exists(path='/home/' + os.getlogin() + '/Documents/')):
             initial_directory = '/home/' + os.getlogin() + '/Documents/'
@@ ... @@ def main():
     # END GUI
 
     user_data = cchooser.data
+    specific_name = user_data['name'].strip(' ').strip("'").strip("\"").strip("_").strip("\n").lower()
     # Writing the changes into the config for the next start of the script
     if(cchooser.newpath['upload-path'] and cchooser.newpath['attribute-path']):
         config_update = {'row1':{ 'title': 'upload_folder', 'value': cchooser.newpath['upload-path'] },
@@ ... @@ def main():
         step += 1
         try:
             print("\nItem Upload\n")
-            itemUpload(sheet, intern_number, stocklist, attributefile, upload_folder, user_data)
+            itemUpload(flatfile=sheet,
+                       intern=intern_number,
+                       stocklist=stocklist,
+                       attributefile=attributefile,
+                       folder=upload_folder,
+                       input_data=user_data,
+                       filename=specific_name)
         except WrongEncodingException:
             wrongEncodingLog(log_path=log_folder, step_number=step, step_desc=step_name[step], file_name="flatfile")
         except KeyError as kexc:
             keyErrorLog(log_path=log_folder, step_number=step, step_desc=step_name[step], key_name=kexc, file_name=ntpath.basename(sheet))
         except OSError as fexc:
             fileNotFoundLog(log_path=log_folder, step_number=step, step_desc=step_name[step], file_name="intern_numbers")
-        except TypeError:
-            fileNotFoundLog(log_path=log_folder, step_number=step, step_desc=step_name[step], file_name="flatfile")
+        #except TypeError as err:
+            #print("TypeError: {0}sys.exc_info: {1}".format( err, sys.exc_info() ))
+            #fileNotFoundLog(log_path=log_folder, step_number=step, step_desc=step_name[step], file_name="flatfile")
         except UnboundLocalError as uexc:
             unboundLocalLog(log_path=log_folder, step_number=step, step_desc=step_name[step], filename=ntpath.basename(sheet), variable_name=uexc.args)
         except EmptyFieldWarning as eexc:
             emptyFieldWarningLog(log_path=log_folder, step_number=step, step_desc=step_name[step], field_name=eexc.errorargs, file_name=ntpath.basename(sheet))
-        except Exception as exc:
-            print("Item Upload failed!\n")
-            if(exc == 'item_sku'):
-                print("It is very likely that you don't have the proper headers, use the english ones!\n")
-            e = sys.exc_info()
-            print("Error @ FILE: {0}, LINE: {1}\n".format( e[2].tb_frame.f_code.co_filename, e[2].tb_lineno ))
-            for element in e:
-                print(element)
+        #except Exception as exc:
+        #    print("Item Upload failed!\n")
+        #    if(exc == 'item_sku'):
+        #        print("It is very likely that you don't have the proper headers, use the english ones!\n")
+        #    e = sys.exc_info()
+        #    print("Error @ FILE: {0}, LINE: {1}\n".format( e[2].tb_frame.f_code.co_filename, e[2].tb_lineno ))
+        #    for element in e:
+        #        print(element)
 
         try:
             print("Feature Upload")
             step += 1
-            featureUpload(flatfile=sheet, features=features, folder=upload_folder)
+            featureUpload(flatfile=sheet,
+                          features=features,
+                          folder=upload_folder,
+                          filename=specific_name)
         except KeyError as kexc:
             keyErrorLog(log_path=log_folder, step_number=step, step_desc=step_name[step], key_name=kexc, file_name=ntpath.basename(sheet))
         except UnboundLocalError as uexc:
@@ ... @@ def main():
         try:
             print("Property Upload")
             step += 1
-            itemPropertyUpload(flatfile=sheet, folder=upload_folder)
+            itemPropertyUpload(flatfile=sheet,
+                               folder=upload_folder,
+                               filename=specific_name)
         except KeyError as kexc:
             keyErrorLog(log_path=log_folder, step_number=step, step_desc=step_name[step], key_name=kexc, file_name=ntpath.basename(sheet))
         except UnboundLocalError as uexc:
@@ ... @@ def main():
             plenty_export = check_encoding(plenty_export)
 
         step += 1
-        imageUpload(flatfile=sheet, attributefile=attributefile, exportfile=plenty_export, uploadfolder=upload_folder)
+        imageUpload(flatfile=sheet,
+                    attributefile=attributefile,
+                    exportfile=plenty_export,
+                    uploadfolder=upload_folder,
+                    filename=specific_name)
         del fexc
     # A stop in the script flow to interrupt a console window from closing itself
     print('press ENTER to close the script...')
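The new `specific_name` line chains five single-character-set `.strip()` calls. A hypothetical equivalent (the name `make_specific_name` is illustrative, not from the repository) passes the full character set to one `strip()` call; this also avoids the order sensitivity of the chained form, where e.g. a trailing quote hidden behind a newline survives stripping:

```python
def make_specific_name(raw):
    # Order-independent equivalent of the chained .strip() calls in
    # product_import.py: strip spaces, quotes, underscores, and
    # newlines from both ends in a single pass, then lowercase.
    return raw.strip(" '\"_\n").lower()
```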