initBasti / Amazon2PlentySync (public) (License: GPLv3) (since 2019-01-27) (hash sha1)
Transfer your data from your Amazon Flatfile spreadsheet over to the Plentymarkets system. A how-to is included in the README.
List of commits:
Subject Hash Author Date (UTC)
Major Update added a gui category chooser, attribute id connection and updated the whole Script to work on Elastic Sync a8a6156e26e2783a695c87bda35aba725fd77023 Sebastian Fricke 2019-06-05 12:45:29
fixed a bug with the encoding function 0c5b9dd0414037743bf39fdc3420d55035bffa61 Sebastian Fricke 2019-05-14 15:10:17
Major Update added a config file for better useability and added a gui to enter the category and the name of the product further work towards the rework from dynamic import to elastic sync e4356af15a4b8f7393f85bd51c16b330bc3555af Sebastian Fricke 2019-05-14 14:43:03
Changed the price upload to identify items that are not in plentymarkets and added a webshop price 4ab9bcd988f9eb26647748a8f80f25c8c5b7f2e2 Sebastian Fricke 2019-05-03 09:18:35
added Webshop to marketconnections 84f93694fe0c67972ad951649d9f6f0d577d3e29 Sebastian Fricke 2019-05-01 14:12:00
Added the modelnumber feature and removed the creation of empty features ea98391f2dbdf8fb8e601153b4f6ebfca504929c Sebastian Fricke 2019-05-01 12:31:19
Changed the feature upload into a loop for more overview 0a1bee82659a576c6fb4f2641aa3990d8d686b3c Sebastian Fricke 2019-05-01 10:04:20
Added a few new instructions to the Instructions file b4878c59958f89a02937de1dfc7aabbd23e71061 LagerBadel PC:Magdalena 2019-04-18 09:41:10
Made some fields not required but added Warnings for the log file, additionally some new amazon features were added. 6392338b7e9968be3bc4da9031144c3cc2cfae48 Sebastian Fricke 2019-04-18 09:37:51
Added an error log system and improved overall workflow 2e3763e436899466db9f03f70ea926869afd3219 Sebastian Fricke 2019-04-18 08:12:27
Added additional feature uploads 528cad4899d3e3adca5098c1a0ce92c2a6b8a853 Sebastian Fricke 2019-04-16 10:25:49
Added an optimization for the initial directory for Linux 58b340605cba0603520ada8a184cc9fba5f8c3b8 Sebastian Fricke 2019-04-16 10:22:18
Fixed a typo in the build script f7943d8b2c33b89b083380902f1b1281366a12b2 Sebastian Fricke 2019-04-16 08:13:51
Added a build script for Linux + removed the finished executables 8fcf82d5de859895d29a7f355c0d49700beb4e38 Sebastian Fricke 2019-04-16 08:10:13
Changed the EAN type from UPC to GTIN_13 which is the correct one. ea74c1d8c001ae6895f07bbecbcb9a0898400b95 Sebastian Fricke 2019-04-15 13:04:54
fixed a bug with item_name + changed the item_name assignment to include the variation name instead of the parent name 7dedb2bb9afac7d5625ccbf9c05f6ff4b1b1e5e1 LagerBadel PC:Magdalena 2019-04-15 12:32:33
Added usage instructions in english and german language. e2f291e2a00ac9283ab9d843e652d7b77fa6bbaf Sebastian Fricke 2019-04-15 09:59:36
Added usage instructions in english and german language. 30646f203ae8847cfa4971cb62187dca8406b8d7 Sebastian Fricke 2019-04-15 09:58:26
Fixed small compilation mistakes concerning positional arguments dc011ec52cf578e2910edde1aeacb893bb2e57f9 Sebastian Fricke 2019-04-15 07:16:14
Fixed a problem with the Upload folder when the executable is within its Folder outside of the root 6ca74a5bbbf13036405c654225de2540cddf2ed0 Sebastian Fricke 2019-04-15 07:02:13
Commit a8a6156e26e2783a695c87bda35aba725fd77023 - Major Update added a gui category chooser, attribute id connection and updated the whole Script to work on Elastic Sync
Author: Sebastian Fricke
Author date (UTC): 2019-06-05 12:45
Committer name: Sebastian Fricke
Committer date (UTC): 2019-06-05 12:45
Parent(s): 0c5b9dd0414037743bf39fdc3420d55035bffa61
Signing key:
Tree: 84dad9e73c076741509b35954927f38ef5efc58a
File Lines added Lines deleted
packages/amazon_data_upload.py 38 71
packages/barcode.py 93 0
packages/color.py 79 0
packages/config.py 12 5
packages/gui/__pycache__/category_chooser.cpython-37.pyc 0 0
packages/gui/category_chooser.py 239 17
packages/image_upload.py 86 128
packages/item_upload.py 255 105
packages/log_files.py 3 2
packages/price_upload.py 62 0
packages/stock_upload.py 0 114
packages/variation_upload.py 0 266
product_import.py 95 160
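Note: every function touched by this commit receives its input files as a small dict with 'path' and 'encoding' keys (see check_encoding in packages/item_upload.py further down). A minimal sketch of preparing such dicts with chardet; the file names and the upload path are only examples:

import chardet

def make_file_dict(path):
    # Detect the encoding from the raw bytes, mirroring item_upload.check_encoding,
    # and return the {'path': ..., 'encoding': ...} dict the upload functions expect.
    with open(path, mode='rb') as item:
        raw = item.read()
    return {'path': path, 'encoding': chardet.detect(raw)['encoding']}

flatfile = make_file_dict('amazon_flatfile.csv')        # example file names
stocklist = make_file_dict('fba_stock_report.csv')
attributefile = make_file_dict('attribute_export.csv')
upload_folder = '/home/user/plenty/Upload'              # example upload path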
File packages/amazon_data_upload.py changed (mode: 100644) (index 890988a..ec64839)
1 1 import csv import csv
2 2 from os.path import isfile from os.path import isfile
3 3 import sys import sys
4 from packages import variation_upload
4 from packages import barcode
5 5 try: try:
6 6 from sortedcontainers import SortedDict from sortedcontainers import SortedDict
7 7 except ImportError: except ImportError:
 
... ... except ImportError:
9 9 raise ImportError raise ImportError
10 10
11 11
12 def amazonSkuUpload(flatfile, export, folder):
12 def amazonSkuUpload(flatfile):
13 13
14 column_names = ['VariationID', 'MarketID',
15 'MarketAccountID', 'SKU', 'ParentSKU']
16 Data = SortedDict()
14 column_names = [ 'MarketID', 'MarketAccountID',
15 'SKU', 'ParentSKU' ]
17 16
18 with open(export['path'], mode='r', encoding=export['encoding']) as item:
19 reader = csv.DictReader(item, delimiter=';')
20 item_number = 1
21 for row in reader:
22 if(row['VariationID']):
23 values = [row['VariationID'], '104', '0', '', '']
24 Data[row['VariationNumber']] = SortedDict(
25 zip(column_names, values))
17 # Define constant values
18 marketid = 104 # Amazon FBA Germany
19 accountid = 0 # bkkasia.germany@gmail.com
20
21 Data = SortedDict()
26 22
27 23 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item: with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
28 24 reader = csv.DictReader(item, delimiter=';') reader = csv.DictReader(item, delimiter=';')
29 25 for row in reader: for row in reader:
30 if(row['item_sku'] in [*Data]):
31 Data[row['item_sku']]['SKU'] = row['item_sku']
32 Data[row['item_sku']]['ParentSKU'] = row['parent_sku']
26 values = [ marketid, accountid,
27 row['item_sku'], row['parent_sku'] ]
28 Data[row['item_sku']] = SortedDict(zip(column_names, values))
33 29
34 output_path = variation_upload.writeCSV(Data, 'VariationSKUListe', column_names, folder)
30 return Data
35 31
36 32
37 def amazonDataUpload(flatfile, export, folder):
33 def amazonDataUpload(flatfile):
38 34
39 column_names = [
40 'ItemAmazonProductType', 'ItemAmazonFBA',
41 'ItemID','ItemShippingWithAmazonFBA'
42 ]
35 column_names = [ 'ItemAmazonProductType', 'ItemAmazonFBA', 'ItemShippingWithAmazonFBA' ]
43 36
44 37 Data = SortedDict() Data = SortedDict()
45 38
 
... ... def amazonDataUpload(flatfile, export, folder):
65 58 if(row['feed_product_type'].lower() == key): if(row['feed_product_type'].lower() == key):
66 59 product_type = type_id[key] product_type = type_id[key]
67 60 if(not(product_type)): if(not(product_type)):
68 raise variation_upload.EmptyFieldWarning('product_type')
69 values = [product_type, '1',
70 '0','1']
61 raise barcode.EmptyFieldWarning('product_type')
62 values = [product_type, '1', '1']
71 63
72 64 Data[row['item_sku']] = SortedDict(zip(column_names, values)) Data[row['item_sku']] = SortedDict(zip(column_names, values))
73 65
 
... ... def amazonDataUpload(flatfile, export, folder):
79 71 except Exception as err: except Exception as err:
80 72 print(err) print(err)
81 73
74 return Data
82 75
83 with open(export['path'], mode='r', encoding=export['encoding']) as item:
84 reader = csv.DictReader(item, delimiter=";")
85
86 for row in reader:
87 if(row['VariationNumber'] in [*Data]):
88 Data[row['VariationNumber']]['ItemID'] = row['ItemID']
89
90 variation_upload.writeCSV(dataobject=Data, name='amazon_data', columns=column_names, upload_path=folder)
91
92
93 def asinUpload(export, stock, folder):
94
95 column_names = ['ASIN', 'MarketplaceCountry', 'Position', 'VariationID']
96
97 Data = {}
98
99 with open(export['path'], mode='r', encoding=export['encoding']) as item:
100 reader = csv.DictReader(item, delimiter=';')
101
102 for row in reader:
103 if row['VariationID']:
104 values = [ '', '1', '0', row['VariationID'] ]
105
106 Data[row['VariationNumber']] = dict(zip(column_names, values))
107
108 with open(stock['path'], mode='r', encoding=stock['encoding']) as item:
109 reader = csv.DictReader(item, delimiter=';')
110
111 for row in reader:
112 if row['MASTER'] in [*Data]:
113 Data[row['MASTER']]['ASIN'] = row['asin']
114 variation_upload.writeCSV(dataobject=Data, name='asin', columns=column_names, upload_path=folder)
115
116 def featureUpload(flatfile, feature, feature_id, folder):
76 def featureUpload(flatfile, features, folder):
117 77
118 78 column_names = [ column_names = [
119 79 'Variation.number', 'VariationEigenschaften.id', 'Variation.number', 'VariationEigenschaften.id',
 
... ... def featureUpload(flatfile, feature, feature_id, folder):
128 88
129 89 for row in reader: for row in reader:
130 90 if(row['parent_child'] == 'child'): if(row['parent_child'] == 'child'):
131 if(feature in [*row]):
132 if(row[feature]):
133 values = [
134 row['item_sku'], feature_id,
135 '1', '1',
136 row[feature]
137 ]
138
139 Data[row[ 'item_sku' ]] = dict(zip(column_names, values))
140 else:
141 print("The feature:\t{0}\twas not found, in the flatfile!\n".format(feature))
91 for feature in features:
92 if(row['item_sku'] == '101012000221'):
93 print("1. Feature: {0}, value: {1}".format(feature, row[feature]))
94 if(feature in [*row]):
95 if(row['item_sku'] == '101012000221'):
96 print("2. Feature: {0}, value: {1}".format(feature, row[feature]))
97 if(row[feature]):
98 if(row['item_sku'] == '101012000221'):
99 print("3. Feature: {0}, value: {1}".format(feature, row[feature]))
100 values = [
101 row['item_sku'], features[feature],
102 '1', '1',
103 row[feature]
104 ]
105
106 Data[row[ 'item_sku' ] + feature] = dict(zip(column_names, values))
107 else:
108 print("The feature:\t{0}\twas not found, in the flatfile!\n".format(feature))
142 109
143 110 if(Data): if(Data):
144 variation_upload.writeCSV(dataobject=Data, name=feature.upper(), columns=column_names, upload_path=folder)
111 barcode.writeCSV(dataobject=Data, name="features".upper(), columns=column_names, upload_path=folder)
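A hedged usage sketch for the reworked functions above: amazonSkuUpload and amazonDataUpload now read only the flatfile and return SKU-keyed dictionaries instead of writing files, and featureUpload takes a mapping of flatfile feature columns to Plentymarkets property IDs. The feature column name, property ID and helper variables are assumptions:

from packages import amazon_data_upload

sku_data = amazon_data_upload.amazonSkuUpload(flatfile)   # per SKU: MarketID 104, account 0, SKU, ParentSKU
ama_data = amazon_data_upload.amazonDataUpload(flatfile)  # per SKU: product type and FBA flags
# feature column in the flatfile -> Plentymarkets property ID (illustrative values)
features = {'outer_material_type': '12'}
amazon_data_upload.featureUpload(flatfile, features, upload_folder)  # writes FEATURES_upload_<n>.csv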
File packages/barcode.py added (mode: 100644) (index 0000000..a133e8e)
1 import csv
2 import re
3 from os.path import isfile
4 import sys
5 from tkinter.filedialog import askdirectory
6 import os
7 try:
8 from sortedcontainers import SortedDict
9 except ImportError:
10 print("the sortedcontainers module is required to run this program.")
11 raise ImportError
12
13 class EmptyFieldWarning(Exception):
14 def __init__(self, errorargs):
15 Exception.__init__(self, "Following field/s are empty {0}".format(errorargs))
16 self.errorargs = errorargs
17
18 def writeCSV(dataobject, name, columns, upload_path):
19 '''Write Data into new CSV for Upload
20 OUTPUT
21 '''
22 '''
23 uploadpath = os.getcwd() + '/Upload'
24 if not os.path.exists(uploadpath):
25 print("=#="*10 + '\n')
26 printf("Please choose a folder for the Upload files\n")
27 print("=#="*10 + '\n')
28 uploadpath = askdirectory(title="Choose a folder for the Upload files!")
29 '''
30
31 output_path_number = 1
32 datatype = ".csv"
33 output_name = "/" + name + "_upload_" + str(output_path_number) + datatype
34 output_path = upload_path + output_name
35
36 while(isfile(output_path)):
37 output_path_number = int(output_path_number) + 1
38 output_name = "/" + name + "_upload_" + str(output_path_number) + datatype
39 output_path = upload_path + output_name
40
41 with open(output_path, mode='a') as item:
42 writer = csv.DictWriter(item, delimiter=";", fieldnames=columns, lineterminator='\n')
43 writer.writeheader()
44 for row in dataobject:
45 writer.writerow(dataobject[row])
46
47 if(isfile(output_path)):
48 print("Upload file successfully created under {0}".format(output_path))
49
50 return output_path
51
52 def barcode_Upload(flatfile, stocklist):
53 # open the flatfile get the ean for an sku and save it into a dictionary with
54 # columnheaders of the plentymarket dataformat
55
56 column_names = [ 'EAN_Barcode', 'FNSKU_Barcode', 'SKU',
57 'ASIN-countrycode', 'ASIN-type', 'ASIN-value' ]
58
59 Data = {}
60 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
61 reader = csv.DictReader(item, delimiter=";")
62
63 for row in reader:
64 if(row['parent_child'] == 'child'):
65 # Set code to an empty String if the barcode type matches EAN set it to to
66 # the external_product_id
67 code = ''
68 code = row['external_product_id']
69
70 if(not(code)):
71 raise EmptyFieldWarning('barcode(EAN)')
72 values = [ code, '', row['item_sku'],
73 '1', 'ASIN', '']
74
75 Data[row['item_sku']] = dict(zip(column_names, values))
76
77
78 with open(stocklist['path'], mode='r', encoding=stocklist['encoding']) as item:
79 reader = csv.DictReader(item, delimiter=";")
80
81 for row in reader:
82 if(row['MASTER'] in [*Data]):
83 # Set code to an empty String if the barcode type matches FNSKU set it to to
84 # the external_product_id
85 code = ''
86 code = row['fnsku']
87
88 if(code):
89 Data[row['MASTER']]['FNSKU_Barcode'] = code
90 Data[row['MASTER']]['ASIN-value'] = row['asin']
91
92 return Data
93
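A sketch of the new barcode module in use; flatfile and stocklist are file dicts as prepared above, and the stock export is assumed to contain the MASTER, fnsku and asin columns referenced by the code:

from packages import barcode

barcodes = barcode.barcode_Upload(flatfile, stocklist)
# writeCSV numbers the output automatically: <upload_folder>/barcode_upload_1.csv, _2.csv, ...
barcode.writeCSV(dataobject=barcodes, name='barcode',
                 columns=['EAN_Barcode', 'FNSKU_Barcode', 'SKU',
                          'ASIN-countrycode', 'ASIN-type', 'ASIN-value'],
                 upload_path=upload_folder)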
File packages/color.py added (mode: 100644) (index 0000000..1d11780)
1 import csv
2 import re
3 from packages import barcode
4
5
6 def missingColor(flatfile, attributefile):
7 # check the attribute export from PlentyMarkets for similar color names
8 #_____________________________
9 color_list = set()
10 missing_colors = dict()
11 # Open the attribute file to get all color names
12 # Get the highest position number
13 highest_number = int(0)
14
15 with open(attributefile['path'], mode = 'r', encoding=attributefile['encoding']) as item:
16 reader = csv.DictReader(item, delimiter=';')
17
18 for row in reader:
19 try:
20 if(int(row['AttributeValue.position']) > highest_number):
21 highest_number = int(row['AttributeValue.position'])
22 except KeyError as err:
23 print("ERROR @ reading attribute file: {0}".format(err))
24 if('AttributeValue.backendName' in [*row]):
25 color_list.add(row['AttributeValue.backendName'])
26 else:
27 print("Wrong Columns in the attribute file!\n{0}\n".format(",".join([*row])))
28
29 missing_colors_columns = ['color_name', 'similar_names', 'highest-position']
30
31 # Open the flatfile to check which names are not in the color_list
32 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
33 reader = csv.DictReader(item, delimiter = ';')
34
35 color_set = set()
36 for num, row in enumerate( reader ):
37 color_name = row['color_name']
38 if(not( color_name in color_list ) and not(color_name in color_set)):
39 color_set.add(color_name)
40 values = [color_name, [], highest_number]
41 missing_colors[str(num)] = dict(zip(missing_colors_columns, values))
42
43 # Open the attribute file to check for similar names to the missing ones
44 with open(attributefile['path'], mode = 'r', encoding=attributefile['encoding']) as item:
45 reader = csv.DictReader(item, delimiter = ';')
46
47 for row in reader:
48 for color in missing_colors:
49 try:
50 if(re.search( missing_colors[color]['color_name'][1:], row['AttributeValue.backendName'] )):
51 missing_colors[color]['similar_names'].append(row['AttributeValue.backendName'])
52 except KeyError as err:
53 print("ERROR @ similar name search: {0}".format(err))
54
55 '''
56 for row in missing_colors:
57 print("COLOR: {0}\n".format( missing_colors[row]['color_name'] ))
58 for name in missing_colors[row]['similar_names']:
59 print("similar to {0}: {1}".format(missing_colors[row]['color_name'], name))
60 '''
61 return missing_colors
62
63 def create_attributesync(color_dict, path):
64 column_names = ['Attribute.id','AttributeValue.backendName','AttributeValue.position','AttributeValueName.name']
65
66 Data = {}
67 values = []
68
69 for row in color_dict:
70 try:
71 values = ['4', color_dict[row]['color_name'], color_dict[row]['highest-position'], color_dict[row]['color_name']]
72 except KeyError as err:
73 print(color_dict[row])
74 print("ERROR @ attribute sync creation: {0}".format(err))
75
76 Data[color_dict[row]['color_name']] = dict(zip(column_names, values))
77
78 barcode.writeCSV(dataobject=Data, name="Color_Sync", columns=column_names, upload_path=path)
79
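A sketch of the intended flow for the new color helpers: find flatfile colors that are missing from the Plentymarkets attribute export, inspect the suggested similar names, then optionally write an attribute sync file (file dicts and folder as above):

from packages import color

missing = color.missingColor(flatfile, attributefile)
for key in missing:
    print(missing[key]['color_name'], missing[key]['similar_names'])
color.create_attributesync(missing, upload_folder)  # writes Color_Sync_upload_<n>.csv with Attribute.id 4 rows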
File packages/config.py changed (mode: 100644) (index 6a7e2f1..9b97c8b)
... ... def config_creation():
18 18 with open(configpath, mode='w') as item: with open(configpath, mode='w') as item:
19 19 item.write('upload_folder=\n') item.write('upload_folder=\n')
20 20 item.write('data_folder=\n') item.write('data_folder=\n')
21 item.write('attribute_file=\n')
22 item.write('file_change_date=\n')
21 23 if(os.path.isfile(configpath)): if(os.path.isfile(configpath)):
22 24 return configpath return configpath
23 25 else: else:
24 26 return None return None
25 27
26 28 def config_read(configpath): def config_read(configpath):
27 folder = { 'upload_folder':'', 'data_folder':''}
29 config = {'upload_folder':'', 'data_folder':'',
30 'attribute_file':'', 'file_change_date':''}
28 31 if(not(configpath)): if(not(configpath)):
29 32 configpath = config_creation() configpath = config_creation()
30 33 with open(configpath, mode='r') as item: with open(configpath, mode='r') as item:
 
... ... def config_read(configpath):
34 37 option = "".join(row[0]).split('=') option = "".join(row[0]).split('=')
35 38
36 39 if(option[0].strip(' ') == 'upload_folder'): if(option[0].strip(' ') == 'upload_folder'):
37 folder[ 'upload_folder' ] = option[1].strip(' ')
40 config[ 'upload_folder' ] = option[1].strip(' ')
38 41 if(option[0].strip(' ') == 'data_folder'): if(option[0].strip(' ') == 'data_folder'):
39 folder[ 'data_folder' ] = option[1].strip(' ')
42 config[ 'data_folder' ] = option[1].strip(' ')
43 if(option[0].strip(' ') == 'attribute_file'):
44 config[ 'attribute_file' ] = option[1].strip(' ')
45 if(option[0].strip(' ') == 'file_change_date'):
46 config[ 'file_change_date' ] = option[1].strip(' ')
40 47
41 return folder
48 return config
42 49
43 50 def config_write(configpath, data): def config_write(configpath, data):
44 51 # read the current content of the config # read the current content of the config
45 52 with open(configpath, mode='w') as item: with open(configpath, mode='w') as item:
46 writer = csv.DictWriter(item, delimiter='=', lineterminator='\n', fieldnames=['title', 'path'])
53 writer = csv.DictWriter(item, delimiter='=', lineterminator='\n', fieldnames=['title', 'value'])
47 54
48 55 for row in data: for row in data:
49 56 writer.writerow(data[row]) writer.writerow(data[row])
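With the two new keys, a filled-in config file as read by config_read would look roughly like this; the paths are placeholders and the date format matches the %d.%m.%Y-%H:%M string written by the category chooser:

upload_folder=/home/user/plenty/Upload
data_folder=/home/user/plenty/Data
attribute_file=/home/user/plenty/attribute_export.csv
file_change_date=05.06.2019-14:45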
File packages/gui/__pycache__/category_chooser.cpython-37.pyc changed (mode: 100644) (index 994ccce..58422ac)
File packages/gui/category_chooser.py changed (mode: 100644) (index 0f4130c..5827d16)
1 1 import tkinter import tkinter
2 2 import tkinter.filedialog import tkinter.filedialog
3 import datetime
4 import re
5 import sys
3 6 from tkinter import messagebox as tmb from tkinter import messagebox as tmb
7 from packages import color as clr
8 from packages import item_upload
9
10 class LabelBox(tkinter.Frame):
11 def __init__(self, master, similar_names):
12 tkinter.Frame.__init__(self, master)
13 self.master = master
14 self.similar_names = similar_names
15 self.element = []
16 self.initialize()
17
18 def initialize(self):
19 self.grid()
20
21 for num, name in enumerate(self.similar_names):
22 rownum = num
23 columnnum = 1
24 if(num>15):
25 rownum = num - 16
26 columnnum = 2
27 if(num>30):
28 rownum = num - 31
29 columnnum = 3
30 if(num>45):
31 rownum = num - 46
32 columnnum = 4
33
34 self.element.append( tkinter.Text(self, height=1, width=25) )
35 self.element[num].insert(1.0, name)
36 self.element[num].configure(bg=self.master.master.cget('bg'), relief='flat')
37 self.element[num].configure(state='disabled')
38 self.element[num].grid(row=rownum, column=columnnum, sticky='W')
39
40 class DropdownBox(tkinter.Frame):
41 def __init__(self, master):
42 tkinter.Frame.__init__(self, master)
43 self.master = master
44 self.variable = self.master.optionvar
45 self.optionslist = self.master.options
46 self.initialize()
47
48 def initialize(self):
49 self.grid()
50
51 self.dropdown = tkinter.OptionMenu(self, self.variable, *self.optionslist)
52 self.dropdown.grid(row=1, column=1, sticky='EW')
53
54
55 class InfoBox(tkinter.Frame):
56 def __init__(self, master, options, colordict):
57 tkinter.Frame.__init__(self, master)
58 self.master = master
59 self.colordict = colordict
60 self.options = options
61 self.optionvar = tkinter.StringVar()
62 self.similarcolors = []
63 self.initialize()
64
65 def initialize(self):
66 self.grid()
67
68 self.optionvar.set("missing Colors")
69
70 # Dropdown Box with all the missing colors
71 self.dropdownbox = DropdownBox(self);
72 self.dropdownbox.grid(row=1, column=1, columnspan=2, sticky="EW")
73
74 self.optionvar.trace('w', self.change_dropdown)
75
76 self.labelbox = tkinter.Label(self, text="similar colors")
77 self.labelbox.grid(row=2, column=1, columnspan=3, sticky="NESW")
78
79 def change_dropdown(self, *args):
80 if(self.labelbox.winfo_exists() == 1):
81 self.labelbox.destroy()
82 for color in self.colordict:
83 if(self.optionvar.get() == self.colordict[color]['color_name']):
84 self.similarcolors = self.colordict[color]['similar_names']
85 # Label grid with all the similar colors in Plentymarkets
86 self.labelbox = LabelBox(self, self.similarcolors)
87 self.labelbox.grid(row=2, column=1, columnspan=3, sticky="NESW")
88
89
90 class ButtonBox(tkinter.Frame):
91 def __init__(self, master):
92 tkinter.Frame.__init__(self, master)
93 self.master = master
94 self.initialize()
95
96 def initialize(self):
97 self.grid()
98
99 self.continue_button = tkinter.Button(self, text="Continue", command=lambda: self.master.destroy_warningbox())
100 self.continue_button.grid(row=1, column=1, sticky='EW', padx=20)
101
102 self.create_button = tkinter.Button(self, text="Create a Syncfile", command=lambda: self.master.create_syncfile())
103 self.create_button.grid(row=1, column=2, sticky='EW', padx=20)
104
105 self.stop_button = tkinter.Button(self, text='Stop', command=lambda: self.master.destroy_root())
106 self.stop_button.grid(row=1, column=3, sticky='EW', padx=20)
107
108 class WarningBox(tkinter.Frame):
109 def __init__(self, master, colorlist, *args, **kwargs):
110 tkinter.Frame.__init__(self, master, *args, **kwargs)
111 self.master = master
112 self.colorlist = colorlist
113 self.colorlist_length = 0
114 self.colorlist_string = str()
115 self.colordict = self.master.master.missingcolors
116 self.args = args
117 self.kwargs = kwargs
118 self.initialize()
119 print('initialized!')
120
121 def initialize(self):
122 self.grid()
123
124 if(re.search(r'list', str(type(self.colorlist)))):
125 self.colorlist_length = len(self.colorlist)
126 self.colorlist_string = self.create_colorstring(self.colorlist)
127 else:
128 print("ERROR @ WarningBox: colorlist has the wrong type (needs to be a list)\n{0}".format(type(colorlist)))
129
130 self.boxdesc = tkinter.Label(self, text="The flatfile contains colors that are not found in Plentymarkets.")
131 self.boxdesc.grid(row=1,column=1,columnspan=4, sticky='NESW', pady=20, padx=20)
132
133 self.warningmessage = tkinter.Label(self, text="Number of missing colors: " + str( self.colorlist_length ) + "\nColors: " + self.colorlist_string)
134 self.warningmessage.grid(row=2,column=1,columnspan=4, sticky='NESW')
135
136 self.buttonbox = ButtonBox(master=self)
137 self.buttonbox.grid(row=3,column=1,columnspan=3, sticky='NESW', pady=20, padx=20)
138
139 self.infobox = InfoBox(master=self, options=self.colorlist, colordict=self.colordict)
140 self.infobox.grid(row=4, column=1, columnspan=3, sticky='NESW', pady=20, padx=20)
141
142 def create_syncfile(self):
143 clr.create_attributesync(self.colordict, self.master.master.upath)
144 self.master.withdraw()
145
146 def destroy_warningbox(self):
147 self.master.withdraw()
148
149 def destroy_root(self):
150 self.master.master.destroy()
151
152 def create_colorstring(self, colorlist):
153 colorstring = ''
154
155 if(len(colorlist)>3):
156 colorstring = ' , '.join(colorlist[0:3]) + '...'
157 else:
158 colorstring = ' , '.join(colorlist)
159
160 return colorstring
4 161
5 162 class DropdownChooser(tkinter.Frame): class DropdownChooser(tkinter.Frame):
6 163 def __init__(self, master, *args, **kwargs): def __init__(self, master, *args, **kwargs):
 
... ... class DropdownChooser(tkinter.Frame):
14 171 self.grid() self.grid()
15 172
16 173 self.optionvar = tkinter.StringVar(self) self.optionvar = tkinter.StringVar(self)
174 self.resultvar = tkinter.StringVar(self)
17 175
18 176 self.options = {'Men.Aladinhose':'34', self.options = {'Men.Aladinhose':'34',
19 177 'Men.Sommerhose':'36', 'Men.Sommerhose':'36',
 
... ... class DropdownChooser(tkinter.Frame):
55 213 'Bags.Women.Schultertasche' :'60', 'Bags.Women.Schultertasche' :'60',
56 214 'Bags.Women.Rucksack':'61'} 'Bags.Women.Rucksack':'61'}
57 215
58 self.optionvar.set('pants')
216 self.optionvar.set('category')
59 217
60 218 self.dropdown_menu = tkinter.OptionMenu(self, self.optionvar, *[ *self.options ]) self.dropdown_menu = tkinter.OptionMenu(self, self.optionvar, *[ *self.options ])
61 219 self.dropdown_menu.grid(row=0, column=0, sticky="EW") self.dropdown_menu.grid(row=0, column=0, sticky="EW")
62 220
63 221 self.optionvar.trace('w', self.change_dropdown) self.optionvar.trace('w', self.change_dropdown)
222 self.resultvar.trace('w', self.add_desc)
64 223
65 224 # Create a textbox to show the result of the choosing # Create a textbox to show the result of the choosing
66 self.resultbox = tkinter.Entry(self, width=50, bg="white")
225 self.resultbox = tkinter.Entry(self, textvariable=self.resultvar, width=50, bg="white")
67 226 self.resultbox.grid(row=1, column=0, columnspan=2, sticky="EW") self.resultbox.grid(row=1, column=0, columnspan=2, sticky="EW")
68 227
228 # Create a label with an info about the standard category which hides if the entry is empty
229 self.category_info = tkinter.Label(self, text="The first category in the list will be used as standard category!")
230
69 231 def change_dropdown(self, *args): def change_dropdown(self, *args):
70 232 if( not( self.resultbox.get() ) ): if( not( self.resultbox.get() ) ):
71 233 self. resultbox.insert(tkinter.INSERT, self.options[ self.optionvar.get() ] ) self. resultbox.insert(tkinter.INSERT, self.options[ self.optionvar.get() ] )
72 234 else: else:
73 235 self.resultbox.insert(tkinter.INSERT, ', ' + self.options[ self.optionvar.get() ] ) self.resultbox.insert(tkinter.INSERT, ', ' + self.options[ self.optionvar.get() ] )
74 236
237 def add_desc(self, *args):
238 if(len(self.resultvar.get())>1):
239 self.category_info.grid(row=2, columnspan=2, sticky="EW")
240 else:
241 self.category_info.grid_remove()
242
243
75 244
76 245 class DescBox(tkinter.Frame): class DescBox(tkinter.Frame):
77 246 def __init__(self, master, desctext): def __init__(self, master, desctext):
 
... ... class DescBox(tkinter.Frame):
88 257
89 258
90 259 class CategoryChooser(tkinter.Tk): class CategoryChooser(tkinter.Tk):
91 def __init__(self, master, upath=''):
260 def __init__(self, master, upath='', flatfile='', atrpath='', atrdate=''):
92 261 tkinter.Tk.__init__(self, master) tkinter.Tk.__init__(self, master)
93 262 self.master = master self.master = master
94 263 self.upath = upath self.upath = upath
95 self.newpath = ''
264 self.flatfile = flatfile
265 self.atrpath = atrpath
266 self.atrdate = atrdate
267 self.newpath = {'upload-path':'', 'attribute-path':''}
96 268 self.data = {'name':'', 'categories':''} self.data = {'name':'', 'categories':''}
97 self.initialize()
98 269 self.protocol("WM_WINDOW_DELETE", self.close_app) self.protocol("WM_WINDOW_DELETE", self.close_app)
270 self.missingcolors = {}
271 # Window position properties
272 self.window_w = self.winfo_reqwidth()
273 self.window_h = self.winfo_reqheight()
274 self.screen_w = self.winfo_screenwidth()
275 self.screen_h = self.winfo_screenheight()
276 # For dual monitor
277 if(self.screen_w/3 > self.screen_h):
278 self.positionright = int(self.screen_w/4 - self.window_w)
279 self.positiondown = int(self.screen_h/3 - self.window_h)
280 # For single screen
281 else:
282 self.positionright = int(self.winfo_screenwidth()/2 - self.window_w)
283 self.positiondown = int(self.winfo_screenheight()/3 - self.window_h)
284 # geometry of the window and build the elements of the gui
285 self.geometry("+{}+{}".format(self.positionright, self.positiondown))
286 self.initialize()
99 287
100 288 def initialize(self): def initialize(self):
101 289 self.grid() self.grid()
102 290
291 if(self.atrpath['path']):
292 self.check_colors(self.flatfile, self.atrpath)
293
103 294 self.pathdesc = DescBox(master=self, desctext="The current Upload path is: \n" + self.upath) self.pathdesc = DescBox(master=self, desctext="The current Upload path is: \n" + self.upath)
104 self.pathdesc.grid(row=0, columnspan=2, pady=10, padx=10)
295 self.pathdesc.grid(row=0, columnspan=3, pady=10, padx=10)
296
297 self.attributedesc = DescBox(master=self, desctext="The current attribute file:\n" + self.atrpath['path'] + '\nUpload-date:\t' + self.atrdate)
298 self.attributedesc.grid(row=1, columnspan=3, pady=10, padx=10)
105 299
106 self.changepathbutton = tkinter.Button(self, text="Choose a new path", command=lambda: self.get_new_path())
107 self.changepathbutton.grid(row=1, column=1, pady=10, padx=10)
300 self.changepathbutton = tkinter.Button(self, text="Choose a new path", command=lambda: self.get_new_path("Choose a new upload folder", 'upload'))
301 self.changepathbutton.grid(row=2, column=1, pady=10, padx=10)
302
303 self.changeatrbutton = tkinter.Button(self, text="Choose the attribute file", command=lambda: self.get_new_path("Choose a new attribute file", 'atr'))
304 self.changeatrbutton.grid(row=2, column=2, pady=10, padx=10)
108 305
109 306 self.dropdesc = DescBox(master=self, desctext="Choose the categories for the product.") self.dropdesc = DescBox(master=self, desctext="Choose the categories for the product.")
110 self.dropdesc.grid(row=2, columnspan=2, pady=10, padx=10)
307 self.dropdesc.grid(row=3, columnspan=3, pady=10, padx=10)
111 308
112 309 self.dropdown = DropdownChooser(master=self) self.dropdown = DropdownChooser(master=self)
113 self.dropdown.grid(row=3, column=1)
310 self.dropdown.grid(row=4, column=1, columnspan=2)
114 311
115 312 self.namedesc = DescBox(master=self, desctext="Choose a name for the product") self.namedesc = DescBox(master=self, desctext="Choose a name for the product")
116 self.namedesc.grid(row=4, columnspan=2, pady=10, padx=10)
313 self.namedesc.grid(row=5, columnspan=3, pady=10, padx=10)
117 314
118 315 self.namechooser = tkinter.Entry(self, width=50, bg="white") self.namechooser = tkinter.Entry(self, width=50, bg="white")
119 self.namechooser.grid(row=5, columnspan=2, pady=10, padx=10)
316 self.namechooser.grid(row=6, columnspan=3, pady=10, padx=10)
120 317
121 318 self.accept = tkinter.Button(self, text="Accept", self.accept = tkinter.Button(self, text="Accept",
122 319 command=lambda: self.get_input(self.dropdown.resultbox.get(), self.namechooser.get())) command=lambda: self.get_input(self.dropdown.resultbox.get(), self.namechooser.get()))
123 self.accept.grid(row=6, column=2, pady=10, padx=10)
320 self.accept.grid(row=7, column=3, pady=10, padx=10)
124 321
125 322 def get_input(self, categories, name): def get_input(self, categories, name):
126 323 self.data['name'] = name self.data['name'] = name
 
... ... class CategoryChooser(tkinter.Tk):
128 325 # Close the gui after accepting the input to stop the mainloop # Close the gui after accepting the input to stop the mainloop
129 326 self.close_app() self.close_app()
130 327
131 def get_new_path(self):
132 print("Get the new path of the upload folder.")
133 self.newpath = tkinter.filedialog.askdirectory()
328 def get_new_path(self, title, option):
329 if(option == 'upload'):
330 print("Get the new path of the upload folder.")
331 self.newpath['upload-path'] = tkinter.filedialog.askdirectory(title=title)
332 elif(option == 'atr'):
333 print("Get a new attribute file.")
334 self.newpath['attribute-path'] = tkinter.filedialog.askopenfilename(title=title)
335 self.atrdate = datetime.datetime.now().strftime("%d.%m.%Y-%H:%M")
336 self.check_colors(self.flatfile, self.newpath['attribute-path'])
337
338 def check_colors(self, flatfile, attributefile):
339 attributefile = item_upload.check_encoding(attributefile)
340 self.missingcolors = clr.missingColor(flatfile, attributefile)
341
342 colorlist = []
343
344 try:
345 for color in self.missingcolors:
346 colorlist.append(self.missingcolors[color]['color_name'])
347 if(len(colorlist) > 0):
348 self.toplevelWarning = tkinter.Toplevel(self)
349 self.toplevelWarning.title("Warning missing Colors")
350 self.toplevelWarning.lift(self)
351 #self.toplevel_positiondown = int( self.winfo_screenheight()/1.3 - self.window_h)
352 self.toplevelWarning.geometry("+{}+{}".format(self.positionright, self.positiondown))
353 self.warningbox = WarningBox(self.toplevelWarning, colorlist)
354 except Exception as err:
355 print("Error @ checkcolor: {0} - line no.: {1}".format(err, sys.exc_info()[2].tb_lineno))
134 356
135 357 def close_app(self): def close_app(self):
136 358 if(self.data['name'] and self.data['categories']): if(self.data['name'] and self.data['categories']):
137 359 self.withdraw() self.withdraw()
138 360 else: else:
139 tmb.showerror("NO INPUT", "Please enter a name and atleast one category!")
361 tmb.showerror("NO INPUT", "Please enter a name and at least one category!")
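The category chooser now also receives the flatfile, the attribute file and its upload date; a hedged sketch of how a caller such as product_import.py might drive it (all argument values are assumptions):

from packages.gui import category_chooser

gui = category_chooser.CategoryChooser(None, upath=upload_folder,
                                        flatfile=flatfile,
                                        atrpath=attributefile,  # file dict; may open the missing-color warning
                                        atrdate=config['file_change_date'])  # config from packages.config.config_read
gui.mainloop()
input_data = gui.data     # {'name': ..., 'categories': '34, 53'}  (illustrative category IDs)
new_paths = gui.newpath   # {'upload-path': ..., 'attribute-path': ...}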
File packages/image_upload.py changed (mode: 100644) (index cc7007f..1f99ff8)
1 1 import csv import csv
2 2 from sortedcontainers import SortedDict from sortedcontainers import SortedDict
3 3 import re import re
4 from os.path import isfile
5 from packages import variation_upload
4 import sys
6 5
6 def searchSpecialImage(image):
7 if(re.search(r'( SWATCH|SIZE )', image)):
8 return True
9 else:
10 return False
7 11
8 def searchImage(imglink, itemid, variationid, variationlinks, target):
12 def getColorAttributeID(attributefile, product):
9 13
10 existing = False
11 blockEntry = False
12
13 for row in target:
14 if(target[row]['ItemImageUrl'] == imglink and
15 target[row]['ItemImageItemID'] == itemid and
16 (re.search(variationid, variationlinks))):
17
18 existing = True
19 blockEntry = True
20 pass
21
22 if(target[row]['ItemImageUrl'] == imglink and
23 target[row]['ItemImageItemID'] == itemid and
24 not(re.search(variationid, variationlinks))):
25
26 existing = True
27 if(target[row]['VariationLink'] and
28 not(re.search(variationid, target[row]['VariationLink']))):
29 target[row]['VariationLink'] += ', ' + variationid
30
31 if(not(existing)):
32
33 variationlinks = str(variationid)
34
35 return variationlinks, blockEntry
36
37
38 def imageUpload(flatfile, exportfile, folder):
39 # open the export file, scrap the important data and save it into an dictionary
40 with open(exportfile['path'], mode='r', encoding=exportfile['encoding']) as item:
41 Data = {}
14 attributeid = ''
15 with open(attributefile['path'], mode = 'r', encoding = attributefile['encoding']) as item:
42 16 reader = csv.DictReader(item, delimiter=';') reader = csv.DictReader(item, delimiter=';')
43 names = ['variation_id', 'item_id']
44 for row in reader:
45 identification = row['VariationID'], row['ItemID']
46 Data[row['VariationNumber']] = dict(zip(names, identification))
47
48 # ------------------------------------------------------------------------
49 # if you have a file ending that you need to remove from your Link add it
50 # to the follwing List
51 # EXAMPLE: ['?dl=0', '?dl=1']
52 # ------------------------------------------------------------------------
53 endings = ['?dl=0']
54 replacement = ['']
55
56 searchterm = ['www.dropbox']
57 replaceterm = ['dl.dropbox']
58
59 # open the flat file, take the 3rd line as the header and scrap the image
60 # links + the parent sku
61 # combine it with the data out of the export file and save all into a
62 # multidimensional dictionary
63 count = 0
64 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
65 links = SortedDict()
66 # for i in range(2):
67 # next(item)
68 reader = csv.DictReader(item, delimiter=';')
69 names = ['Link1', 'Link2', 'Link3', 'Link4',
70 'Link5', 'Link6', 'Link7', 'Link8', 'Link9']
71 for row in reader:
72 imglinks = [
73 row['main_image_url'],
74 row['other_image_url1'],
75 row['other_image_url2'],
76 row['other_image_url3'],
77 row['other_image_url4'],
78 row['other_image_url5'],
79 row['other_image_url6'],
80 row['other_image_url7'],
81 row['other_image_url8']
82 ]
83
84 # check every img link for the ending if it ends with an invalid ?dl=0
85 # replace it with ?raw=1
86 for img in imglinks:
87 if(img):
88 for num in range(len(searchterm)):
89 if(re.search(searchterm[num], img)):
90 imglinks[imglinks.index(img)] = (img.replace(
91 searchterm[num], replaceterm[num]) and img.replace(endings[num], replacement[num]))
92 count += 1
93
94 if(row['item_sku'] in [*Data]):
95 links[row['item_sku']] = SortedDict(zip(names, imglinks))
96 links[row['item_sku']]['itemid'] = Data[row['item_sku']]['item_id']
97 links[row['item_sku']
98 ]['variationid'] = Data[row['item_sku']]['variation_id']
99 if(row['parent_sku']):
100 links[row['item_sku']]['parentsku'] = row['parent_sku']
101 else:
102 links[row['item_sku']]['parentsku'] = row['item_sku']
103
104 # Print the amount of changes made to links in order to make them work
105 print("Amount of fixed invalid link endings: {0} (ending with {1}, replaced with {2})".format(
106 count, ",".join(searchterm), ",".join(replaceterm)))
107
108 # creating a dictionary that contains only one entry per imagelink and
109 # combining the variation id into the duplicates
110 Data = SortedDict()
111 number = 1
112 names = ['ItemImageItemID', 'PrimaryVariationCustomNumber',
113 'VariationLink', 'ItemImageUrl']
114 blockEntry = False
115 for row in links:
116 variationlinks = ''
117 for pic in re.findall(r'Link\d{1}', ",".join([*links[row]])):
118 if(links[row][pic]):
119 variationlinks, blockEntry = searchImage(
120 links[row][pic],
121 links[row]['itemid'],
122 links[row]['variationid'],
123 variationlinks,
124 Data)
125 if(not(blockEntry) and variationlinks):
126 values = [links[row]['itemid'], links[row]
127 ['parentsku'], variationlinks, links[row][pic]]
128 # If the digit is a single digit add a 0 in front of it
129 if(re.match(r'\d(?!\S)', str(number))):
130 number = '0' + str(number)
131 Data['IMG' + str(number)
132 ] = SortedDict(zip(names, values))
133 number = int(number) + 1
134
135 variation_upload.writeCSV(Data, "image", names, folder )
17
18 try:
19 for row in reader:
20 if(row['AttributeValue.backendName'] == product['color_name']):
21 attributeid = row['AttributeValue.id']
22 if(not( attributeid )):
23 print("For SKU : {0}, Color : {1} not found!\n".format(product['item_sku'], product['color_name']))
24 except Exception as err:
25 print("ERROR @ Color ID search: line: {0}, err: {1}".format(sys.exc_info()[2].tb_lineno, err))
26
27 return attributeid
28
29
30 def imageUpload(flatfile, attributefile):
31
32 try:
33 Data = SortedDict()
34
35 column_names = ['Multi-URL', 'connect-variation', 'mandant',
36 'availability', 'listing', 'connect-color']
37 linkstring = ''
38 attributeID = ''
39
40 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
41 reader = csv.DictReader(item, delimiter=';')
42 for index, row in enumerate( reader ):
43 imglinks = [
44 row['main_image_url'],
45 row['other_image_url1'],
46 row['other_image_url2'],
47 row['other_image_url3'],
48 row['other_image_url4'],
49 row['other_image_url5'],
50 row['other_image_url6'],
51 row['other_image_url7'],
52 row['other_image_url8']
53 ]
54
55 num = 1
56 try:
57 if(imglinks[0]):
58 for img in imglinks:
59 if(not(searchSpecialImage(img))):
60 if(not(linkstring)):
61 linkstring += img + ';' + str( num )
62 num += 1
63 else:
64 linkstring += ',' + img + ';' + str( num )
65 num += 1
66 if(searchSpecialImage(img)):
67 print("\n{0} is a special image\n".format(img))
68 if(not(linkstring)):
69 linkstring += img + ';' + str( num )
70 num += 1
71 else:
72 linkstring += ',' + img + ';' + str( num )
73 num += 1
74
75
76 except Exception as err:
77 print("Error @ linkstring building line no: {0} : {1}".format(sys.exc_info()[2].tb_lineno, err))
78
79 try:
80 attributeID = getColorAttributeID(attributefile=attributefile, product=row)
81 except Exception as err:
82 print("Error @ get Color Attribute ID {0}\n".format(err))
83
84
85 values=[linkstring, 1, -1,
86 -1, -1, attributeID]
87
88 Data[row['item_sku']] = dict(zip(column_names, values))
89
90 except Exception as err:
91 print("Error @ imageupload line: {0} : {1}".format(sys.exc_info()[2].tb_lineno, err))
92
93 return Data
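imageUpload no longer needs the Plentymarkets export: it returns one row per SKU with all image links packed into a single Multi-URL string and the color attribute ID looked up in the attribute file. A sketch of the call and of the resulting link string (URLs are placeholders):

from packages import image_upload

images = image_upload.imageUpload(flatfile, attributefile)
# images['<item_sku>']['Multi-URL'] has the form
#   "https://example.com/front.jpg;1,https://example.com/back.jpg;2"
# i.e. each link followed by ';<position>' and joined with commas.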
File packages/item_upload.py changed (mode: 100644) (index 40b6205..d148a5d)
1 1 import csv import csv
2 import sys
2 3 import re import re
3 4 import chardet import chardet
4 5 from os.path import isfile from os.path import isfile
5 6 from sys import exit from sys import exit
6 from packages import variation_upload
7 from packages import barcode, amazon_data_upload, price_upload, image_upload
7 8
8 9
9 10 try: try:
 
... ... except ImportError:
15 16 class WrongEncodingException(Exception): class WrongEncodingException(Exception):
16 17 pass pass
17 18
19 def check_flatfile(flatfile):
20 try:
21 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
22 reader = csv.DictReader(item, delimiter=';')
23
24 first_row = [* list(reader)[0] ]
25 if(not( 'feed_product_type' in first_row )):
26 if( 'Marke' in first_row ):
27 print("Please cut the first two rows from the flatfile for this script\n")
28 return False
29 else:
30 print("This file contains the wrong column header\n{0}\n".format(','.join(first_row)))
31 return False
32 else:
33 return True
34 except Exception as err:
35 print("ERROR @ flatfile checking : {0}".format(err))
36
18 37 def check_encoding(file_dict): def check_encoding(file_dict):
19 38 try: try:
20 39 with open(file_dict['path'], mode='rb') as item: with open(file_dict['path'], mode='rb') as item:
 
... ... def check_encoding(file_dict):
28 47
29 48 return file_dict return file_dict
30 49
31 def itemUpload(flatfile, intern, folder):
50 def itemUpload(flatfile, intern, stocklist, attributefile, folder, input_data):
32 51 # The column headers for the output file as expected from the # The column headers for the output file as expected from the
33 52 # plentymarkets dataformat # plentymarkets dataformat
34 column_names = ['ItemID', 'PrimaryVariationCustomNumber',
35 'PrimaryVariationLengthMM', 'PrimaryVariationWidthMM',
36 'PrimaryVariationHeightMM', 'PrimaryVariationWeightG',
37 'PrimaryVariationName', 'PrimaryVariationMainWarehouse',
38 'PrimaryVariationPurchasePrice', 'ItemOriginCountry',
39 'ItemProducer', 'ItemProducerID', 'ItemProductType',
40 'ItemTextName', 'ItemTextDescription', 'ItemTextKeywords',
41 'ItemTextLang', 'PrimaryVariationExternalID',
42 'PrimaryVariationActive',
43 'PrimaryVariationAutoStockInvisible',
44 'PrimaryVariationAutoStockNoPositiveStockIcon',
45 'PrimaryVariationAutoStockPositiveStockIcon',
46 'PrimaryVariationAutoStockVisible',
47 'PrimaryVariationAvailability',
48 'ItemMarking1', 'ItemMarking2']
53 column_names = ['ItemID', 'Parent-SKU', 'SKU',
54 'Length', 'Width',
55 'Height', 'Weight',
56 'Name', 'MainWarehouse',
57 'Attributes',
58 'ItemOriginCountry', 'ItemTextKeywords',
59 'ItemProducer', 'ItemProducerID',
60 'ItemTextName', 'ItemTextDescription',
61 'ExternalID', 'VariationActive',
62 'VariationAvailability', 'Category-IDs',
63 'Standard-Category', 'Standard-Category-Webshop',
64 'Mandant-Active', 'Webshop-Active',
65 'EAN_Barcode', 'FNSKU_Barcode',
66 'market-active-shop', 'market-active-ebay',
67 'market-active-ebayger', 'market-active-amafba',
68 'market-active-amafbager', 'market-active-webapi',
69 'marketid', 'accountid',
70 'amazon_sku', 'amazon_parentsku',
71 'amazon-producttype', 'fba-enabled', 'fba-shipping',
72 'price-price', 'ebay-price', 'amazon-price', 'webshop-price', 'etsy-price',
73 'ASIN-countrycode', 'ASIN-type', 'ASIN-value',
74 'Multi-URL', 'connect-variation', 'mandant', 'availability', 'listing',
75 'connect-color']
49 76
50 77
51 78 # Unpack File and scrap data # Unpack File and scrap data
52 79 # INPUT # INPUT
53 80 # -------------------------------------------------------------- # --------------------------------------------------------------
54 81 Data = SortedDict() Data = SortedDict()
82 package_properties = {}
83 barcode_data = {}
55 84
85 # Get sets of all colors and sizes for each parent
86 # to find if there are some with only one attribute value for all childs
87 color_size_sets = {}
88 color_size_sets = find_similar_attr(flatfile)
89
90 # PACKAGE PROPERTIES
91 package_properties = get_properties(flatfile)
92
93 # FILL DICTIONARY
56 94 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item: with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
57 95 reader = csv.DictReader(item, delimiter=";") reader = csv.DictReader(item, delimiter=";")
96
58 97 for row in reader: for row in reader:
59 98 # transform the text format to integer in order to adjust the # transform the text format to integer in order to adjust the
60 99 # height, width, length numbers from centimeter to milimeter # height, width, length numbers from centimeter to milimeter
61 100 try: try:
62 if(row['parent_child'] == 'parent'):
63 try:
64 if(row['package_height'] and
65 row['package_length'] and
66 row['package_width']):
67
68 row['package_height'] = int(row['package_height'])
69 row['package_length'] = int(row['package_length'])
70 row['package_width'] = int(row['package_width'])
71
72 # if the number is a floating point number it has to be
73 # transformed into a float first befor the integer conversion
74 except ValueError as err:
75 row['package_height'] = int(float(row['package_height']))
76 row['package_length'] = int(float(row['package_length']))
77 row['package_width'] = int(float(row['package_width']))
78
79 except ValueError as err:
80 print(err)
81 print("/nPlease copy the values for height, length, width",
82 "and weight\nfrom the children to the parent",
83 "variation in the flatfile.\n")
84 exit()
85
86 # get the keywords from the flatfile if it is a old flatfile
87 # combine the keyword columns into a single one
88 # after that check the size of the keywords
89 # because the maximum for amazon is 250byte
90 # if('generic_keywords1' in headers):
91 # if(row['generic_keywords1']):
92 # keywords = ''
93 # try:
94 # keywords = str(row['generic_keywords1'] + '' +
95 # row['generic_keywords2'] + '' +
96 # row['generic_keywords3'] + '' +
97 # row['generic_keywords4'] + '' +
98 # row['generic_keywords5'])
99 # except Exception as err:
100 # print(err)
101 # print("The combination of the keywords failed!")
102 keywords = ''
103 if(row['generic_keywords']):
104 keywords = row[ 'generic_keywords' ]
105
106 if(not(keywords)):
107 raise variation_upload.EmptyFieldWarning('generic_keywords')
108
109 try:
110 values = ['', row['item_sku'], row['package_length'] * 10,
111 row['package_width'] * 10,
112 row['package_height'] * 10,
113 row['package_weight'], row['item_name'],
114 '104', '', '62', row['brand_name'].upper(), '3',
115 row['feed_product_type'], '',
116 row['product_description'], keywords, 'de',
117 '', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 9, 1]
118
119 except KeyError:
120 raise KeyError
121 except Exception as err:
122 if(str(err).strip("'") == 'feed_product_type'):
123 raise WrongEncodingException("Wrong encoding for this script, please use UTF-8!")
124 print(err)
125 print('Error at the Values')
126 Data[row['item_sku']] = SortedDict(zip(column_names, values))
101 # SET KEYWORDS
102 keywords = ''
103 if(row['generic_keywords']):
104 keywords = row[ 'generic_keywords' ]
105
106 if(not(keywords)):
107 raise barcode.EmptyFieldWarning('generic_keywords')
108
109 # SET ATTRIBUTES
110 attributes = ''
111 if(row['parent_child'] == 'child'):
112 attributes = get_attributes(dataset=row, sets=color_size_sets)
113
114
115 try:
116 values = [
117 '', row['parent_sku'], row['item_sku'],
118 package_properties[ 'length' ] * 10, package_properties[ 'width' ] * 10,
119 package_properties[ 'height' ] * 10, package_properties[ 'weight' ],
120 row['item_name'], '104',
121 attributes,
122 '62', keywords,
123 row['brand_name'].upper(), '3',
124 input_data['name'], row['product_description'],
125 '', 'true', # externalID & active
126 '3', input_data['categories'],
127 input_data['categories'][0:2], input_data['categories'][0:2],
128 'Y', 'Y', # mandant
129 '', '', # barcode
130 'Y', 'Y', # marketconnection
131 'Y', 'Y', # marketconnection
132 'Y', 'Y', # marketconnection
133 '', '', # market & accout id amazonsku
134 '', '', # sku & parentsku amazonsku
135 '', '', '',# producttype & fba amazon
136 '','','','','',# prices
137 '', '', '', #asin
138 '', '', '', '', #image
139 '', '' # image
140 ]
141
142 except KeyError:
143 raise KeyError
144 print('Error at the Values')
145 Data[row['item_sku']] = SortedDict(zip(column_names, values))
127 146 except KeyError as err: except KeyError as err:
128 147 print("Error at : 'if(row['parent_child'] == 'parent'):'") print("Error at : 'if(row['parent_child'] == 'parent'):'")
129 148 return row['item_sku'] return row['item_sku']
 
... ... def itemUpload(flatfile, intern, folder):
135 154 try: try:
136 155 if(row['amazon_sku'] in [*Data]): if(row['amazon_sku'] in [*Data]):
137 156 Data[row['amazon_sku']]['ItemID'] = row['article_id'] Data[row['amazon_sku']]['ItemID'] = row['article_id']
138 Data[row['amazon_sku']]['PrimaryVariationExternalID'] = row['full_number']
157 Data[row['amazon_sku']]['ExternalID'] = row['full_number']
139 158 except KeyError as keyerr: except KeyError as keyerr:
140 159 print(keyerr) print(keyerr)
141 160 print("Keyerror at the Intern Number addition") print("Keyerror at the Intern Number addition")
142 161
162 # Include the barcodes & asin
163 barcode_data = barcode.barcode_Upload(flatfile, stocklist)
164
165 for row in barcode_data:
166 try:
167 if(row in [*Data]):
168 Data[row]['EAN_Barcode'] = barcode_data[row]['EAN_Barcode']
169 Data[row]['FNSKU_Barcode'] = barcode_data[row]['FNSKU_Barcode']
170 Data[row]['ASIN-countrycode'] = barcode_data[row]['ASIN-countrycode']
171 Data[row]['ASIN-type'] = barcode_data[row]['ASIN-type']
172 Data[row]['ASIN-value'] = barcode_data[row]['ASIN-value']
173 except Exception as err:
174 print("ERROR @ Barcode Part for {0}.\n{1}.\n".format(row, err))
175
176 # Include the amazonsku
177 sku_data = amazon_data_upload.amazonSkuUpload(flatfile)
178
179 for row in sku_data:
180 try:
181 if(row in [*Data]):
182 Data[row]['marketid'] = sku_data[row]['MarketID']
183 Data[row]['accountid'] = sku_data[row]['MarketAccountID']
184 Data[row]['amazon_sku'] = sku_data[row]['SKU']
185 Data[row]['amazon_parentsku'] = sku_data[row]['ParentSKU']
186 except Exception as err:
187 print("ERROR @ SKU Part for {0}.\n{1}.\n".format(row, err))
188
189 # Include the amazonsku
190 ama_data = amazon_data_upload.amazonDataUpload(flatfile)
191
192 for row in ama_data:
193 try:
194 if(row in [*Data]):
195 Data[row]['amazon-producttype'] = ama_data[row]['ItemAmazonProductType']
196 Data[row]['fba-enabled'] = ama_data[row]['ItemAmazonFBA']
197 Data[row]['fba-shipping'] = ama_data[row]['ItemShippingWithAmazonFBA']
198 except Exception as err:
199 print("ERROR @ AmazonData Part for {0}.\n{1}.\n".format(row, err))
200
201 # Include the price
202 price_data = price_upload.priceUpload(flatfile)
203
204 for row in price_data:
205 try:
206 if(row in [*Data]):
207 Data[row]['price-price'] = price_data[row]['price']
208 Data[row]['ebay-price'] = price_data[row]['ebay']
209 Data[row]['amazon-price'] = price_data[row]['amazon']
210 Data[row]['webshop-price'] = price_data[row]['webshop']
211 Data[row]['etsy-price'] = price_data[row]['etsy']
212 except Exception as err:
213 print("ERROR @ Price Part for {0}.\n{1}.\n".format(row, err))
214
215 # Include the images
216 image_data = image_upload.imageUpload(flatfile, attributefile)
217
218 for index, row in enumerate( image_data ):
219 try:
220 if(row in [*Data]):
221 Data[row]['Multi-URL'] = image_data[row]['Multi-URL']
222 Data[row]['connect-variation'] = image_data[row]['connect-variation']
223 Data[row]['mandant'] = image_data[row]['mandant']
224 Data[row]['availability'] = image_data[row]['availability']
225 Data[row]['listing'] = image_data[row]['listing']
226 Data[row]['connect-color'] = image_data[row]['connect-color']
227 except Exception as err:
228 print("ERROR @ Image Part for {0}.\n{1}.\n".format(row, err))
229
230
143 231 # Write Data into new CSV for Upload # Write Data into new CSV for Upload
144 232 # OUTPUT # OUTPUT
145 233 # -------------------------------------------------------------- # --------------------------------------------------------------
146 234
147 variation_upload.writeCSV(Data, "item", column_names, folder)
235 barcode.writeCSV(Data, "item", column_names, folder)
148 236
149 def itemPropertyUpload(flatfile, export, folder):
237 def itemPropertyUpload(flatfile, folder):
150 238
151 239 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item: with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
152 240 reader = csv.DictReader(item, delimiter=';', lineterminator='\n') reader = csv.DictReader(item, delimiter=';', lineterminator='\n')
 
... ... def itemPropertyUpload(flatfile, export, folder):
205 293
206 294 properties[row['item_sku']] = dict(zip(property_names, values)) properties[row['item_sku']] = dict(zip(property_names, values))
207 295
208 with open(export['path'], mode='r', encoding=export['encoding']) as item:
209 reader = csv.DictReader(item, delimiter=';', lineterminator='\n')
296 column_names = ['SKU', 'ID-property', 'Value', 'Lang']
297 Data = {}
298 for index, row in enumerate( properties ):
299 for prop in property_id:
300 values = [row, property_id[prop], properties[row][prop], 'DE']
301
302 Data[row + prop] = dict(zip(column_names, values))
210 303
211 column_names = ['PropertyItemID', 'ItemID', 'PrimaryVariationCustomNumber',
212 'Lang', 'Value']
304 barcode.writeCSV(Data, "Item_Merkmale", column_names, folder)
213 305
214 Data = {}
306 def get_properties(flatfile):
307
308 properties = {'length':0,
309 'width':0,
310 'height':0,
311 'weight':0}
312
313 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
314 reader = csv.DictReader(item, delimiter=";")
315
316 # Get the package properties from one of the childs or parent
215 317 for row in reader: for row in reader:
216 if(row['AttributeValueSetID'] == ''):
217 if(row['VariationNumber'] in [*properties]):
218 for number, key in enumerate( properties[row['VariationNumber']] ):
219 if(properties[row['VariationNumber']][key]):
220 values = [
221 property_id[key],
222 row['ItemID'],
223 row['VariationName'],
224 'de',
225 properties[row['VariationNumber']][key]
226 ]
227
228 Data[row['VariationNumber'] + str( number )] = dict(zip(column_names,
229 values))
230 variation_upload.writeCSV(Data, "Item_Merkmale", column_names, folder)
318 try:
319 if(row['package_height'] and
320 row['package_length'] and
321 row['package_width'] and
322 row['package_weight'] and
323 not(properties[ 'height' ])):
324
325 properties[ 'height' ] = int(row['package_height'])
326 properties[ 'length' ] = int(row['package_length'])
327 properties[ 'width' ] = int(row['package_width'])
328 properties[ 'weight' ] = int(row['package_weight'])
329 elif(properties[ 'height' ]):
330 break
331
332 # if the number is a floating point number it has to be
333 # transformed into a float first befor the integer conversion
334 except ValueError as err:
335 properties[ 'height' ] = int(float(row['package_height']))
336 properties[ 'length' ] = int(float(row['package_length']))
337 properties[ 'width' ] = int(float(row['package_width']))
338 properties[ 'weight' ] = int(float(row['package_weight']))
339
340 except ValueError as err:
341 print(err)
342 print("/nPlease copy the values for height, length, width",
343 "and weight\nfrom the children to the parent",
344 "variation in the flatfile.\n")
345 exit()
346
347 return properties
348
349 def get_attributes(dataset, sets):
350
351 output_string = ''
352 if(len(sets[dataset['parent_sku']]['color']) > 1):
353 output_string = 'color_name:' + dataset['color_name']
354 if(len(sets[dataset['parent_sku']]['size']) > 1):
355 if(not(output_string)):
356 output_string = 'size_name:' + dataset['size_name']
357 else:
358 output_string = output_string + ';size_name:' + dataset['size_name']
359 return output_string
360
361 def find_similar_attr(flatfile):
362
363 Data = {}
231 364
365 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
366 reader = csv.DictReader(item, delimiter=";")
367
368 for row in reader:
369 # If it is a parent, create a new dictionary with 2 sets for color and size
370 if(row['parent_child'] == 'parent'):
371 color = set()
372 size = set()
373 Data[row['item_sku']] = {'color':color, 'size':size}
374 # If it is a child, search through the Data dictionary for a match
375 if(row['parent_child'] == 'child'):
376 for line in Data:
377 if(row['parent_sku'] == line):
378 Data[row['parent_sku']]['color'].add(row['color_name'])
379 Data[row['parent_sku']]['size'].add(row['size_name'])
380
381 return Data
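For illustration (not part of this commit): a minimal sketch of how the new find_similar_attr and get_attributes helpers are presumably combined. The flatfile dictionary follows the {'path': ..., 'encoding': ...} convention used throughout the project; the file name is hypothetical.

    import csv
    from packages.item_upload import find_similar_attr, get_attributes

    flatfile = {'path': 'flatfile.csv', 'encoding': 'utf-8'}  # hypothetical file
    attribute_sets = find_similar_attr(flatfile)

    with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
        reader = csv.DictReader(item, delimiter=';')
        for row in reader:
            if row['parent_child'] == 'child':
                # Only dimensions in which the parent actually varies end up in
                # the attribute string, e.g. 'color_name:blue;size_name:M'.
                print(row['item_sku'], get_attributes(row, attribute_sets))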
File packages/log_files.py changed (mode: 100644) (index b081199..6e76dff)
1 1 import os import os
2 import sys
2 3 import inspect import inspect
3 4 import logging import logging
4 5
 
... ... def function_logger(path, file_level, console_level = None):
24 25
25 26 def fileNotFoundLog(log_path, step_number, step_desc, file_name): def fileNotFoundLog(log_path, step_number, step_desc, file_name):
26 27 fileNotFoundLogger = function_logger(log_path, logging.ERROR, logging.ERROR) fileNotFoundLogger = function_logger(log_path, logging.ERROR, logging.ERROR)
27 fileNotFoundLogger.error("ERROR : The required file {2} was not found at Step {0} : {1}"
28 .format(step_number, step_desc, file_name))
28 fileNotFoundLogger.error("ERROR : The required file {2} was not found at Step {0} line no: {3} : {1}"
29 .format(step_number, step_desc, file_name, sys.exc_info()[2].tb_lineno))
29 30
30 31 def keyErrorLog(log_path, step_number, step_desc, key_name, file_name): def keyErrorLog(log_path, step_number, step_desc, key_name, file_name):
31 32 keyErrorLogger = function_logger(log_path, logging.ERROR, logging.ERROR) keyErrorLogger = function_logger(log_path, logging.ERROR, logging.ERROR)
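A note on the new log format (not part of this commit): sys.exc_info()[2].tb_lineno is only populated while an exception is being handled, so fileNotFoundLog has to be called from inside an except block, as product_import.py does. A minimal sketch, with a made-up path and step:

    from packages.log_files import fileNotFoundLog

    try:
        open('missing_file.csv')  # hypothetical path that does not exist
    except OSError as err:
        # Inside the except block sys.exc_info() carries the traceback,
        # so the logged line number points at the failing open() call.
        fileNotFoundLog(log_path='Upload/log', step_number=3,
                        step_desc='import-flatfile', file_name=err)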
File packages/price_upload.py added (mode: 100644) (index 0000000..b2502c1)
1 from csv import DictReader
2 try:
3 from sortedcontainers import SortedDict
4 except ImportError:
5 print("the sortedcontainers module is required to run this program.")
6 raise ImportError
7
8
9 def priceUpload(flatfile):
10 # The column header names
11 column_names = ['price', 'ebay', 'amazon', 'webshop', 'etsy']
12
13 prices = {
14 'price':{'id':'1', 'value':''},
15 'ebay':{'id':'3', 'value':''},
16 'amazon':{'id':'4', 'value':''},
17 'webshop':{'id':'5', 'value':''},
18 'etsy':{'id':'6', 'value':''}
19 }
20
21 # create a Data Dictionary and fill it with the necessary values from the
22 # flatfile
23 Data = SortedDict()
24
25 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
26 reader = DictReader(item, delimiter=";")
27 for row in reader:
28 # Make sure that there is a price even for parents
29 if(not( row['standard_price'] )):
30 print("row:{0} doesnt have a price!".format(row['item_sku']))
31 for scndrow in reader:
32 if(row['parent_child'] == 'parent'):
33 if(scndrow['parent_child'] == 'child' and scndrow['standard_price'] and row['item_sku'] == scndrow['parent_sku']):
34 row['standard_price'] = scndrow['standard_price']
35 break
36 elif(row['parent_child'] == 'child'):
37 if(scndrow['parent_child'] == 'child' and scndrow['standard_price'] and row['parent_sku'] == scndrow['parent_sku']):
38 row['standard_price'] = scndrow['standard_price']
39 break
40 if(row['standard_price']):
41 for price in prices:
42 if(prices[ price ]['id'] == '3'):
43 # Ebay price calculation
44 prices[ price ]['value'] = ( int( round( float( row['standard_price'] ) - (float( row['standard_price'] ) * 0.10) ) ) - 0.05 )
45 elif(prices[ price ]['id'] == '5'):
46 # Webshop price calculation
47 prices[ price ]['value'] = ( int( round( float( row['standard_price'] ) - (float( row['standard_price'] ) * 0.16666666) ) ) - 0.05 )
48 elif(prices[ price ]['id'] == '6'):
49 # Etsy price calculation
50 prices[ price ]['value'] = ( int( round( float( row['standard_price'] ) + (float( row['standard_price'] ) * 0.1) ) ) - 0.15 )
51 else:
52 prices[ price ]['value'] = row['standard_price']
53 values = [prices['price']['value'], prices['ebay']['value'],
54 prices['amazon']['value'], prices['webshop']['value'],
55 prices['etsy']['value']]
56
57 Data[row['item_sku']] = SortedDict(zip(column_names, values))
58 else:
59 print("{0} doesn't have a price!\n".format(row['item_sku']))
60
61 return Data
62
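For illustration (not part of this commit): priceUpload returns a SortedDict keyed by SKU, with one calculated value per sales price. A minimal, hypothetical way to inspect the result; in the project itself the dictionary is presumably handed on to the shared CSV writer:

    from packages.price_upload import priceUpload

    flatfile = {'path': 'flatfile.csv', 'encoding': 'utf-8'}  # hypothetical file
    prices = priceUpload(flatfile)
    for sku, row in prices.items():
        print(sku, row['price'], row['ebay'], row['webshop'], row['etsy'])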
File packages/stock_upload.py deleted (index 364838a..0000000)
1
2
3 from csv import DictReader, DictWriter
4 from os.path import isfile
5 from packages import variation_upload
6 try:
7 from sortedcontainers import SortedDict
8 except ImportError:
9 print("the sortedcontainers module is required to run this program.")
10 raise ImportError
11
12
13 def stockUpload(flatfile, stocklist, folder):
14
15 # The column header names
16 column_names = ['Barcode', 'LocationID', 'LocationName', 'Reordered',
17 'ReservedStock', 'Stock', 'WarehouseID']
18
19 # create a Data Dictionary and fill it with the necessary values from the
20 # flatfile
21 Data = SortedDict()
22
23 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
24 reader = DictReader(item, delimiter=";")
25 for row in reader:
26 if(row['external_product_id']):
27 values = [row['external_product_id'], 0, 'StandarWarenlager',
28 '', '', '', '104']
29 Data[row['item_sku']] = SortedDict(zip(column_names, values))
30
31 with open(stocklist['path'], mode='r', encoding=stocklist['encoding']) as item:
32 reader = DictReader(item, delimiter=";")
33 for row in reader:
34 if(row['MASTER'] and row['MASTER'] in [*Data]):
35 Data[row['MASTER']]['Stock'] = row['BADEL 26.12.16']
36
37 output_path = variation_upload.writeCSV(Data, 'stock', column_names, folder)
38
39
40 def priceUpload(flatfile, export, folder):
41 # The column header names
42 column_names = ['VariationID', 'IsNet', 'VariationPrice', 'SalesPriceID']
43
44 prices = {
45 'price':{'id':'1', 'value':''},
46 'ebay':{'id':'3', 'value':''},
47 'amazon':{'id':'4', 'value':''},
48 'webshop':{'id':'5', 'value':''}
49 }
50
51 # create a Data Dictionary and fill it with the necessary values from the
52 # flatfile
53 Data = SortedDict()
54
55 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
56 reader = DictReader(item, delimiter=";")
57 for row in reader:
58 # Make sure that there is price even at parents
59 if(not( row['standard_price'] )):
60 print("row:{0} doesnt have a price!".format(row['item_sku']))
61 for scndrow in reader:
62 if(row['parent_child'] == 'parent'):
63 if(scndrow['parent_child'] == 'child' and scndrow['standard_price'] and row['item_sku'] == scndrow['parent_sku']):
64 row['standard_price'] = scndrow['standard_price']
65 break
66 elif(row['parent_child'] == 'child'):
67 if(scndrow['parent_child'] == 'child' and scndrow['standard_price'] and row['parent_sku'] == scndrow['parent_sku']):
68 row['standard_price'] = scndrow['standard_price']
69 break
70 if(row['standard_price']):
71 for price in prices:
72 if(prices[ price ]['id'] == '3'):
73 # Ebay price calculation
74 prices[ price ]['value'] = ( int( round( float( row['standard_price'] ) - (float( row['standard_price'] ) * 0.10) ) ) - 0.05 )
75 if(prices[ price ]['id'] == '5'):
76 # Webshop price calculation
77 prices[ price ]['value'] = ( int( round( float( row['standard_price'] ) - (float( row['standard_price'] ) * 0.16666666) ) ) - 0.05 )
78 else:
79 prices[ price ]['value'] = row['standard_price']
80 values = ['', 0, prices[ price ]['value'], prices[ price ]['id']]
81
82 print("row:{0}, id:{1}, value:{2}".format(row['item_sku'], prices[price]['id'], prices[price]['value']))
83 Data[row['item_sku'] + '_' + price] = SortedDict(zip(column_names, values))
84 else:
85 print("{0} doesn't have a price!\n".format(row['item_sku']))
86
87 with open(export['path'], mode='r', encoding=export['encoding']) as item:
88 reader = DictReader(item, delimiter=";")
89 for row in reader:
90 for price in prices:
91 if((row['VariationNumber'] + '_' + price) in [*Data]):
92 Data[row['VariationNumber'] + '_' + price]['VariationID'] = row['VariationID']
93
94 # Open the flatfile again to check for items that didn't have variationID
95 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
96 reader = DictReader(item,delimiter=';')
97 for row in reader:
98 for price in prices:
99 sku_price = row['item_sku'] + '_' + price
100 if(( sku_price ) in [*Data]):
101 if(not( Data[sku_price]['VariationID'] )):
102 print("sku: {0} has no VariationID!".format(row['item_sku']))
103 eraseKey(Data, sku_price)
104
105 output_path = variation_upload.writeCSV(Data, 'SalesPriceVariation', column_names, folder)
106
107 def eraseKey(dictionary, key):
108 if isinstance(dictionary, dict):
109 if key in dictionary:
110 dictionary.pop(key)
111 else:
112 print("Deletion failed {0} was not in the dictionary".format(key))
113 else:
114 print("Deletion failed for {0}".format(key))
File packages/variation_upload.py deleted (index 6629c04..0000000)
1 import csv
2 import re
3 from os.path import isfile
4 import sys
5 from tkinter.filedialog import askdirectory
6 import os
7 try:
8 from sortedcontainers import SortedDict
9 except ImportError:
10 print("the sortedcontainers module is required to run this program.")
11 raise ImportError
12
13 class EmptyFieldWarning(Exception):
14 def __init__(self, errorargs):
15 Exception.__init__(self, "Following field/s are empty {0}".format(errorargs))
16 self.errorargs = errorargs
17
18 def writeCSV(dataobject, name, columns, upload_path):
19 '''Write Data into new CSV for Upload
20 OUTPUT
21 '''
22 '''
23 uploadpath = os.getcwd() + '/Upload'
24 if not os.path.exists(uploadpath):
25 print("=#="*10 + '\n')
26 printf("Please choose a folder for the Upload files\n")
27 print("=#="*10 + '\n')
28 uploadpath = askdirectory(title="Choose a folder for the Upload files!")
29 '''
30
31 output_path_number = 1
32 datatype = ".csv"
33 output_name = "/" + name + "_upload_" + str(output_path_number) + datatype
34 output_path = upload_path + output_name
35
36 while(isfile(output_path)):
37 output_path_number = int(output_path_number) + 1
38 output_name = "/" + name + "_upload_" + str(output_path_number) + datatype
39 output_path = upload_path + output_name
40
41 with open(output_path, mode='a') as item:
42 writer = csv.DictWriter(item, delimiter=";", fieldnames=columns, lineterminator='\n')
43 writer.writeheader()
44 for row in dataobject:
45 writer.writerow(dataobject[row])
46
47 if(isfile(output_path)):
48 print("Upload file successfully created under {0}".format(output_path))
49
50 return output_path
51
52
53 def variationUpload(flatfile, intern_number, folder):
54
55 # The column header names
56 names = ['ItemID', 'VariationID', 'VariationNumber', 'VariationName', 'Position',
57 'LengthMM', 'WidthMM', 'HeightMM', 'WeightG', 'VariationAttributes',
58 'PurchasePrice', 'MainWarehouse', 'Availability', 'AutoStockVisible',
59 'ExternalID']
60
61 # get the amount of different sizes to exclude adding the size if there is only a single one as attribute.
62 number_sizes = numberOfSizes(flatfile)
63
64 # create a Data Dictionary and fill it with the necessary values from the flatfile
65 Data = SortedDict()
66 item_name = ''
67
68 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
69 reader = csv.DictReader(item, delimiter=";")
70 for row in reader:
71 if(row['parent_child'] == 'child'):
72 pack_height = 0
73 pack_length = 0
74 pack_width = 0
75 pack_weight = 0
76 attributes = ''
77
78 try:
79 if(row['package_height'] and
80 row['package_length'] and
81 row['package_width']):
82
83 pack_height = int(row['package_height'])
84 pack_length = int(row['package_length'])
85 pack_width = int(row['package_width'])
86 except ValueError as err:
87 pack_height = int(float(row['package_height']))
88 pack_length = int(float(row['package_length']))
89 pack_width = int(float(row['package_width']))
90 except ValueError:
91 print("\nYour file doesn't include the proportions of the item\n\
92 at the parent, please add them later manually.\n")
93
94 if(row['package_weight']):
95 pack_weight = int(row['package_weight'])
96
97 if(not(pack_width or pack_length or pack_height or pack_weight)):
98 raise EmptyFieldWarning('package properties')
99
100 if(row['color_name']):
101 attributes = 'color_name:' + row['color_name']
102
103 if(row['size_name'] and number_sizes > 1):
104 attributes += ';size_name:' + row['size_name']
105
106 if(not(attributes)):
107 raise EmptyFieldWarning('plentymarkets attributes')
108
109 try:
110 values = ['', '', row['item_sku'], row['item_name'], '',
111 pack_length * 10,
112 pack_width * 10,
113 pack_height * 10,
114 pack_weight, attributes,
115 row['standard_price'], 'Badel', 'Y', 'Y', '']
116 except Exception as err:
117 print(err)
118 Data[row['item_sku']] = SortedDict(zip(names, values))
119
120 # open the intern numbers csv and fill in the remaining missing fields by using the
121 # item_sku as dict key
122 with open(intern_number['path'], mode='r', encoding=intern_number['encoding']) as item:
123 reader = csv.DictReader(item, delimiter=';')
124 for row in reader:
125 # check if the sku is within the keys of the Data Dictionary
126 if(row['amazon_sku'] in [*Data]):
127 Data[row['amazon_sku']]['ItemID'] = row['article_id']
128 if(not(row['position'] == 0)):
129 Data[row['amazon_sku']]['Position'] = row['position']
130 Data[row['amazon_sku']]['ExternalID'] = row['full_number']
131
132 output_path = writeCSV(Data, 'variation', names, folder)
133
134 return output_path
135
136
137 def setActive(flatfile, export, folder):
138 # because of a regulation of the plentyMarkets system the active status has to be
139 # delivered as an extra upload
140 column_names = ['Active', 'VariationID']
141 Data = {}
142 # open the flatfile to get the sku names
143 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
144 reader = csv.DictReader(item, delimiter=';')
145
146 for row in reader:
147 values = ['Y', '']
148 Data[row['item_sku']] = dict(zip(column_names, values))
149
150 with open(export['path'], mode='r', encoding=export['encoding']) as item:
151 reader = csv.DictReader(item, delimiter=';')
152 wrong_delimiter = False
153
154 for row in reader:
155 if( '\t' in [*row] ):
156 wrong_delimiter = True
157 break
158 if(row['VariationNumber'] in [*Data]):
159 Data[row['VariationNumber']]['VariationID'] = row['VariationID']
160
161 if(wrong_delimiter):
162 reader = csv.DictReader(item, delimiter='\t')
163
164 for row in reader:
165 if(row['VariationNumber' in [*Data]]):
166 Data[row['VariationNumber']]['VariationID'] = row['VariationID']
167
168 output_path = writeCSV(Data, 'SetActive', column_names, folder)
169
170
171 def EANUpload(flatfile, export, stocklist, folder):
172 # open the flatfile get the ean for an sku and save it into a dictionary with
173 # columnheaders of the plentymarket dataformat
174
175 column_names = ['BarcodeID', 'BarcodeName', 'BarcodeType',
176 'Code', 'VariationID', 'VariationNumber']
177
178 barcode_types = {'EAN' : {'id' : 1, 'name' : 'EAN', 'type' : 'GTIN_13'},
179 'FNSKU' : {'id' : 5, 'name' : 'FNSKU', 'type' : 'EAN_13'}}
180 Data = {}
181 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
182 reader = csv.DictReader(item, delimiter=";")
183
184 for row in reader:
185 if(row['parent_child'] == 'child'):
186 for barcode in barcode_types:
187 # Set code to an empty String if the barcode type matches EAN set it to to
188 # the external_product_id
189 code = ''
190 if(barcode == 'EAN'):
191 code = row['external_product_id']
192
193 if(not(barcode)):
194 raise EmptyFieldWarning('barcode(EAN)')
195 values = [
196 barcode_types[barcode]['id'], barcode_types[barcode]['name'],
197 barcode_types[barcode]['type'], code,
198 '', row['item_sku']
199 ]
200 Data[row['item_sku'] + barcode] = dict(zip(column_names, values))
201
202 # open the exported file to get the variation id
203 with open(export['path'], mode='r', encoding=export['encoding']) as item:
204 reader = csv.DictReader(item, delimiter=";")
205
206 for row in reader:
207 for barcode in barcode_types:
208 if(row['VariationNumber'] + barcode in [*Data]):
209 Data[row['VariationNumber'] + barcode]['VariationID'] = row['VariationID']
210
211 with open(stocklist['path'], mode='r', encoding=stocklist['encoding']) as item:
212 reader = csv.DictReader(item, delimiter=";")
213
214 for row in reader:
215 for barcode in barcode_types:
216 if(row['MASTER'] + barcode in [*Data]):
217 # Set code to an empty String if the barcode type matches FNSKU set it to to
218 # the external_product_id
219 code = ''
220 if(barcode == 'FNSKU'):
221 code = row['fnsku']
222
223 if(code):
224 Data[row['MASTER'] + barcode]['Code'] = code
225
226 output_path = writeCSV(Data, 'VariationBarcode', column_names, folder)
227
228
229 def marketConnection(export, folder, ebay=0, amazon=0, shop=0):
230 # Enable marketconnection of items and variations by entering 1 for True
231 # and 0 for False
232
233 column_names = ['VariationID', 'VariationCustomNumber',
234 'webApi', 'AmazonFBAGermany', 'AmazonFBA', 'eBayGermany', 'Ebay', 'MandantShop']
235
236 Data = {}
237 with open(export['path'], mode='r', encoding=export['encoding']) as item:
238 reader = csv.DictReader(item, delimiter=';')
239
240 for row in reader:
241 if row['VariationID'] and row['VariationNumber']:
242 values = [row['VariationID'], row['VariationNumber'],
243 '1', amazon, amazon, ebay, ebay, shop]
244 Data[row['VariationNumber']] = dict(zip(column_names, values))
245
246
247 output_path = writeCSV(Data, 'marketConnection', column_names, folder)
248
249 def numberOfSizes(flatfile):
250 # open the flatfile and read the size of each variation, put all of them in a set
251 # and return the size of the set.
252
253 length_set = 0
254 sizeset = set()
255
256 with open(flatfile['path'], mode='r', encoding=flatfile['encoding']) as item:
257 reader = csv.DictReader(item, delimiter=';')
258
259 for row in reader:
260 sizeset.add(row['size_name'])
261
262 sizeset.discard('')
263
264 print("lenght of set {0}, content of set {1}".format(len(sizeset), sizeset))
265
266 return len(sizeset)
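The writeCSV helper removed here is still the pattern the new code relies on (see the barcode.writeCSV call in item_upload.py above): it takes the first free <name>_upload_<n>.csv file name so repeated runs never overwrite an earlier upload file. A minimal standalone sketch of that idea; the function name is made up and not the project's API:

    import csv
    from os.path import isfile

    def write_numbered_csv(dataobject, name, columns, upload_path):
        number = 1
        output_path = "{0}/{1}_upload_{2}.csv".format(upload_path, name, number)
        # keep counting up until a free file name is found
        while isfile(output_path):
            number += 1
            output_path = "{0}/{1}_upload_{2}.csv".format(upload_path, name, number)
        with open(output_path, mode='w') as item:
            writer = csv.DictWriter(item, delimiter=';', fieldnames=columns,
                                    lineterminator='\n')
            writer.writeheader()
            for key in dataobject:
                writer.writerow(dataobject[key])
        return output_path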
File product_import.py changed (mode: 100644) (index 7b3db9a..421fc25)
... ... import sys
4 4 import platform import platform
5 5 import os import os
6 6 import ntpath import ntpath
7 from packages.item_upload import itemUpload, itemPropertyUpload, WrongEncodingException, check_encoding
8 from packages.variation_upload import variationUpload, setActive, EANUpload, marketConnection, EmptyFieldWarning
9 from packages.stock_upload import priceUpload
10 from packages.amazon_data_upload import amazonSkuUpload, amazonDataUpload, asinUpload, featureUpload
11 from packages.image_upload import imageUpload
7 from packages.item_upload import itemUpload, WrongEncodingException, check_encoding, check_flatfile, itemPropertyUpload
8 from packages.barcode import EmptyFieldWarning
9 from packages.amazon_data_upload import featureUpload
12 10 from packages.log_files import fileNotFoundLog, keyErrorLog, wrongEncodingLog, unboundLocalLog, emptyFieldWarningLog from packages.log_files import fileNotFoundLog, keyErrorLog, wrongEncodingLog, unboundLocalLog, emptyFieldWarningLog
13 11 from packages.gui.category_chooser import CategoryChooser from packages.gui.category_chooser import CategoryChooser
14 12 from packages.config import config_creation, config_read, config_write from packages.config import config_creation, config_read, config_write
 
... ... def main():
21 19 log_folder = '' log_folder = ''
22 20 recent_path = '' recent_path = ''
23 21 config_path = '' config_path = ''
22 attribute_date = ''
24 23 sheet = {'path':'', 'encoding':''} sheet = {'path':'', 'encoding':''}
25 24 intern_number = {'path':'', 'encoding':''} intern_number = {'path':'', 'encoding':''}
26 export = {'path':'', 'encoding':''}
27 25 stocklist = {'path':'', 'encoding':''} stocklist = {'path':'', 'encoding':''}
26 attributefile = {'path':'', 'encoding':''}
28 27 step = int(0) step = int(0)
29 28 fexc = '' fexc = ''
30 29 # Create a list of step names where every name fits to the index of a step number # Create a list of step names where every name fits to the index of a step number
31 30 step_name = ['environment-creation', step_name = ['environment-creation',
31 'config-creation',
32 'config-reading',
32 33 'import-flatfile', 'import-flatfile',
34 'GUI',
33 35 'import-internlist', 'import-internlist',
36 'import-stocklist',
34 37 'item-upload', 'item-upload',
35 'variation-upload',
36 'import-export',
37 38 'feature_upload', 'feature_upload',
38 'active_upload',
39 'property_upload',
40 'price_upload',
41 'import_stocklist',
42 'Barcode_upload',
43 'SKU_Upload',
44 'AmazonData_Upload',
45 'ASIN_upload',
46 'Image_upload',
47 'Marketconnection_upload']
39 'property_upload'
40 ]
48 41
49 42 # define the features for plentymarkets # define the features for plentymarkets
50 43 features = { features = {
51 44 'color_map':1, 'color_map':1,
52 45 'model':4, 'model':4,
53 'item_name':13,
54 46 'sleeve_type':8, 'sleeve_type':8,
55 47 'pattern_type':11, 'pattern_type':11,
56 48 'collar_style':12, 'collar_style':12,
49 'item_name':13,
57 50 'closure_type':14, 'closure_type':14,
58 51 'style_name':15, 'style_name':15,
59 52 'care_instructions':16, 'care_instructions':16,
 
... ... def main():
72 65 else: else:
73 66 initial_directory = '/home/' + os.getlogin() initial_directory = '/home/' + os.getlogin()
74 67
68 step += 1
69 # CONFIG CREATION
75 70 # Create a config if there is none # Create a config if there is none
76 71 config_path = config_creation() config_path = config_creation()
77 72
73 step += 1
74 # CONFIG READING
78 75 # Get the Upload and data folder from the config if possible # Get the Upload and data folder from the config if possible
79 76 if(config_read(config_path)['upload_folder']): if(config_read(config_path)['upload_folder']):
80 77 upload_folder = config_read(config_path)['upload_folder'] upload_folder = config_read(config_path)['upload_folder']
81 78 if(config_read(config_path)['data_folder']): if(config_read(config_path)['data_folder']):
82 79 recent_path = config_read(config_path)['data_folder'] recent_path = config_read(config_path)['data_folder']
80 if(config_read(config_path)['attribute_file']):
81 attributefile['path'] = config_read(config_path)['attribute_file']
82 attributefile = check_encoding(attributefile)
83 if(config_read(config_path)['file_change_date']):
84 attribute_date = config_read(config_path)['file_change_date']
83 85 if(not(recent_path)): if(not(recent_path)):
84 86 recent_path = tkinter.filedialog.askdirectory(initialdir=initial_directory, recent_path = tkinter.filedialog.askdirectory(initialdir=initial_directory,
85 87 title="Choose a folder from where to upload") title="Choose a folder from where to upload")
88 # END of CONFIG READING
89
90 # Import Flatfile
91 step += 1
92 sheet['path'] = askopenfilename(initialdir=recent_path,
93 title="Amazon Flatfile as .csv",
94 filetypes=[ ("csv files", "*.csv") ])
95
96 sheet = check_encoding(sheet)
97
98 # Check if the file was loaded properly and got the correct format
99 try:
100 if(not( check_flatfile(sheet) )):
101 print('Please fix the flatfile and try again.\n')
102 print('Press Enter to continue...')
103 input()
104 sys.exit()
105 except OSError as err:
106 print('flatfile not found\n')
107 print('Press Enter to continue...')
108 input()
109 sys.exit()
86 110
111 step += 1
112 # GUI
87 113 # Ask the user for input inside a gui asking for categories and name # Ask the user for input inside a gui asking for categories and name
88 cchooser = CategoryChooser(None, upload_folder)
114 cchooser = CategoryChooser(None, upload_folder, sheet, attributefile, attribute_date)
89 115 cchooser.title("Choose the category and name") cchooser.title("Choose the category and name")
90 116 LOOP_ACTIVE = True LOOP_ACTIVE = True
91 117 while (LOOP_ACTIVE): while (LOOP_ACTIVE):
 
... ... def main():
94 120 if(cchooser.data['name'] and cchooser.data['categories']): if(cchooser.data['name'] and cchooser.data['categories']):
95 121 LOOP_ACTIVE = False LOOP_ACTIVE = False
96 122 cchooser.destroy() cchooser.destroy()
123 # END GUI
97 124
98 125 user_data = cchooser.data user_data = cchooser.data
99 if(cchooser.newpath):
100 config_update = {'row1':{ 'title': 'upload_folder', 'path': cchooser.newpath },
101 'row2':{ 'title':'data_folder', 'path':recent_path }}
102 print(config_update)
126 # Writing the changes into the config for the next start of the script
127 if(cchooser.newpath['upload-path'] and cchooser.newpath['attribute-path']):
128 config_update = {'row1':{ 'title': 'upload_folder', 'value': cchooser.newpath['upload-path'] },
129 'row2':{ 'title':'data_folder', 'value':recent_path },
130 'row3':{ 'title':'attribute_file', 'value': cchooser.newpath['attribute-path'] },
131 'row4':{ 'title':'file_change_date', 'value':cchooser.atrdate }}
132 try:
133 config_write(config_path, config_update)
134 except Exception as err:
135 print("ERROR: {0}\n".format(err))
136 upload_folder = cchooser.newpath['upload-path']
137 attributefile['path'] = cchooser.newpath['attribute-path']
138 attribute_date = cchooser.atrdate
139 elif(not( cchooser.newpath['upload-path'] ) and cchooser.newpath['attribute-path']):
140 config_update = {'row1':{ 'title': 'upload_folder', 'value': upload_folder },
141 'row2':{ 'title':'data_folder', 'value':recent_path },
142 'row3':{ 'title':'attribute_file', 'value': cchooser.newpath['attribute-path'] },
143 'row4':{ 'title':'file_change_date', 'value':cchooser.atrdate }}
144 try:
145 config_write(config_path, config_update)
146 except Exception as err:
147 print("ERROR: {0}\n".format(err))
148 attributefile['path'] = cchooser.newpath['attribute-path']
149 attribute_date = cchooser.atrdate
150 elif(cchooser.newpath['upload-path'] and not( cchooser.newpath['attribute-path'] )):
151 config_update = {'row1':{ 'title': 'upload_folder', 'value': cchooser.newpath['upload-path'] },
152 'row2':{ 'title':'data_folder', 'value':recent_path },
153 'row3':{ 'title':'attribute_file', 'value': attributefile['path'] },
154 'row4':{ 'title':'file_change_date', 'value': attribute_date }}
103 155 try: try:
104 156 config_write(config_path, config_update) config_write(config_path, config_update)
105 157 except Exception as err: except Exception as err:
106 158 print("ERROR: {0}\n".format(err)) print("ERROR: {0}\n".format(err))
107 159 upload_folder = cchooser.newpath upload_folder = cchooser.newpath
160 # END of Writing config part
108 161
109 162 if(user_data): if(user_data):
110 163 # Check if there is already a log folder within the upload folder # Check if there is already a log folder within the upload folder
 
... ... def main():
114 167 elif( os.path.exists(os.path.join(upload_folder, 'log')) ): elif( os.path.exists(os.path.join(upload_folder, 'log')) ):
115 168 log_folder = os.path.join(upload_folder, 'log') log_folder = os.path.join(upload_folder, 'log')
116 169
117 step += 1
118 sheet['path'] = askopenfilename(initialdir=recent_path,
119 title="Amazon Flatfile as .csv",
120 filetypes=[ ("csv files", "*.csv") ])
121
122 sheet = check_encoding(sheet)
123
124 170 step += 1 step += 1
125 171 intern_number['path'] = askopenfilename(initialdir=recent_path, intern_number['path'] = askopenfilename(initialdir=recent_path,
126 172 title="The Intern Numbers as .csv", title="The Intern Numbers as .csv",
 
... ... def main():
128 174
129 175 intern_number = check_encoding(intern_number) intern_number = check_encoding(intern_number)
130 176
177 step += 1
178 try:
179 stocklist['path'] = askopenfilename(initialdir=recent_path,
180 title="The Stockreport from Amazon as .csv",
181 filetypes=[ ("csv files", "*.csv") ])
182
183 stocklist = check_encoding(stocklist)
184 except OSError as fexc:
185 fileNotFoundLog(log_path=log_folder, step_number=step, step_desc=step_name[step], file_name=fexc)
186
131 187 step += 1 step += 1
132 188 try: try:
133 189 print("\nItem Upload\n") print("\nItem Upload\n")
134 itemUpload(sheet, intern_number, upload_folder)
190 itemUpload(sheet, intern_number, stocklist, attributefile, upload_folder, user_data)
135 191 except WrongEncodingException: except WrongEncodingException:
136 192 wrongEncodingLog(log_path=log_folder, step_number=step, step_desc=step_name[step], file_name="flatfile") wrongEncodingLog(log_path=log_folder, step_number=step, step_desc=step_name[step], file_name="flatfile")
137 193 except KeyError as kexc: except KeyError as kexc:
 
... ... def main():
146 202 emptyFieldWarningLog(log_path=log_folder, step_number=step, step_desc=step_name[step], field_name=eexc.errorargs, file_name=ntpath.basename(sheet)) emptyFieldWarningLog(log_path=log_folder, step_number=step, step_desc=step_name[step], field_name=eexc.errorargs, file_name=ntpath.basename(sheet))
147 203 except Exception as exc: except Exception as exc:
148 204 print("Item Upload failed!\n") print("Item Upload failed!\n")
149 print("Here: ", exc, '\n')
150 205 if(exc == 'item_sku'): if(exc == 'item_sku'):
151 206 print("It is very likely that you don't have the proper headers, use the english ones!\n") print("It is very likely that you don't have the proper headers, use the english ones!\n")
152 207 e = sys.exc_info() e = sys.exc_info()
 
... ... def main():
154 209 for element in e: for element in e:
155 210 print(element) print(element)
156 211
157 step += 1
158 212 try: try:
159 print("\nVariation Upload\n")
160 variationUpload(sheet, intern_number, upload_folder)
161 except KeyError as kexc:
162 keyErrorLog(log_path=log_folder, step_number=step, step_desc=step_name[step], key_name=kexc, file_name=ntpath.basename(sheet))
163 except UnboundLocalError as uexc:
164 unboundLocalLog(log_path=log_folder, step_number=step, step_desc=step_name[step], filename=ntpath.basename(sheet), variable_name=uexc.args)
165 except EmptyFieldWarning as eexc:
166 emptyFieldWarningLog(log_path=log_folder, step_number=step, step_desc=step_name[step], field_name=eexc.errorargs, file_name=ntpath.basename(sheet))
167 except Exception as exc:
168 print("VariationUpload failed!\n")
169 e = sys.exc_info()
170 print("Error @ FILE: {0}, LINE: {1}\n".format( e[2].tb_frame.f_code.co_filename, e[2].tb_lineno ))
171 for element in e:
172 print(element)
173
174 print("###########################################################")
175 print("\nUpload the files in plentymarkets, make sure that the categories are set because they are necessary for the active Upload.\n")
176
177 print('press ENTER to continue')
178 input()
179
180 print("\nGet a dataexport from the plentymarket site from the variation attributes, in order to access the current Variation ID.\n")
181
182 step += 1
183 try:
184 export['path'] = askopenfilename(initialdir=recent_path,
185 title="The Export File from Plentymarkets as .csv",
186 filetypes=[ ("csv files", "*.csv") ])
187
188 export = check_encoding(export)
189 except OSError as fexc:
190 fileNotFoundLog(log_path=log_folder, step_number=step, step_desc=step_name[step], file_name=fexc)
191 except Exception as exc:
192 print(exc)
193 print("Something went wrong at the Export file import!")
194
195 print("spreadsheet csv containing the export : ", export)
196
197 try:
198 print("Active, properties , features & price Upload")
199 step += 1
200 for name in features:
201 featureUpload(flatfile=sheet, feature=name, feature_id=features[name], folder=upload_folder)
202
213 print("Feature Upload")
203 214 step += 1 step += 1
204 setActive(sheet, export, upload_folder)
205 step += 1
206 itemPropertyUpload(sheet, export, upload_folder)
207 step += 1
208 priceUpload(sheet, export, upload_folder)
215 featureUpload(flatfile=sheet, features=features, folder=upload_folder)
209 216 except KeyError as kexc: except KeyError as kexc:
210 217 keyErrorLog(log_path=log_folder, step_number=step, step_desc=step_name[step], key_name=kexc, file_name=ntpath.basename(sheet)) keyErrorLog(log_path=log_folder, step_number=step, step_desc=step_name[step], key_name=kexc, file_name=ntpath.basename(sheet))
211 218 except UnboundLocalError as uexc: except UnboundLocalError as uexc:
 
... ... def main():
215 222 except OSError as err: except OSError as err:
216 223 print(err) print(err)
217 224 print("Missing Data, check if you have\n - a flatfile\n - a intern file table\n - export file from plentymarkets\n - a sheet with the stock numbers!\n") print("Missing Data, check if you have\n - a flatfile\n - a intern file table\n - export file from plentymarkets\n - a sheet with the stock numbers!\n")
218
219 print("\nOpen your amazon storage report and save it as an csv.\n")
220
221 step += 1
222 225 try: try:
223 stocklist['path'] = askopenfilename(initialdir=recent_path,
224 title="The Stockreport from Amazon as .csv",
225 filetypes=[ ("csv files", "*.csv") ])
226
227 stocklist = check_encoding(stocklist)
228 print("spreadsheet csv containing the FNSKU and ASIN : ", stocklist)
229 except OSError as fexc:
230 fileNotFoundLog(log_path=log_folder, step_number=step, step_desc=step_name[step], file_name=fexc)
231
232 step += 1
233 try:
234 EANUpload(sheet, export, stocklist, upload_folder)
235 except KeyError as kexc:
236 keyErrorLog(log_path=log_folder, step_number=step, step_desc=step_name[step], key_name=kexc, file_name=ntpath.basename(sheet))
237 except UnboundLocalError as uexc:
238 unboundLocalLog(log_path=log_folder, step_number=step, step_desc=step_name[step], filename=ntpath.basename(sheet), variable_name=uexc.args)
239
240 print("\nCreate a upload file for the SKU and Parent_SKU\nto connect existing items from amazon to plentyMarkets.\n")
241
242 step += 1
243 try:
244 amazonSkuUpload(sheet, export, upload_folder)
245 except KeyError as kexc:
246 keyErrorLog(log_path=log_folder, step_number=step, step_desc=step_name[step], key_name=kexc, file_name=ntpath.basename(sheet))
247 except UnboundLocalError as uexc:
248 unboundLocalLog(log_path=log_folder, step_number=step, step_desc=step_name[step], filename=ntpath.basename(sheet), variable_name=uexc.args)
249 except EmptyFieldWarning as eexc:
250 emptyFieldWarningLog(log_path=log_folder, step_number=step, step_desc=step_name[step], field_name=eexc.errorargs, file_name=ntpath.basename(sheet))
251
252 print("\nCreate a upload file for the additional Information to Amazon Products like bullet points, lifestyle etc.\n")
253
254 step += 1
255 try:
256 amazonDataUpload(sheet, export, upload_folder)
257 except KeyError as kexc:
258 keyErrorLog(log_path=log_folder, step_number=step, step_desc=step_name[step], key_name=kexc, file_name=ntpath.basename(sheet))
259 except UnboundLocalError as uexc:
260 unboundLocalLog(log_path=log_folder, step_number=step, step_desc=step_name[step], filename=ntpath.basename(sheet), variable_name=uexc.args)
261 except EmptyFieldWarning as eexc:
262 emptyFieldWarningLog(log_path=log_folder, step_number=step, step_desc=step_name[step], field_name=eexc.errorargs, file_name=ntpath.basename(sheet))
263
264 print("\nCollect the ASIN Numbers matching to the Variationnumber(Sku) and format them into the dataformat format.\n")
265
266 step += 1
267 try:
268 asinUpload(export, stocklist, upload_folder)
269 except KeyError as kexc:
270 keyErrorLog(log_path=log_folder, step_number=step, step_desc=step_name[step], key_name=kexc, file_name=ntpath.basename(sheet))
271 except UnboundLocalError as uexc:
272 unboundLocalLog(log_path=log_folder, step_number=step, step_desc=step_name[step], filename=ntpath.basename(sheet), variable_name=uexc.args)
273 except EmptyFieldWarning as eexc:
274 emptyFieldWarningLog(log_path=log_folder, step_number=step, step_desc=step_name[step], field_name=eexc.errorargs, file_name=ntpath.basename(sheet))
275
276 print("\nCollect the imagelinks from the flatfile, sorts them and assigns the variation ID.\n")
277
278 step += 1
279 try:
280 imageUpload(sheet, export, upload_folder)
281 except KeyError as kexc:
282 keyErrorLog(log_path=log_folder, step_number=step, step_desc=step_name[step], key_name=kexc, file_name=ntpath.basename(sheet))
283 except UnboundLocalError as uexc:
284 unboundLocalLog(log_path=log_folder, step_number=step, step_desc=step_name[step], filename=ntpath.basename(sheet), variable_name=uexc.args)
285 except Exception as err:
286 print(err)
287 print("Image Upload failed!")
288
289 print("\nActivate Marketconnection for Ebay & Amazon for all variation.\n")
290
291 step += 1
292 try:
293 marketConnection(export, upload_folder, ebay=1, amazon=1, shop=1)
226 print("Property Upload")
227 step += 1
228 itemPropertyUpload(flatfile=sheet, folder=upload_folder)
294 229 except KeyError as kexc: except KeyError as kexc:
295 230 keyErrorLog(log_path=log_folder, step_number=step, step_desc=step_name[step], key_name=kexc, file_name=ntpath.basename(sheet)) keyErrorLog(log_path=log_folder, step_number=step, step_desc=step_name[step], key_name=kexc, file_name=ntpath.basename(sheet))
296 231 except UnboundLocalError as uexc: except UnboundLocalError as uexc:
297 232 unboundLocalLog(log_path=log_folder, step_number=step, step_desc=step_name[step], filename=ntpath.basename(sheet), variable_name=uexc.args) unboundLocalLog(log_path=log_folder, step_number=step, step_desc=step_name[step], filename=ntpath.basename(sheet), variable_name=uexc.args)
298 233 except EmptyFieldWarning as eexc: except EmptyFieldWarning as eexc:
299 234 emptyFieldWarningLog(log_path=log_folder, step_number=step, step_desc=step_name[step], field_name=eexc.errorargs, file_name=ntpath.basename(sheet)) emptyFieldWarningLog(log_path=log_folder, step_number=step, step_desc=step_name[step], field_name=eexc.errorargs, file_name=ntpath.basename(sheet))
300 except Exception as err:
235 except OSError as err:
301 236 print(err) print(err)
302 print("Market connection failed!")
237 print("Missing Data, check if you have\n - a flatfile\n - a intern file table\n - export file from plentymarkets\n - a sheet with the stock numbers!\n")
303 238
304 239 del fexc del fexc
305 240 else: else:
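For illustration (not part of this commit): the reworked main() keeps the step counter and the step_name list in sync by incrementing step once per stage, so step_name[step] labels whatever stage is running when one of the log helpers fires. A minimal sketch of that pattern, with a made-up log path and a forced error:

    from packages.log_files import keyErrorLog

    step = 0
    step_name = ['environment-creation', 'config-creation', 'config-reading',
                 'import-flatfile', 'GUI', 'import-internlist',
                 'import-stocklist', 'item-upload', 'feature_upload',
                 'property_upload']

    step += 1  # entering 'config-creation'
    try:
        raise KeyError('item_sku')  # stand-in for a real failure
    except KeyError as kexc:
        keyErrorLog(log_path='Upload/log', step_number=step,
                    step_desc=step_name[step], key_name=kexc,
                    file_name='flatfile.csv')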