initBasti / Amazon2PlentySync (public) (License: GPLv3) (since 2019-01-27) (hash sha1)
Transfer your data from your Amazon Flatfile spreadsheet over to the Plentymarkets system. A how-to is included in the readme.
List of commits:
Subject Hash Author Date (UTC)
Fixed scripts according to dataformat changes + readme dec28d9e6ff5c5c903d5ca01a969e661d43b66c6 Sebastian Fricke 2019-01-29 21:08:04
Working Checkboxes and file import 25378c68a6220c1c6570642920e6150a50415153 Sebastian Fricke 2019-01-29 21:03:23
Added checkboxes, descriptions, import and runbutton 2021f0960e70c8c229ec08488165dc01b998a6e0 Sebastian Fricke 2019-01-27 22:19:18
Added market connection, cosmetics in product import c9a771d5e7a3a80adc650e773c568e00dd8e2aea Sebastian Fricke 2019-01-23 15:01:47
Amazon Data Upload 33dbd0ed6945c01d8917ceae3cf3964f051a2288 Sebastian Fricke 2019-01-22 14:43:39
Readme started, amazon sku upload, vari upload, images f43a9e83598c3e4623bcb08667e2b4e649b2cdea Sebastian Fricke 2019-01-22 10:44:40
Amazon SKU Upload 8586da2ae91d49c81a0d9b6ff220c8a1b1b011a6 Sebastian Fricke 2019-01-16 18:36:54
Inital Commit with current working version of the CLI Tool and the work in progress of the GUI. 207fef4277f7c169aa79eb39ec1aaaab258b888c Sebastian Fricke 2019-01-16 09:47:43
Initial commit ba965ee75fe09437fb08da5edd25b20e39e17eff Sebastian Fricke 2019-01-16 09:42:30
Commit dec28d9e6ff5c5c903d5ca01a969e661d43b66c6 - Fixed scripts according to dataformat changes + readme
Some dataformats were cleaned of columns that were not needed, and the
scripts were adjusted. Started to write a readme that includes the
dataformats.
Author: Sebastian Fricke
Author date (UTC): 2019-01-29 21:08
Committer name: Sebastian Fricke
Committer date (UTC): 2019-01-29 21:08
Parent(s): 25378c68a6220c1c6570642920e6150a50415153
Signing key:
Tree: e2f40ce25b272caa1746b368170a894045782804
File Lines added Lines deleted
README.md 88 1
packages/item_upload.py 31 6
packages/stock_upload.py 23 21
packages/variation_upload.py 0 5
File README.md changed (mode: 100644) (index 0946860..cbf05bc)
... ... Preparation
7 7 Check your Flatfile version; the current version of this program works with the apparel flatfile of 2019.
8 8
9 9 Make sure to prepare your PlentyMarkets account; you need the following list of dataformats:
10 -
10 - Item upload:
11 Creation of the parent article; the article acts as a kind of shell
12 for the actual parent variation.
13 The basic version of the program uses an internal number list
14 to determine the ItemID.
15 But this is not necessary.
16 - Item Upload
17 {
18 [Typ: Item]
19 CategoryLevel1Name: IMPORT
20 CategoryLevel2Name: IMPORT
21 CategoryLevel3Name: IMPORT
22 CategoryLevel4Name: IMPORT
23 CategoryLevel5Name: IMPORT
24 CategoryLevel6Name: IMPORT
25 ItemID: IMPORT
26 PrimaryVariationCustomNumber: IMPORT
27 PrimaryVariationLengthMM: IMPORT
28 PrimaryVariationWidthMM: IMPORT
29 PrimaryVariationHeightMM: IMPORT
30 PrimaryVariationWeightG: IMPORT
31 PrimaryVariationName: IMPORT
32 PrimaryVariationPurchasePrice: IMPORT
33 PrimaryVariationMainWarehouse: IMPORT
34 ItemOriginCountry: IMPORT
35 ItemProducer: IMPORT
36 ItemProducerID: IMPORT
37 ItemTextName: IMPORT
38 ItemTextDescription: IMPORT
39 }
40 - Variation Upload:
41 Creation of the variations; these are assigned to the parent uploaded in the item upload.
42 {
43 [Typ: Variation]
44 ItemID: IMPORT
45 VariationID: ABGLEICH <---
46 VariationNumber: IMPORT
47 VariationName: IMPORT
48 Position: IMPORT
49 LengthMM: IMPORT
50 WidthMM: IMPORT
51 HeightMM: IMPORT
52 WeightG: IMPORT
53 VariationAttributes: IMPORT
54 PurchasePrice: IMPORT
55 MainWarehouse: IMPORT
56 Availability: IMPORT
57 AutoStockVisible: IMPORT
58 }
59 - Attribute Upload
60 Creation of the colors, sizes, and materials used in the list in the Plentymarkets system.
61 {
62 PRECONDITION for all other uploads
63 [Typ: Attribute]
64 AttributeBackendName: ABGLEICH <---
65 AttributeID: ABGLEICH <---
66 AttributeValueBackendName: IMPORT
67 AttributeValueFrontendName: IMPORT
68 AttributeValuePosition: IMPORT
69 Lang
70 }
71 - Active Upload:
72 {
73 PRECONDITION: categories have been set
74 MUST be performed separately and cannot be set in the variation upload.
75 [Typ: Variation]
76 Active: IMPORT
77 VariationID: ABGLEICH <---
78 }
79 - SalePrice Upload:
80 {
81 [Typ: Variation_Sales_Price]
82 VariationID: ABGLEICH <---
83 IsNet: IMPORT
84 VariationPrice: IMPORT
85 SalesPriceID: ABGLEICH <---
86 }
87 - Variation Barcode Upload:
88 If available, the EAN (UPC), GTIN, or ISBN numbers can be uploaded with this.
89 {
90 [Typ: Variation_Barcode]
91 BarcodeID: IMPORT
92 BarcodeName: NOTHING
93 BarcodeType: NOTHING
94 Code: IMPORT
95 VariationID: IMPORT
96 VariationNumber: ABGLEICH <---
97 }
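
To make the dataformats above concrete: a minimal sketch (not part of this repository; file name and sample values are invented) of writing one row of the Variation format, using the semicolon delimiter the repo's CSV writers use. ABGLEICH (German "Abgleich", matching) appears to mark matching columns that are filled from a Plentymarkets export when the record already exists; for a brand-new variation the VariationID stays empty.

from csv import DictWriter
from os import makedirs

# Hypothetical example of the [Typ: Variation] dataformat listed above.
columns = ['ItemID', 'VariationID', 'VariationNumber', 'VariationName',
           'Position', 'LengthMM', 'WidthMM', 'HeightMM', 'WeightG',
           'VariationAttributes', 'PurchasePrice', 'MainWarehouse',
           'Availability', 'AutoStockVisible']

# VariationID (ABGLEICH) stays empty: the variation does not exist yet.
row = {column: '' for column in columns}
row.update({'ItemID': '1234',
            'VariationNumber': 'SKU-001',
            'VariationAttributes': 'color_name:blue;size_name:M'})

makedirs('Upload', exist_ok=True)
with open('Upload/variation_upload_example.csv', mode='w', newline='') as f:
    writer = DictWriter(f, delimiter=';', fieldnames=columns)
    writer.writeheader()
    writer.writerow(row)
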
File packages/item_upload.py changed (mode: 100644) (index 97541ce..9e69f47)
... ... except ImportError:
10 10
11 11 def itemUpload(filepath, intern_number):
12 12 # The column headers for the output file as expected from the plentymarkets dataformat
13 column_names_output = ['CategoryLevel1Name', 'CategoryLevel2Name', 'CategoryLevel3Name', 'CategoryLevel4Name', 'CategoryLevel5Name', 'CategoryLevel6Name', 'ItemID', 'PrimaryVariationCustomNumber', 'PrimaryVariationLengthMM', 'PrimaryVariationWidthMM', 'PrimaryVariationHeightMM', 'PrimaryVariationWeightG', 'PrimaryVariationName', 'PrimaryVariationPurchasePrice', 'ItemImageURL', 'PrimaryVariationMainWarehouse', 'ItemOriginCountry', 'ItemProducer', 'ItemProducerID', 'ItemTextName', 'ItemTextDescription']
13 column_names_output = ['CategoryLevel1Name', 'CategoryLevel2Name',
14 'CategoryLevel3Name', 'CategoryLevel4Name',
15 'CategoryLevel5Name', 'CategoryLevel6Name',
16 'ItemID', 'PrimaryVariationCustomNumber',
17 'PrimaryVariationLengthMM',
18 'PrimaryVariationWidthMM',
19 'PrimaryVariationHeightMM',
20 'PrimaryVariationWeightG',
21 'PrimaryVariationName',
22 'PrimaryVariationPurchasePrice', 'ItemImageURL',
23 'PrimaryVariationMainWarehouse',
24 'ItemOriginCountry', 'ItemProducer',
25 'ItemProducerID', 'ItemTextName',
26 'ItemTextDescription']
14 27
15 28 # default values: CategoryLevel5Name : '' , CategoryLevel6Name : '', ItemOriginCountry : '62' , ItemProducer : 'PANASIAM', ItemProducerID : '3'
16 29
 
... ... def itemUpload(filepath, intern_number):
35 48 row['package_width'] = int(float(row['package_width']))
36 49 except ValueError as err:
37 50 print(err)
38 print("/nPlease copy the values for height, length, width and weight\nfrom the children to the parent variation in the flatfile.\n")
51 print(
52 "\nPlease copy the values for height, length, width and weight\nfrom the children to the parent variation in the flatfile.\n")
39 53 exit()
40 54 try:
41 values = ['', '', '', '', '', '', '', row['item_sku'], row['package_length'] * 10, row['package_width'] * 10, row['package_height'] * 10, row['package_weight'], row['item_name'], row['standard_price'], row['main_image_url'], 'Badel', '62', 'PANASIAM', '3', '', row['product_description']]
55 values = ['', '', '', '', '', '', '', row['item_sku'],
56 row['package_length'] * 10,
57 row['package_width'] * 10,
58 row['package_height'] * 10,
59 row['package_weight'],
60 row['item_name'],
61 row['standard_price'],
62 '', 'Badel', '62', 'PANASIAM', '3',
63 '', row['product_description']]
42 64 except Exception as err:
43 65 print(err)
44 Data[row['item_sku']] = SortedDict(zip(column_names_output, values))
66 Data[row['item_sku']] = SortedDict(
67 zip(column_names_output, values))
45 68
46 69 # open the intern number csv to get the item ID
47 70 with open(intern_number, mode='r') as item:
 
... ... def itemUpload(filepath, intern_number):
60 83
61 84 while(isfile(output_path)):
62 85 output_path_number = int(output_path_number) + 1
63 output_path = "Upload/item_upload_" + str(output_path_number) + datatype
86 output_path = "Upload/item_upload_" + \
87 str(output_path_number) + datatype
64 88
65 89 with open(output_path, mode='a') as item:
66 writer = DictWriter(item, delimiter=";", fieldnames=column_names_output)
90 writer = DictWriter(item, delimiter=";",
91 fieldnames=column_names_output)
67 92 writer.writeheader()
68 93 for row in Data:
69 94 writer.writerow(Data[row])
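
The reflowed lines above implement a numbered-output scheme: increment a counter until an unused file name is found. As a standalone sketch (the helper name is ours, not the repo's):

from os.path import isfile

# Hypothetical helper mirroring itemUpload's output-path logic:
# try Upload/item_upload_1.csv, then _2.csv, ... until a name is free.
def next_free_path(prefix, suffix='.csv'):
    number = 1
    path = prefix + str(number) + suffix
    while isfile(path):
        number += 1
        path = prefix + str(number) + suffix
    return path

# e.g. next_free_path('Upload/item_upload_') returns 'Upload/item_upload_1.csv',
# or 'Upload/item_upload_2.csv' if the first file already exists.
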
File packages/stock_upload.py changed (mode: 100644) (index 6f2268f..568306c)
1
2
1 3 from csv import DictReader, DictWriter
2 4 from os.path import isfile
3 5 try:
 
... ... def writeCSV(dataobject, name, columns):
14 16
15 17 output_path_number = 1
16 18 datatype = ".csv"
17 output_path = "Upload/" + name + "_upload_" + str(output_path_number) + datatype
19 output_path = "Upload/" + name + "_upload_" + \
20 str(output_path_number) + datatype
18 21
19 22 while(isfile(output_path)):
20 23 output_path_number = int(output_path_number) + 1
21 output_path = "Upload/" + name + "_upload_" + str(output_path_number) + datatype
24 output_path = "Upload/" + name + "_upload_" + \
25 str(output_path_number) + datatype
22 26
23 27 with open(output_path, mode='a') as item:
24 28 writer = DictWriter(item, delimiter=";", fieldnames=columns)
 
... ... def writeCSV(dataobject, name, columns):
32 36 return output_path
33 37
34 38
35 def stockUpload(flatfile, export, stocklist):
39 def stockUpload(flatfile, stocklist):
36 40
37 41 # The column header names
38 column_names = ['Barcode','LocationID','LocationName','Reordered','ReservedStock','Stock','WarehouseID','VariationID','VariationNo']
42 column_names = ['Barcode', 'LocationID', 'LocationName', 'Reordered',
43 'ReservedStock', 'Stock', 'WarehouseID']
39 44
40 # create a Data Dictionary and fill it with the necessary values from the flatfile
45 # create a Data Dictionary and fill it with the necessary values from the
46 # flatfile
41 47 Data = SortedDict()
42 48
43 49 with open(flatfile, mode='r') as item:
44 50 reader = DictReader(item, delimiter=";")
45 51 for row in reader:
46 52 if(row['external_product_id']):
47 values = [row['external_product_id'],0,'Standard-Lagerort','','','','104','',row['item_sku']]
53 values = [row['external_product_id'], 0, 'StandarWarenlager',
54 '', '', '', '104']
48 55 Data[row['item_sku']] = SortedDict(zip(column_names, values))
49
50 with open(export, mode='r') as item:
51 reader = DictReader(item, delimiter=";")
52 for row in reader:
53 if(row['VariationNumber'] in [*Data]):
54 Data[row['VariationNumber']]['VariationID'] = row['VariationID']
55
56
56 57 with open(stocklist, mode='r') as item:
57 58 reader = DictReader(item, delimiter=";")
58 59 for row in reader:
59 60 if(row['MASTER'] and row['MASTER'] in [*Data]):
60 61 Data[row['MASTER']]['Stock'] = row['BADEL 26.12.16']
61
62
62 63 output_path = writeCSV(Data, 'stock', column_names)
63
64
64 65
65 66 def priceUpload(flatfile, export):
66 67 # The column header names
67 column_names = ['VariationID','IsNet','VariationPrice','SalesPriceID']
68 column_names = ['VariationID', 'IsNet', 'VariationPrice', 'SalesPriceID']
68 69
69 # create a Data Dictionary and fill it with the necessary values from the flatfile
70 # create a Data Dictionary and fill it with the necessary values from the
71 # flatfile
70 72 Data = SortedDict()
71 73
72 74 with open(flatfile, mode='r') as item:
73 75 reader = DictReader(item, delimiter=";")
74 76 for row in reader:
75 77 if(row['external_product_id']):
76 values = ['',0,row['standard_price'],1]
78 values = ['', 0, row['standard_price'], 1]
77 79 Data[row['item_sku']] = SortedDict(zip(column_names, values))
78
80
79 81 with open(export, mode='r') as item:
80 82 reader = DictReader(item, delimiter=";")
81 83 for row in reader:
82 84 if(row['VariationNumber'] in [*Data]):
83 85 Data[row['VariationNumber']]['VariationID'] = row['VariationID']
84
85 output_path = writeCSV(Data, 'price', column_names)
86
87 output_path = writeCSV(Data, 'price', column_names)
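
After this commit stockUpload no longer reads the Plentymarkets export, while priceUpload still does: it joins the SKUs collected from the flatfile to the VariationIDs in the export. A sketch of that join as a hypothetical standalone helper (the function name is ours):

from csv import DictReader

# Read the Plentymarkets variation export and copy each VariationID onto
# the entry whose key (the SKU / VariationNumber) came from the flatfile.
def match_variation_ids(data, export_path):
    with open(export_path, mode='r') as export:
        reader = DictReader(export, delimiter=';')
        for row in reader:
            if row['VariationNumber'] in data:
                data[row['VariationNumber']]['VariationID'] = row['VariationID']
    return data

Testing membership with row['VariationNumber'] in data also avoids rebuilding the [*Data] key list on every iteration, as the current code does.
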
File packages/variation_upload.py changed (mode: 100644) (index 558e66f..455a31f)
... ... def variationUpload(flatfile, intern_number):
68 68 attributes = 'color_name:' + row['color_name']
69 69 if(row['size_name']):
70 70 attributes += ';size_name:' + row['size_name']
71 if(row['outer_material_type']):
72 attributes += ';material_name:' + \
73 row['outer_material_type']
74 if('pattern' in [*row] and row['pattern']):
75 attributes += ';pattern:' + row['pattern']
76 71 try:
77 72 values = ['', '', row['item_sku'], item_name, '', int(row['package_length']) * 10, int(row['package_width']) * 10, int(
78 73 row['package_height']) * 10, row['package_weight'], attributes, row['standard_price'], 'Badel', 'Y', 'Y']
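
With the branches deleted above, material and pattern no longer flow into the VariationAttributes string; only color and size remain. A sketch of the remaining assembly (hypothetical helper, not in the repo):

# Assemble the VariationAttributes string from flatfile columns.
# After this commit only color and size are used.
def build_attributes(row):
    attributes = ''
    if row.get('color_name'):
        attributes = 'color_name:' + row['color_name']
    if row.get('size_name'):
        attributes += ';size_name:' + row['size_name']
    return attributes

# e.g. {'color_name': 'blue', 'size_name': 'M'}
# -> 'color_name:blue;size_name:M'
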
Hints:
Clone this repository using HTTP(S):
git clone https://rocketgit.com/user/initBasti/Amazon2PlentySync

Clone this repository using ssh (do not forget to upload a key first):
git clone ssh://rocketgit@ssh.rocketgit.com/user/initBasti/Amazon2PlentySync

Clone this repository using git:
git clone git://git.rocketgit.com/user/initBasti/Amazon2PlentySync

You are allowed to anonymously push to this repository.
This means that your pushed commits will automatically be transformed into a merge request:
... clone the repository ...
... make some changes and some commits ...
git push origin main