Importing Data via the Interface
After creating your custom table, you can import data in order to populate it.
To do so via the user interface, go to the 'Imports' application (Data > Imports) and click on 'Create an import'.
Importing data is also possible with Custom Tables APIs.
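For reference, here is a minimal sketch of what an API-driven import could look like. The host, endpoint path, payload and auth scheme below are assumptions for illustration, not the documented Custom Tables API contract; refer to the API documentation for the actual calls.

```python
# Minimal sketch of an API-driven custom table import (illustrative only).
# The host, endpoint path and auth scheme are ASSUMPTIONS, not the
# documented Custom Tables API contract -- check the API reference.
import requests

API_KEY = "your-api-key"      # placeholder credential
ENTITY = "myEntity"           # entity hosting the custom table
TABLE = "products"            # hypothetical table name

with open("products.csv", "rb") as f:
    response = requests.post(
        f"https://api.example.com/entities/{ENTITY}/customTables/{TABLE}/imports",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": ("products.csv", f, "text/csv")},
    )
response.raise_for_status()
print(response.json())  # e.g. an import execution id you can poll for status
```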
Choose the type of custom table in which you want to import data.
Next, select whether you want to start your import from scratch or use a saved configuration with predefined settings.
Click on the small info icon to view the details (file type, mapping, loading parameters, etc.) of the saved configuration.
You'll then be redirected to the import wizard.
It is also possible to start an import from the 'Tables data' app by selecting a table, then choosing 'Import data' from the 'More' button.
Creating an import
1. General data
First, give a name to the import. It will help you find the import execution in the 'In progress' and 'Finished' tabs. This name must therefore be unique.
Then select the entity of the custom table in which you will import data. This will also be the entity of the import.
Finally, select the table in which you want to import data. Only tables stored on the previously selected entity are displayed in the dropdown.
2. Upload file
You will have to upload an import file containing the data that you want to push into your table.
The file can be a CSV or an Excel file of up to 100 MB.
It can be uploaded from your computer or from a cloud location configured in your license, namely the transferbox or an FTP server.
Various separators and formats are available, but must be specified.
- Compression: ZIP, GZIP or None
Compressing the file is not mandatory, but it lets you import more data: if your CSV exceeds the 100 MB limit, try zipping it (see the sketch after this list).
- Format: Select whether you are importing an Excel file or a CSV file with a semicolon, comma or tab separator
- Encoding: The supported encodings are UTF-8, UTF-16 Little Endian, ISO-8859-1 and CP1252
- File with header: This button (which is toggled on by default) specifies whether the first line of the file contains the column headers. A file with headers makes the data mapping easier. If the file doesn't contain any, the next step will follow the order of the columns.
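If your source file uses an unsupported encoding or exceeds the size limit, you can normalize and compress it before upload. A minimal Python sketch, assuming placeholder file names:

```python
# Sketch: re-encode a CSV to UTF-8 and GZIP-compress it before upload.
# File names are placeholders; adapt the source encoding to your file.
import gzip
import shutil

# Re-encode from ISO-8859-1 (for example) to the supported UTF-8.
with open("products_latin1.csv", "r", encoding="iso-8859-1") as src, \
     open("products_utf8.csv", "w", encoding="utf-8", newline="") as dst:
    shutil.copyfileobj(src, dst)

# Compress the re-encoded file; only the size of the compressed
# file counts toward the 100 MB limit.
with open("products_utf8.csv", "rb") as src, \
     gzip.open("products_utf8.csv.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)
```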
File format - specifications
Your file must follow specific constraints in order to be accepted.
- The size of the file must not exceed 100 MB (if the file is zipped, only the size of the ZIP counts).
- The order of the columns is not important.
- Mandatory columns must be found in the file. Optional ones can be omitted.
- Each column must be unique (a header cannot appear twice).
Click on 'Download an example' to download a sample file containing the structure of your table and the expected value type for each column.
This gives you a practical example of the format expected for this file.
Columns with the technical attributes "creationMoment", "updateMoment" and "id" are ignored during the import.
This means that if you need to carry out a migration of data from one table to another, you can export table data and import them afterward without having to modify the file.
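As an illustration, a valid file for a hypothetical 'products' table with a 'sku' key and 'label' and 'price' attributes could look as follows (semicolon-separated CSV with a header line). The 'creationMoment' column would simply be ignored during the import:

```
sku;label;price;creationMoment
A-100;Blue T-shirt;19.90;2023-05-01T10:00:00
A-101;Red T-shirt;21.50;2023-05-02T09:30:00
```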
3. Map attributes
This step allows you to establish the correspondence between the columns of your file (on the left) and the attributes of the table (on the right).
- If your file contains headers, Actito's artificial intelligence will automatically map the column to the corresponding field.
The column name is matched to the exact technical name or display name of the table attribute.
The AIO logo indicates automatically mapped fields. These remain editable manually.
- If the header of a column does not correspond to an attribute in the table, you will have to perform the mapping manually by selecting the corresponding field from the drop-down menu.
A sample of values is displayed below the column header.
- If your file does not contain any header, you will need to perform the mapping based on the order of the columns and the sample of values serving as an example.
- If a column in your file should not be imported, keep the "Ignore column" option.
The top right box shows all mandatory (and unique) fields that have not yet been mapped.
All mandatory attributes must be mapped to allow new rows to be created, while key attributes identify existing rows in the "Update/Creation" and "Update only" modes.
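Continuing the hypothetical 'products' file above, the mapping step could, for example, end up as follows (column and attribute names are illustrative):

```
File column       ->  Table attribute
sku               ->  sku             (auto-mapped: exact name match)
Product name      ->  label           (manual: header differs from attribute)
price             ->  price           (auto-mapped)
internal_note     ->  Ignore column   (not imported)
```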
4. Loading parameters
You will then need to specify the loading parameters, which define how existing records in the table are handled (a worked example follows the list below).
- Hybrid mode "Update/Creation": All lines are taken into account. If an existing row is found, it is updated. If no corresponding row is found, a new row is created.
- "Creation only": Only lines leading to the creation of a new line are taken into account. You will therefore only add new records to the database.
- "Update only": Only lines that lead to a row update are taken into account. This mode can be used for tables for which a data update is relevant.
5. Summary
The last step gives you a summary of the previous steps. It notably gives you all the information of the file uploaded at step 2, such as its format, the number of rows and the headers of the columns.
This lets you double-check that you uploaded the correct file before you launch the import.
Saving an import configuration
If you want to reuse your import settings for future manual imports, you can also save an import configuration before you launch.
An import configuration keeps in memory all the import details:
- the import name (an incremental suffix will be added to it when you create new imports based on the config)
- the destination table and entity
- the file type and compression
- the attribute mapping
- the loading parameters
To create one, click on "Save as configuration".
You then need to give a name to the configuration, so you can find it when you select a table type at step 0.
To delete a saved import config, choose "Create an import" to access the config selection screen, and enter "Edit mode" in the top right corner.
Checking your import results
After launching your import, it will appear in the 'In progress' tab until it is completed. This can take from a few seconds to a few minutes, depending on the volume of imported data.
To check the import results and analyse the possible error files, see the 'Finished execution' section of the Data imports documentation.