Release Notes 2021.4.2
Release Highlights
The Incorta Cloud 2021.4.2 release improves analytical capabilities, data management, security, and performance. In addition, the 2021.4.2 release introduces several enhancements and updates, such as a new NetSuite SuiteQL connector, improved Advanced Map query performance, and improved memory usage when loading encrypted columns. It also introduces several new features, such as the Connectors tab in the Cluster configuration page of the Cluster Management Console (CMC), a Use Single Sign-on login button, format options for Date columns, FTP servers as a data destination, and Pre-SQL support for the Oracle and MySQL connectors.
New Features
Dashboards, Visualizations, and Analytics
- New format options for the Date column
- Dynamic Group By for multiple columns in Aggregated Tables
Data Management Layer
- NetSuite SuiteQL connector to support Representational State Transfer (REST)
- New Connectors tab for managing CData connectors in the Cluster Management Console (CMC)
- Data Agent support for version option in the agent script
- FTP servers as a Data Destination
- Pre-SQL support for Oracle and MySQL connectors
- Notebook support for using specific paragraphs in the Materialized View (MV) script
- PostgreSQL MV support for an equivalent SparkSQL query with copy and refresh options
Architecture and Application Layer
- Log in using Single-Sign On (SSO) from the Incorta login page
- Interrupt long-running dashboards that block sync processes
- Define the number of data frame partitions resulting from loading an MV
- Loader memory reduction for encrypted columns
This release also includes other enhancements and fixes.
New format options for the Date column
You can now format a Date Part column in the Grouping Dimension tray. When you set the Date Part to Quarter or Month in the Properties panel, a Format dropdown list appears so that you can select your preferred format. Here are the formatting options:
- For Quarter Format: No Format (1) or Prefix (Q1)
- For Month Format: No Format (1), Prefix (M1), Short (Jan), or Long (January)
Dynamic Group By for multiple columns in Aggregated Tables
The Aggregated Table visualization now allows Dynamic Group-by for multiple dimension columns. To learn more, refer to Visualizations → Aggregated Table.
NetSuite SuiteQL Connector to support REST
A new NetSuite SuiteQL connector that supports REST is now available. To learn more, refer to Connectors → NetSuite SuiteQL.
New Connectors tab for managing CData connectors in the CMC
A tenant administrator can now control the usage of CData connectors through the new Connectors tab in the CMC during an install or upgrade process.
Here are the steps to manage connectors:
- Sign in to the CMC.
- In the Navigation bar, select Clusters.
- In the clusters list, select a Cluster name, and then select the Connectors tab.
- In the Actions bar, select Manage Connectors.
- In the Configure CData Connectors dialog, select Allow to enable the use of a connector within a tenant, or select Remove from list to disable its use.
- Select OK.
Enabling a connector does not require restarting the Analytics and Loader nodes. However, removing a connector requires restarting both nodes.
Data Agent support for version option in the agent script
You can now check your Data Agent version in the agent script by running `./agent.sh version` on Linux or `agent.bat version` on Windows. For more information about how to enable and download the Data Agent, refer to the Tools → Data Agent document.
FTP servers as a Data Destination
In this release, FTP servers are now available as a data destination to which you can send or export one or more supported insights. You can connect to FTP servers using the FTP or FTPS protocols.
For more information, refer to Data Manager and Data Destination.
Pre-SQL support for Oracle and MySQL connectors
This release introduces a new feature that allows you to run SQL statements or call stored procedures before executing the extraction query and incremental query of a MySQL or Oracle data source during a load job.
In this release, the Data Source dialog has a new Pre-SQL toggle to enable this feature for a given data source. After you enable this toggle, select the Pre-SQL statement box to open the Query builder, where you can enter the Pre-SQL statement or stored procedure call that you want to run.
For example, `CALL app1_ctx_package.set_empno(11);`, where `set_empno` is a procedure that sets the employee number to 11.
The data source database management system determines the accepted statements and supported syntax.
This feature is useful when you need to:
- Set the security context of the query before extracting data
- Run stored procedures
- Update the data source before extraction
- Create a temporary table before executing the extraction query
- Delete old records in the source so that the source includes only the latest records
If the Pre-SQL statement fails during a load job, the object data extraction and load fail and an error message appears. The logs will also contain the failure reason.
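For illustration only, here is a minimal Pre-SQL sketch for the use case of deleting old records before extraction. The table name and retention window are hypothetical, and the accepted statements and syntax depend on your source database (MySQL in this sketch).

```sql
-- Hypothetical MySQL Pre-SQL statement: keep only the latest 90 days of records
-- so that the subsequent extraction query reads only recent data.
-- `staging_orders` and the 90-day window are illustrative examples.
DELETE FROM staging_orders
WHERE order_date < DATE_SUB(CURDATE(), INTERVAL 90 DAY);
```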
Notebook support for using specific paragraphs in the MV script
You can now choose specific paragraph(s) to create an MV script by selecting Include in MV script (+ icon) in the Notebook Editor. This feature is available for all supported languages: SQL, PostgreSQL, Python, R, and Scala. However, if multiple languages are included in the same Notebook, only the paragraphs written in the default Notebook language can be selected.
If Include in MV script is not selected, then by default,
- For SQL and PostgreSQL: only the last paragraph will be included.
- For Python, R, and Scala: none of the paragraphs will be included and the script will be empty.
PostgreSQL MV support for an equivalent SparkSQL query with copy and refresh options
You can now generate a read-only SparkSQL query that is equivalent to your PostgreSQL query, and then copy it by selecting Copy to Clipboard in the Edit Query dialog of a PostgreSQL Materialized View table. Also, after editing a PostgreSQL query, you can select Refresh to show the updated SparkSQL query.
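For illustration only, the following sketch shows what such an equivalent query might look like for a simple aggregation. The table and column names are hypothetical, and the actual generated SparkSQL depends on Incorta's translation of your PostgreSQL query.

```sql
-- Hypothetical PostgreSQL MV query:
SELECT customer_id,
       date_trunc('month', order_date) AS order_month,
       sum(amount) AS total_amount
FROM sales_orders
GROUP BY customer_id, date_trunc('month', order_date);

-- A SparkSQL form of the same aggregation (roughly what the generated
-- read-only query might resemble):
SELECT customer_id,
       trunc(order_date, 'MM') AS order_month,
       sum(amount) AS total_amount
FROM sales_orders
GROUP BY customer_id, trunc(order_date, 'MM');
```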
Log in using SSO from the Incorta login page
As an Incorta user, you can now log in to Incorta Analytics using Single Sign-On (SSO). A new Use Single Sign-on button is added to the Incorta Analytics login page to facilitate the login process.
The CMC administrator must enable and configure your SSO provider per tenant. For more information on how to configure SSO and start using it, refer to the Configure SSO and Start guides.
Interrupt long-running dashboards that block sync processes
This release introduces a solution for long-running dashboard queries that block synchronization processes, and consequently other queries, without a major effect on dashboard performance. The solution interrupts a long-running query that blocks a synchronization operation after a configured time period that starts when the running query acquires the read lock.
This feature is disabled by default. To enable this feature or change the configured time, contact the Support team.
The Support team should add two keys to the `engine.properties` file that exists in the Analytics Service directory. The two keys that control this feature are as follows:
Key | Description | Data Type | Value |
---|---|---|---|
`engine.interrupt_query_by_sync` | Enables or disables interrupting long-running queries that block a sync process | Boolean | `true` or `false` (default: `false`) |
`engine.query_render_time_before_interruption_by_sync` | The time (in minutes) to wait, after the running query that blocks a sync process acquires the read lock, before interrupting the query | Integer | Number of minutes; the minimum value is 0 and the default value is 10 |
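For illustration only, a minimal sketch of how these keys might appear in the `engine.properties` file; the values shown are examples, and the Support team sets the actual values.

```properties
# Illustrative entries only; actual values are set by the Support team.
engine.interrupt_query_by_sync=true
engine.query_render_time_before_interruption_by_sync=5
```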
Although multiple operations acquire a read lock on resources (physical schema objects and joins), such as searching in a dashboard, query plan discovery, and synchronization in the background, the solution handles only read locks acquired by dashboard queries that the engine runs. This includes the following:
- Rendering a dashboard
- Exporting a dashboard
- Downloading a dashboard tab or an insight
- Sending a dashboard, dashboard tab, or insight to a data destination
- Sending a dashboard via email
- Running a scheduled job to send a dashboard via email or to a data destination
- Rendering a SQLi query that uses the engine port (the default is 5436)
Whenever a query is interrupted, a message is displayed in the Analyzer, sent to the user via email, or written to the SQLi audit files, depending on the interrupted process. The message states that the query was interrupted because the underlying data is being updated.
Solution limitations
- Due to the Java implementation used to interrupt running processes without major performance degradation, an interrupted query does not release the lock immediately; it may take some time until the query reaches an interruption check.
- This solution does not apply to synchronization that runs in the background.
Define the number of data frame partitions resulting from loading an MV
This release enables you to define the number of data frame partitions that result from loading a materialized view. In the MV Data Source dialog, you can add the `spark.dataframe.partitions` property and set the number of data frame partitions as appropriate. If required, the Loader Service will perform either a coalesce or repartition operation to create the required number of data frame partitions.
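For illustration only, the property is added as a name-value pair in the MV Data Source dialog; the value below is an example, and the appropriate number depends on your data volume and cluster resources.

```properties
# Illustrative value only; choose a partition count that suits your data and cluster.
spark.dataframe.partitions=8
```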
Loader memory reduction for encrypted columns
This release introduces a new mechanism for loading and reading encrypted columns. The Loader Service no longer creates snapshot files for encrypted columns, only parquet files. In addition, the Loader Service will load only the required columns into its memory instead of loading all the table columns. The Loader and Analytics services will now read encrypted columns from parquet files. This new mechanism both reduces the memory that the Loader Service uses during load jobs and saves disk space. It also enhances the load time when loading data incrementally because the column data is not evicted from memory and reloaded; only new data is loaded.
The new mechanism results in a minor degradation in the Analytics service performance when reading columns (while discovering them or rendering dashboards).
Changing the encryption status of one or more columns in a physical schema table or MV requires performing a full load for the related object.
Additional Enhancements and Fixes
Physical Schema
- Fixed an issue with physical schema load from ADLS and GCS, in which incremental schema loads caused the load process to be stuck in the writing tables stage
- Fixed an issue with the calculation of joins between non-derived tables when updating a physical schema
- Enhanced the error message details that appear when loading a materialized view table for the first time while the full load option is disabled
- Fixed an issue in which updating schemas deletes joins on Alias tables
Analyzer
- Fixed an issue in which queries failed to render and produced an error when using a timestamp as a column dimension
Scheduler
- Fixed an issue in which scheduled jobs fail due to an invalid username or password error
Dashboard and Visualizations
- Fixed an issue in the Treemap visualization, in which drill down for measure columns does not work when you filter the dashboard using the coloring dimension
- Enhanced Advanced Map query performance by running separate queries for each individual map layer
- Fixed an issue in the Advanced Map visualization, in which drill down does not function properly
- Fixed an issue in the Advanced Map visualization in which adding a Color By column to only one layer of a multi-layer insight caused unexpected behavior
- Fixed an issue in dashboards with presentation variables, in which filtering these dashboards using session variables did not display any data
- Fixed an issue in which measure columns in Listing table insights did not sort correctly
- Fixed an issue in which a presentation variable that is defaulted to a session variable displays the session variable name instead of the value
Data Agent
- Fixed an issue with rendering tables when the data agent receives a delayed response from the data source by setting a maximum threshold of 15 minutes for data agent retries
Data Destination
- Resolved an issue in which saving an FTP data destination with dummy data as a draft causes an error
Materialized Views
- Enhanced materialized view tables created using the Notebook Editor to support schema versioning
- Fixed an issue with the prolonged transformation phase time when running multiple MVs concurrently
Miscellaneous
- Fixed an issue in which SQLi queries from external business intelligence tools, such as Tableau and Power BI, resulted in a lock on schema loads
- Enhanced the query performance of the Tableau connector:
  - Set an increased default fetch size of 10000
  - Set the `assumeMinServerVersion` property to 10.0
- Fixed an issue with the Loader service, in which insufficient memory caused the schema load to crash
- Fixed an issue in which an extra download link appears in the Insert Error details page, when you upload and execute an LDAP configuration file
- Enhanced the logging of the PK index calculation process to decrease the size of its logs
- Fixed an issue in which the Analytics Service did not release the read locks after rendering a dashboard
- Enhanced the security of migrating tenants with existing connectors. You are now required to enter the username and password for several connectors upon migrating tenants to your Incorta instance. For more information, refer to the Guides → Migrate document.
- Fixed an issue with LDAP users, in which they were unable to log in to Incorta after changing the authentication type from SSO to LDAP, or when importing using the User Sync feature and then changing the authentication type to LDAP
Known Issues and Workarounds
The following table lists the known issues in this release, along with a workaround where available:
Known issue | Workaround |
---|---|
An error occurs when creating a SQL table based on a SQL Server data source and adding a query hint in the query statement | Ignore the error, select Done, and then save the changes. The table will load without any errors. |
An SSO application cannot log in to an Incorta tenant when the tenant name contains an upper case letter | Use a lower case tenant name when configuring your SSO application. For example, if your Incorta tenant is called Demo or DEMO, use demo (all lower case) when configuring your SSO application. |
Opening a dashboard that has prompts with default session variables causes an embedded Incorta iframe in Salesforce to sign out instead of displaying the dashboard | |
Date columns formatted as MM/dd/yyyy or M/d/yyyy are changed to a timestamp format when exported to CSV | |
A connection issue occurs in the Cassandra (Simba) Data Wizard when creating a SQL table or loading tables | Make sure that your CassandraJDBC42.jar driver is updated to the latest version. |
While creating an Aggregated Table, an error occurs when trying to open the Query Plan viewer while using a formula in the Aggregate filter | |
Cascaded filters do not work properly if a prompt field is sorted using another column | |
Arabic names are not displayed properly when syncing an LDAP directory with Incorta | |
An invalid username and password error appears after upgrading when you try to log in using the same browser window | Log out before the upgrade process begins, or use a new browser window after the upgrade. |