Known Issues in Latest Releases
The following are the known issues in Incorta releases as of 2024.1.x, along with the workaround and fix version, if any. This list may contain issues from earlier releases that haven't been resolved yet.
Known issue | Status | Affected Versions | Workaround | Fix Version |
---|---|---|---|---|
When Incremental Mode is activated in the Save MV Recipe in a Dataflow, deploying the MV to the target schema fails. | Resolved for 2024.7.x releases; Open for 2024.1.x releases | 2024.1.x, 2024.7.0, 2024.7.1 | Toggle off Incremental Mode, deploy the MV, open the MV in the Physical Schema, and then apply the incremental logic to the MV. | 2024.7.2 |
Upgrading clusters that use a MySQL 8 metadata database from a 6.0.x or earlier release to 2024.1.x or 2024.7.x might fail. | Resolved for 2024.7.x releases; Open for 2024.1.x releases | 2024.1.x, 2024.7.0, 2024.7.1 | Execute the following against the Incorta metadata database before the upgrade (see the formatted SQL script after this table): ALTER TABLE `NOTIFICATION` MODIFY COLUMN `EXPIRATION_DATE` TIMESTAMP NULL DEFAULT NULL; UPDATE `NOTIFICATION` SET `EXPIRATION_DATE` = NULL WHERE CAST(`EXPIRATION_DATE` AS CHAR(19)) = '0000-00-00 00:00:00'; COMMIT; | 2024.7.2 |
An issue causes a Loading chunk failed error when trying to open a dashboard or schema on Cloud installations. | Open | 2024.7.x (Cloud) | Contact Incorta Support to disable the in-app notifications feature in the CMC. | |
The load plan won’t appear on the Load Jobs list if the latest execution does not include the load group with a schema that the logged-in user has access to, for example: ● When aborting a load plan execution before the related load group starts. ● When restarting the execution from a group that follows the related load group. | Open | 2023.7.0, 2024.1.3 On-Premises, and later | ||
After the cleanup job runs and removes the load job tracking data, the Schema Designer displays 0 rows for non-optimized tables if they did not have any successful load jobs during the retention period, or if all load jobs during this period resulted in 0 rows, even though these tables may still have data on disk. | Open | 2024.1.x | ||
For specific versions of the Data Lake connectors (Azure Gen2, Data Lake Local, FTP, Google Cloud Storage, Apache Hadoop (HDFS), Amazon S3, and SFTP), using Wildcard Union on directories that contain a large number of files might result in load failures or longer load times. | Resolved | Versions from 2.0.1.0 to 2.0.1.7 of the Data Lake connectors | Upgrade to connector version 2.0.1.8. | |
Using internal session variables with the SQLi service might cause an out-of-memory error. | Open | 6.0 and later | Add -Dengine.max_off_heap_memory=<value in bytes> to <installationPath>/IncortaNode/sqli_extra_jvm.properties on the node where the SQLi service is installed to set an off-heap limit. For example, -Dengine.max_off_heap_memory=10737418240 sets the off-heap memory to 10 GB (see the sqli_extra_jvm.properties example after this table). | |
Load from staging jobs get stuck and keep running endlessly when the CMC configurations are set as follows: ● The Enable automatic merging of parquet while loading from staging toggle is turned on. ● The Enable dynamic allocation in MVs toggle is turned off. ● The value of Materialized view application cores is less than the value of spark.executor.cores in the Extra options for Parquet merge option. | Open | 2024.1.3 On-Premises | Do one of the following in the CMC > Server Configurations > Spark Integration: ● Make sure that the Materialized view application cores value is greater than or equal to the value of spark.executor.cores in the Extra options for Parquet merge option. ● Turn on the Enable dynamic allocation in MVs toggle. | |
An issue with the sequential calculation of unique indexes might cause table loads and join calculations to fail during the Post-load phase. | Resolved | 2024.1.3 On-Premises | Make sure that the unique index parallel calculation (the default behavior) is NOT disabled. Open the engine.properties file on each Loader Service node and either remove the engine.parallel_index_creation=false entry or set its value to true (see the engine.properties example after this table). | 2024.1.4 |
Uploading custom-built visualization components (.inc files) to the marketplace of a cluster that uses an Oracle metadata database results in an Internal SQL exception error. | Open | 2024.1.3 On-Premises | ||
Sending multiple schemas concurrently to the same target schema (dataset) in a Google BigQuery data destination may fail the first time if the target schema (the BigQuery dataset) does not yet exist. | Open | 2024.1.x | Do one of the following: ● Send (load) one schema first, and then send all other schemas. You can create a load plan with one schema in a group and all other schemas in another group. ● Create the dataset in the BigQuery project before sending the schemas concurrently. ● Execute the load plan again or manually load the failed schemas. | |
When filtering the load jobs list by the In Queue status, single-group load plans that are in queue won’t appear on the list; however, they will appear when filtering by the In Progress status. | Open | 2023.7.0, 6.0, and later | ||
When enabling Spark-based extraction on a data source while running Spark 3.3, the extraction of the respective table fails. | Open | Spark 3.3 | Turn off the Enable Spark Based Extraction option in the Table Data Source or Edit Dataset dialog. | |
A formula column that only references a physical column with null values shows wrong results because the Loader Service does not respect the null values when materializing the formula column. | Open | 2022.12.0, 6.0, and later | ||
The count and distinct functions return null values instead of zeros when used in a flat table or a session variable. | Open | 2022.12.0, 6.0, and later | ||
When the Pause Scheduled Jobs option is enabled in the Cluster Management Console (CMC), the emails of manually activated scheduled jobs are not sent. | Open | 2022.9.0, 6.0, and later | ||
Scheduled jobs of overwritten dashboards and physical schemas are not deleted. | Open | 2022.9.0, 6.0, and later | ||
Access rights to a parent folder are not propagated to child folders and dashboards if the same access rights were revoked before. | Open | 2022.9.0, 6.0, and later | ||
Aggregated tables with hierarchical data do not support conditional formatting based on other measures when adding multiple attributes. | Open | 2022.9.0, 6.0, and later | ||
Exporting a tenant with a global variable that uses the newly supported syntax (query or queryDistinct expressions) from a cluster that runs a release prior to 2022.5.1 and then importing it to a cluster that runs the same or a later release will cause the Edit Global Variable dialog to show an empty Value. | Open | 2022.4.0 - 2022.5.0 | You can manually edit the value to enter the query expression again, or upgrade both the source and target clusters to the 2022.5.1 release before exporting the tenant that you want. Note: This issue does not apply to exporting a tenant from a cluster that runs 2022.5.1 or a later release and then importing it to the same or another cluster that runs 2022.5.1 or later. | |
No validation on functions is triggered in the Formula Builder when creating global variables. | Open | 2022.4.0, 6.0, and later | ||
Global Variables that return a list of values are not functioning properly when referenced in materialized views (MVs) or individual filters. | Open | 2022.4.0, 6.0, and later | ||
Global Variables will not appear on the list of variables when referenced in a column individual filter, that is, when you type $ in the Search Values box. | Open | 2022.4.0, 6.0, and later | You can add the global variable manually, if applicable. In Query, Contains, and Starts With are examples of functions that accept a global variable as a filter value to be added manually. | |
Using the DISTINCT operator on a measure in a Pivot table that is built over an Incorta SQL View throws errors. | Open | 2022.4.0, 6.0, and later | ||
In an imported physical schema or tenant, the Disable Full Load option will be automatically enabled after changing the data source of an object to an Incorta SQL Table or vice versa, which causes these updated objects to be skipped in the next schema full load jobs. | Open | 2022.3.0, 6.0, and later | You need to either disable the feature manually for objects that support disabling full loads (physical schema tables and MVs with incremental load enabled) or recreate the objects. Note: This issue does not apply to newly created Incorta SQL Tables. | |
Incrementally loading a table without a key for the first time results in an error. | Open | 2022.3.0, 6.0, and later | Attempt to load the table incrementally again. | |
Insights might not render when you run the same query multiple times for the same dashboard. | Open | 2022.3.0, 6.0, and later | ||
Email addresses added as CC or BCC in a scheduled data notification are not saved. | Open | 2021.4.3, 5.2.1, and later | ||
In the CMC under Nodes, the Restart button beside each Service does not function properly. | Open | | For Cloud environments, use the Cloud Admin Portal to restart the Services from the Configurations tab. | |
The Show Empty Groups option in Listing and Aggregated tables does not work properly when enabling it for a formula in the Grouping Dimension tray. | Open | | | |
The dashboard Search box won’t return any matching results when you search by the values of a formula column in a business schema view, an insight, or a result set. However, using the Find all records containing… link filters the related insights correctly. | Open | | | |
Searching the values of a formula column in the Filters dialog does not return any matching value in the case of formula columns from business schema views or formula columns added as prompts. | Resolved | 2024.1.0 Cloud and 2024.1.3 On-Premises | | |
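
For easier copying, the SQL statements from the MySQL 8 metadata database upgrade workaround above are shown here as a single script. Run it against the Incorta metadata database before starting the upgrade:

```sql
-- Allow NULL values for the expiration date, then clear invalid zero-date values
ALTER TABLE `NOTIFICATION` MODIFY COLUMN `EXPIRATION_DATE` TIMESTAMP NULL DEFAULT NULL;

UPDATE `NOTIFICATION`
SET `EXPIRATION_DATE` = NULL
WHERE CAST(`EXPIRATION_DATE` AS CHAR(19)) = '0000-00-00 00:00:00';

COMMIT;
```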
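
For the SQLi off-heap memory workaround above, the entry added to sqli_extra_jvm.properties might look like the following minimal sketch. The 10737418240 value (10 GB) is only the example from the workaround; size the limit for your own environment.

```properties
# <installationPath>/IncortaNode/sqli_extra_jvm.properties on the node hosting the SQLi service
# Limit the off-heap memory used by the SQLi engine (value in bytes); 10737418240 = 10 GB
-Dengine.max_off_heap_memory=10737418240
```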
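
Similarly, for the unique index calculation workaround, engine.properties on each Loader Service node should either omit the engine.parallel_index_creation entry or keep it set to true, as in this sketch (the exact file location depends on your installation):

```properties
# engine.properties on each Loader Service node
# Parallel unique index creation is the default behavior; make sure it is not disabled
engine.parallel_index_creation=true
```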