Release Notes Incorta 2026.3.0 (GA)

Release date: March 13, 2026

In addition to all features previously introduced in the 2025.7 releases through 2025.7.5, the Incorta 2026.3.0 release delivers a comprehensive suite of new features and significant improvements, focusing on enhanced data governance, real-time data access, and improved analytics and visualization.

Release highlights

Enhanced data governance and controlled metadata

  • Data Catalog renamed to Data Governance
  • New Data Quality capabilities
  • Self-service access requests with approval workflows
  • A new metadata draft mode and approval workflow in the Data Catalog, enabling peer review of domains, terms, and certifications before publishing, and ensuring metadata quality and accuracy
  • AI-powered classification for glossary terms
  • Data Catalog quality issue reporting, integrated with issue tracking systems such as Jira

AI-powered intelligence (Nexus)

  • Generate Story customization options
  • Enable dashboards for Nexus
  • File upload support in Nexus Chat for data-driven conversations and analysis, expanding interactive analytics capabilities
  • Chat session history and parallel sessions
  • Chat usability enhancements

Real-time data access

  • Query data directly from Snowflake, providing instant access to source data for ad-hoc analysis
  • No-copy read from Delta Lake (S3, ADLS, GCS), reducing storage overhead and operational complexity

Improved user experience in Data Studio

  • Data Studio flows layout enhancement
  • Spark configuration for Data Flows and Save MV recipes to tune ETL performance
  • Data flow sharing
  • Detailed recipe information with a pop-up view
  • Data Quality rules enhancement

Improved data loading

  • Controlled load plans through options to:
    • Halt subsequent load groups if a preceding group encounters errors, ensuring data integrity
    • Run post-load calculations early for individual objects, as soon as they complete extraction and deduplication
  • Optimized incremental loads for materialized views (MVs)
  • Enhanced delete synchronization with inclusion sets in addition to exclusion sets

Automatic UI updates with zero downtime

Intelligent analytics and visualization

  • Enhanced user experience for insights over result sets
  • Insight Preview mode in the Analyzer
  • Formula reference lines in Bubble visualizations
  • Pivot table row expand and collapse
  • Dynamic measures for listing tables
  • URL-based image insights
  • Extended styling options

Enhanced security

  • JWT-based embedded dashboards
  • Concurrent OAuth and PAT authentication support

Upgrade considerations

Important

Upgrade considerations for previous 2024.x and 2025.7 releases also apply to this release unless stated otherwise.

Data agent upgrade considerations

This release uses Data Agent version 13.0.5. Upgrade to this version.

  • Customers upgrading from a release before 2025.7 must follow the steps mentioned in 2025.7.1.
  • Customers upgrading from a 2025.7.x release:
    • For Incorta Cloud clusters: No action is required. The data agent will be upgraded automatically during the cluster upgrade.
    • For other deployments (On-Premises and customer-managed cloud clusters): Upgrade the data agent manually.

Behavior changes

Data flow execution context update after renaming

Previously, renaming a data flow did not update its execution context file, preventing the flow from reinitializing.
Starting with this release, renaming a data flow automatically updates the execution context, ensuring proper initialization.

Dynamic Group By support for scheduled dashboard exports

Scheduled dashboard exports in Excel and CSV formats now apply the selected or default Dynamic Group By configuration for each insight, ensuring that exported data reflects the intended grouping defined for the dashboard.

Null values in KPIs

Starting with this release, KPI insights now honor the Null Value Representation setting configured in the CMC > Tenant Configurations > Customizations, ensuring consistent handling of null values in KPI and tabular insights.

A change in the schema settings

The Performance Optimization option in the schema Settings and More Options menus has been removed starting with this release. You can continue to manage in-memory data behavior at the table level through each table’s Advanced Settings > Loaded Data in Memory, which provides more control over memory-loading options.

Post-load calculation as part of the Running phase

Post-load calculations are no longer handled as a separate stage and are now included within the Running phase. You can review the Running phase breakdown, including Extraction, Deduplication, Load, and Post-load, in the Load Job Details Viewer.

Stricter permissions for single insight delivery

Users must now have at least Share access to a dashboard and the appropriate roles to send or schedule the delivery of single dashboard insights via email or send them to data destinations. Those lacking the required access can no longer perform these actions, aligning insight-level permissions with the existing rules that govern dashboard sharing and delivery.

Disabling the data agent memory manager

The data agent memory manager introduced in the 2025.7.1 release is now disabled by default.

Data Studio Data Quality Rules enhancement

Data quality rules are now managed in metadata rather than CSV files, with automatic migration, improved export and import support, and enhanced accuracy and governance.


New features

Data management

Incorta Nexus

Dashboards, Visualizations, and Analytics

Architecture and Application Layer

Updates


Data management

Data Studio enhancements

Share data flows with view, share, or edit access

Data Studio now supports sharing data flows with users or groups who have any of the following roles: Schema Manager, Advanced Analyzer User, or SuperRole. From the More options (⋮) menu of a specific data flow, select Share to grant access.

The available access levels are:

  • View access

    • Allows users to open and review the data flow, preview cached data, and view recipe nodes, code, and information.
    • Users with view access cannot edit, revalidate, delete, disconnect, or share the flow.
  • Share access

    • Includes all view permissions and also allows users to share the data flow with others, granting either view or share access.
  • Edit access

    • Allows users to modify the data flow, including adding or editing recipes, validating data, deleting or disconnecting flows, and managing configurations.
    • To edit a data flow, the user must also have access to the associated schema(s).

Data Studio flows layout enhancement

This release introduces an enhanced layout for the Data Flow Editor within Data Studio, improving recipe management. The Recipes panel is now expanded by default, scrollable, and includes a search feature for quick recipe discovery. It also introduces a new Input & Output section that groups the Input Table and Save MV recipes. Selecting a recipe now automatically displays the Edit panel on the right side of the Data Flow Editor.

New Columns and Alerts tabs in the Results pane

The Results pane, a resizable and collapsible panel displayed at the bottom of the data flow, has been expanded to include new Columns and Alerts tabs. The Columns tab displays the schema definition, listing the recipe’s columns and their data types. The Alerts tab shows warnings and alerts when required recipe configurations are missing or become invalid. The Code preview has been relocated to the Results pane and is now available as a dedicated Code tab.

Note

Data Studio does not support Oracle metadata.

Spark configuration option for Data Flows and Save MV recipes

This release introduces a new option to configure Spark properties when creating or editing Data Flows and Save MV recipes.

For Data Flows:
In the Data Flow Editor, select Settings in the Action bar, and under the new Spark Properties section, select Add Property to define key–value pairs. You can add, edit, or delete configurations as needed.

For Save MV recipes:
In the Save MV recipe dialog, use the new Spark Properties section to define additional Spark key–value pairs for the deployed MV.

Any configuration changes require restarting or reconnecting the data flow and redeploying the Save MV recipe for the updates to take effect.

Note

By default, the Spark application uses the following configuration values:

  • spark.driver.memory: 1g
  • spark.executor.memory: 1g
  • spark.executor.cores: 1

You can modify these default values using the new Spark configuration option.
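As a hedged illustration of how property overrides interact with the defaults above (the property names are standard Spark settings; the override values are purely illustrative, not recommendations):

```python
# Defaults listed above, expressed as key-value pairs.
defaults = {
    "spark.driver.memory": "1g",
    "spark.executor.memory": "1g",
    "spark.executor.cores": "1",
}

# Hypothetical overrides you might add under Spark Properties
# for a memory-heavy data flow or Save MV recipe.
overrides = {
    "spark.executor.memory": "4g",
    "spark.executor.cores": "2",
}

# Properties you define take precedence over the defaults.
effective = {**defaults, **overrides}
```

Remember that changed configurations only take effect after restarting or reconnecting the data flow and redeploying the Save MV recipe.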

New pop-up to view Recipe information

This release adds a new Recipe pop-up in the Data Flow Editor within Data Studio, allowing users to view recipe information by hovering over a recipe. The pop-up displays key details, including status, last run time, duration, and row count. For recipes with multiple outputs, such as the Split Recipe, row counts are shown for each output.

Self-service feature access request

Users can now request access to features, such as Dashboards, Data Catalog, and Nexus, regardless of their assigned security role, through a governed approval workflow. This enables greater flexibility and streamlines access management across the platform.

Prerequisite configuration

  • In the Cluster Management Console (CMC), toggle on Governed User Access Requests under Default Tenant Configurations → Security.

Request access

  • Navigate to Account → Request Access.
  • In the Request Access To ... dialog, select Request Access next to the required access level, for example, Use Advanced Analytics, Manage Dashboards, or View Data Catalog.
  • Enter a justification, choose a priority (High, Medium, or Low), and then select Submit Request.

Approval workflow

  • SuperRole users and User Managers receive a notification as reviewers.
  • Reviewers approve the request by selecting Add & Accept, or decline it by selecting Cancel & Decline.

Introduction of Data Governance across CMC and Analytics service

This release introduces Data Governance as a unified capability across the CMC and Analytics service. In the Cluster Management Console (CMC), Default Tenant Configurations now include a new Data Governance tab, consolidating Data Catalog settings previously located under Incorta Labs, with enablement permissions available to both Admin and Cloud Admin roles. In the Analytics service, Data Governance now includes the existing Data Catalog and a new Data Quality tab, providing a centralized entry point for governance-related capabilities.

For more information on the Data Governance configurations, refer to Guides → Configure Tenants.

New Data Quality capabilities

This release introduces a new Data Quality tab under Data Governance, expanding the platform’s capabilities for defining, managing, and enforcing data quality rules. Data Quality includes two tabs: Overview and Rules.

Data Quality Overview

As part of Data Governance, the platform now provides a centralized Overview page to monitor quality and compliance across the environment. This tab is accessible to Data Catalog Users, Data Governors, and Admin users.

The Overview page includes three tabs: Data Quality, Metadata Quality, and Business Rules Compliance. These tabs can be customized by Data Governors who have the Analyzer User role and view access to the schema containing the quality scores.

Data Quality

The Data Quality tab provides a high-level view of data quality rule evaluation results, including the following:

  • Overall Data Quality score
  • Scores per domain
  • Scores per quality dimension, such as Validity, Timeliness, and Completeness
  • Number of active versus inactive rules
  • Number of data quality issues reported through Report Quality workflows
  • Score Trend across multiple rule execution runs

Scores are calculated based on evaluations of rules defined under Data Quality → Rules.

Metadata Quality

The Metadata Quality tab provides a view of the health and completeness of metadata assets across the platform, including the following:

  • Overall Metadata Quality score
  • Total number of assets
  • Number of metadata quality issues reported through Report Quality workflows
  • Scores per domain
  • Scores per metadata quality dimension, such as Completeness, Accuracy, and Consistency

Metadata quality evaluation checks assets’ metadata attributes, such as assigned owners, certification, classification, and documentation.

Scores reflect the results of metadata quality evaluations across multiple runs.

Business Rules Compliance

The Business Rules Compliance tab provides visibility into compliance with business rules defined in the Data Studio, including the following:

  • Overall business rules compliance score
  • Scores per quality dimension
  • Score trend across multiple runs

Results are calculated based on evaluations of business quality rules implemented using Data Studio.

Important

The Data_Quality schema depends on the following reserved quality recipe columns:

  • __INCORTA__DQ__VIOLATIONS
  • __INCORTA__DQ__CHECK_TS

If these columns are removed from the Data Studio data flow or excluded from the generated MV, the entire schema load fails, even if other flows contain valid quality recipes. As a result, the Data Quality scores are not populated.

Workaround:
Ensure the reserved columns exist in all flows. For existing flows, restore any missing columns, redeploy the MV, and then reload the Data_Quality schema.
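The check described above can be sketched as a simple pre-deployment validation. This is a hypothetical helper, not an Incorta API; only the reserved column names come from the note above:

```python
# Reserved quality recipe columns required by the Data_Quality schema.
REQUIRED_DQ_COLUMNS = {"__INCORTA__DQ__VIOLATIONS", "__INCORTA__DQ__CHECK_TS"}

def validate_quality_flow(columns):
    """Raise if a flow's output is missing a reserved quality column."""
    missing = REQUIRED_DQ_COLUMNS - set(columns)
    if missing:
        raise ValueError(f"missing reserved quality columns: {sorted(missing)}")
```

Running such a check against each flow's output columns before redeploying the MV helps avoid a failed Data_Quality schema load.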

Data Quality Rules

The Data Governor role is now expanded to manage data quality rules. Data Governors can now create simple, no-code data quality rules at the term level. These rules are automatically applied to all associated physical columns, ensuring consistent enforcement of data quality standards across all relevant data.

Create and manage data quality rules

Data Governors can set data quality rules as follows:

  • Navigate to the Data Quality tab and select + New → Rule in the Action bar.
  • In the Define a Rule dialog, enter the rule Name and Description, then select a Validation.
    • When a validation requires a parameter (for example, Contains), the system displays a Validation Value field.
    • The Trim and Case Sensitive toggles also appear and can be enabled as needed.
    • The system automatically sets the Validation Type based on the selected validation.
  • Assign the rule to a glossary term from the Data Catalog and assign up to three owners.
  • Enable the Activated toggle to activate the rule.
  • Select a severity level: Critical, High, Medium, or Low.
  • Select Save to create the rule.

The system displays the rule under the Rules tab, where you can update or remove a rule using the Edit or Delete icons.

Note
  • You can apply multiple rules to the same glossary term.
  • The system evaluates data quality rules when you incrementally load a schema.

Search and filter data quality rules

Find and manage data quality rules using search and filtering options. Search by rule name or filter rules by Type, Validation, Term, Status, Severity, Created By, or Owner.

Export and import of data quality rules

Simplify migration and reuse across environments by exporting and importing data quality rule definitions.

  • To export rules, select More Options (⋮) → Export in the Action bar. This downloads the rules as a .zip file.
  • To import rules, select + New → Import Rules in the Action bar. In the Import Rules dialog, browse to or drag and drop the .zip file.

Note
  • When importing, if the system detects existing rules with the same name, a warning displays. You can review the warnings and select Ignore to discard duplicates or select Overwrite to replace existing rules.
  • If imported rules reference glossary terms that do not exist, a warning displays. Select Review Issue to view detailed information, including the line number, issue type, and issue description. Hover over the Eye icon to view the exact rule and the associated glossary term.

Download Data Catalog and Data Quality assets

You can now download Data Catalog and Data Quality assets as packaged bundles, enabling analysis, auditing, or backup while adhering to existing access and governance controls.

To download an assets package, navigate to Data Governance → Data Catalog or Data Quality. In the Action bar, select More options (⋮) → Download Catalog Overview Assets or Download Quality Overview Assets, depending on the selected tab.

This feature supports both Cloud and On-Premises deployments. For more information, refer to the Data Catalog Package Installation Guide and Data Quality Package Installation Guide.

Enhanced Data Catalog auditing

This release adds more granular auditing through new folders in the data_catalog_audit directory, which is accessible from Data → Local Data Files, improving transparency, traceability, and oversight for data governance.

  • core_audit: Tracks entity-level changes, such as glossaries, domains, asset relationships, and metadata updates.
  • dataquality: Tracks changes to quality rules under Data Governance → Data Quality → Rules, including creation, updates, and listing.
  • system_event: Captures system-level operations, such as user access, catalog synchronizations, and searches.
  • workflow: Tracks user-driven workflow actions, such as quality issue reports and access requests.

Data Catalog enhancements

Metadata draft mode and approval workflow

Data Catalog now allows contributors to save metadata edits as drafts, enabling review and approval before publishing. This controlled draft workflow helps maintain data accuracy and governance for metadata updates, including domains, terms, documentation, certification, and rating and review.

For example, when Data Catalog users certify an asset or submit a review through the Information panel (i), a request is submitted to the Data Governor for approval.

Users can edit or delete their drafts and track their submission status—Pending, Accepted, Canceled, or Rejected—through clear notifications. Drafts are visible only to Data Catalog users and Data Governors, ensuring secure collaboration during the review process. Request types can be viewed and filtered within the Requests panel using the Request Change For filter. The approval workflow can be customized; contact your Account Executive for configuration.

For more information, refer to Data Catalog Manager → Submit and Track Requests.

Report data and metadata quality issues

This release enables reporting of data and metadata quality issues directly from the Information panel (i) within the Data Catalog, integrated with issue tracking systems, such as Jira. The Report an Issue option is available on all assets and must be configured in the CMC; contact your Account Executive to enable this feature.

For more information, refer to Data Catalog Manager → Report a Quality Issue.

Term filter for tables and views

The Data Catalog now supports filtering tables and views by assigned business terms to easily locate assets. Data Catalog Users can narrow results by selecting one or more terms in the filter bar, improving discoverability and enabling more consistent data exploration across the Data Catalog.

Support for multiple line separators in imported files

The Data Catalog import process now supports multiple line separators. Import files can be edited using common text editors or spreadsheet tools such as Excel, saved as CSV, and imported successfully. Files saved with different line encodings, including Windows line separators, are fully supported.

Expanded classification levels configurations

Data Governors can now add new classification levels and edit or delete existing ones directly from the Configure Classification dialog. This enhancement facilitates managing privacy levels.

For more information, refer to Tools → Data Catalog Manager and Configure Classification Levels.

Extended glossary term group hierarchy

This release extends glossary terms within the Data Catalog to support up to five levels of term groups, enabling more flexible and scalable business hierarchies.

Domains and glossary terms naming uniqueness per level

This enhancement improves the naming flexibility of domains and glossary terms by removing the global uniqueness constraint and enforcing uniqueness only per level, using case-insensitive validation.

AI-powered classification for glossary terms (Preview)

This release introduces an AI-assisted auto-classification capability that suggests classifications for unclassified glossary terms, enabling Data Governors to efficiently apply classifications. From the Data Catalog home page, navigate to Glossary Terms and select View All to access the full list of terms. When unclassified terms are available, a Classify button displays in the top-right corner of the Glossary Terms page to initiate the auto-classification process. You can select up to ten terms to be auto-classified; classification suggestions and applied changes are limited to those selections. The number of terms is configurable from the CMC.

This feature must be enabled in the CMC to appear in the Data Catalog. Contact your Account Executive to configure and enable the auto-classification functionality.

Important

If a Data Governor updates a term’s classification while auto-classification is in progress and a conflict occurs:

  • The AI auto-classification does not override the manual update.
  • A warning is displayed indicating the classification was recently updated manually and cannot be changed automatically.

Column-level auditing across Analytics and SQL interfaces

Incorta now supports column-level auditing to track access to all or masked columns used across Dashboards, SQLi, Advanced SQLi queries, Business Notebooks, and Nexus, strengthening traceability and data governance across both analytical and SQL-based access. Each audit record captures column usage, providing visibility into how data is accessed and enabling more effective compliance and investigation workflows. Auditing can be scoped to all columns or limited to masked (classified) columns, allowing organizations to balance governance requirements with performance considerations.

To use this feature, configure the following settings in the CMC under Default Tenant Configurations → Data Governance:

  • Toggle on Data Governance.
  • Toggle on Column Audit (appears only when Data Governance is enabled).
  • Select the Audit Scope:
    • All — Audits all columns
    • Masked — Audits only columns with classifications that have Mask Data enabled, such as Restricted or Confidential.

After configuring Column Audit in the CMC, column-level audit data becomes available in the Analytics platform under Data → Local Data Files → column_audit.

Advanced SQLi support for remote tables (Preview)

Advanced SQLi now supports querying remote tables directly. This enhancement enables real-time access to lakehouse data, allows dashboards and external BI tools to query remote datasets, and reduces the storage overhead and operational complexity previously required when using materialized views. The feature introduces an efficient way to analyze datasets stored in data lakehouses, such as Databricks, S3, ADLS, or GCS, without loading data into Incorta.

With the new enhancement, you can access and analyze remote data through Spark SQL views, Advanced SQLi queries, and BI tool connections, making data immediately available for analysis without any ingestion steps.

Direct data query (Preview)

This release introduces the Direct Query feature, a significant capability that enables you to query live data directly from source systems and visualize results instantly within Incorta. This enhancement provides real-time access to source data, drastically reducing your time-to-insight and simplifying data exploration by eliminating traditional data modeling and ingestion steps.

Important
  • This is a preview feature available on Incorta Cloud only and requires a Premium cluster.
  • The feature is disabled by default. To enable it, contact the Support team to complete the prerequisite configurations.
  • Afterward, you can configure the feature from the Cluster Management Console (CMC) > Clusters > <yourCluster> > Tenants > <yourTenant> > Configure > Direct Queries Integration.
  • This feature currently supports Snowflake as the exclusive data source, with plans to support additional data sources in future releases.
  • It is recommended to use aggregated queries to improve performance.

How it works

After configuring the feature, you can create a Snowflake result set or business view to execute SQL queries against Snowflake data, and then visualize results directly in Incorta's Insights interface, bypassing traditional schema creation and data-loading workflows. Query results reflect real-time source data, ensuring analysis accuracy without refresh delays.

Caching direct queries

To balance real-time data needs with system performance and resource efficiency, Incorta supports caching direct query results at the insight level. You can specify the Cache Validity Duration globally in the CMC, or at the result set or business view level to override the global setting, providing flexible control over caching behavior for different workloads.

When a user opens an insight created via a direct query, the system checks the cache validity. If the cache has expired, the system automatically retrieves fresh data from the source, ensuring optimal performance without compromising data freshness.
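The cache-validity check can be sketched as a simple time-to-live lookup. This is a hypothetical illustration of the behavior described above; the function and field names are not Incorta's:

```python
import time

def get_insight_data(cache, key, fetch_fn, validity_seconds):
    """Return cached results while valid; otherwise re-query the source."""
    entry = cache.get(key)
    if entry and time.time() - entry["cached_at"] < validity_seconds:
        return entry["data"]  # cache still within the validity duration
    data = fetch_fn()  # cache expired or absent: fetch fresh data from the source
    cache[key] = {"data": data, "cached_at": time.time()}
    return data
```

A longer validity duration reduces load on the source system; a shorter one keeps insights closer to real time.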

Limitations

  • Drilling down on insights based on Snowflake result sets is not supported.
  • Prompts are not supported. Use presentation variables instead.

External tables: No-copy read from Delta Lake (Preview)

In this release, Incorta introduces external tables as a new schema table type, enabling reading and querying Delta Lake tables in Amazon S3, Azure Data Lake Storage, or Google Cloud Storage without extracting, compacting, or storing any source data in Incorta. This eliminates redundant data copies while maintaining full analytical capabilities, including joins, formulas, and dashboard visualizations.

Key benefits and use cases

  • Eliminate data duplication and reduce costs: Access Delta Lake data directly from S3, ADLS, or GCS without creating copies in Incorta, reducing storage costs and eliminating synchronization overhead across analytics platforms.
  • Accelerated query performance: Leverage Incorta's Direct Data Mapping (DDM) files to optimize query execution on remote Delta Lake tables, treating them as optimized tables without extraction.
  • Cross-cluster data sharing: Enable multiple Incorta clusters to access the same Delta Lake tables simultaneously, facilitating distributed analytics and collaborative operations.
  • Lakehouse integration: Organizations managing data in Databricks or other lakehouse platforms can add Incorta analytics without re-engineering existing data pipelines or re-ingesting data.

How it works

  1. Create a Data Lake data source with appropriate cloud storage credentials.
  2. Define an external table. Incorta discovers Delta Lake tables and generates the necessary metadata.
  3. Create joins and formula columns as needed.
  4. Load the external table from staging to map to the latest Delta Lake version and generate the formula and join DDM files.
  5. Build insights and business views with the same analytical capabilities as traditional tables while reading directly from the source.

Known limitations

  • Incorta requires direct access to storage buckets containing the Delta Lake data. Catalog-based access is not yet supported.
  • For now, Incorta cannot query Delta tables created by Microsoft Fabric.
  • Source data updates are not auto-detected; schedule loads from staging to refresh mappings and DDM files.
  • The Delta Lake tables must be deduplicated as Incorta does not perform any deduplication.
  • Incremental loads are not yet supported.
  • Deletion vectors of Delta tables are not supported.
  • You cannot configure the following for external tables:
    • Column data type
    • Column encryption
    • Key columns
    • Multi-source settings
    • Partial data loading
    • Data purge operations

Support for unstructured data file upload

Incorta now supports uploading unstructured data files directly into the platform, including documents (PDF, DOC, DOCX, XML, LOG), images (JPEG, PNG, GIF, SVG, TIFF), audio (MP3, WAV), and video (MP4, MOV, AVI). Schema Managers can upload these files using the same interface available for CSV and Excel files, with support for compressed file extraction while preserving the folder hierarchy. Additional file extensions can be configured through the allowed.file.extensions service property. This enhancement enables unstructured data to be ingested for downstream use cases such as RAG pipelines, searchable vector databases, and semantic search powered by Incorta AI.

Enhanced delete synchronization: Support for inclusion sets

Incorta now supports using inclusion sets to synchronize delete operations between a data source and Incorta, in addition to the existing exclusion sets functionality. You can now directly use primary key extracts (inclusion sets) to retain valid records. This enhancement simplifies the configuration and improves performance for data sources such as Oracle Cloud Applications, which provide primary key extracts rather than deleted records.

In the table's Advanced Settings, select between using an inclusion set or an exclusion set for the Synchronizing delete operations between a data source and Incorta option, and then map columns accordingly.
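The two modes can be sketched as simple set operations over primary keys (a hypothetical illustration, not Incorta's implementation):

```python
def sync_deletes(current_keys, key_set, mode):
    """Return the keys to retain after delete synchronization.

    'inclusion': keep only keys present in the source's primary-key
    extract (valid records). 'exclusion': drop keys listed in the
    source's deleted-records extract.
    """
    if mode == "inclusion":
        return current_keys & key_set
    return current_keys - key_set
```

Inclusion sets suit sources like Oracle Cloud Applications, which publish the keys that still exist rather than the keys that were deleted.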

Incremental load enhancements for materialized views

Incremental loads for MVs based on the maximum value of a column are now significantly faster and more efficient, especially for large datasets with many Parquet files. Previously, Incorta recalculated the incremental maximum value by scanning all Parquet files on every incremental load, resulting in substantial overhead. With this enhancement, Incorta now tracks and stores the maximum value of the specified column during the full load and updates it after each incremental load. On subsequent runs, the MV engine reads this stored value directly instead of recomputing it, reducing I/O, improving performance, and accelerating incremental loads for high-volume MVs.
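The tracking behavior described above can be sketched as follows; the names and state layout are hypothetical, but the idea matches the description: store the column maximum once, then filter against it instead of rescanning all files:

```python
stored_max = {}  # per-MV maximum, persisted across loads (illustrative state)

def incremental_rows(mv, rows, column):
    """Return only rows newer than the stored maximum, then update it."""
    last = stored_max.get(mv)
    fresh = [r for r in rows if last is None or r[column] > last]
    if fresh:
        stored_max[mv] = max(r[column] for r in fresh)
    return fresh
```

Reading the stored maximum directly replaces the Parquet scan that previously ran on every incremental load.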

Data lineage enhancements

This release delivers significant improvements to the Data Lineage experience, providing precise impact analysis, faster root-cause investigation, and stronger data governance across all data flows and assets.

These improvements include:

  • Granular lineage for SQL-based connectors (Preview)
  • Enhanced lineage visualization
  • Data classification visualization

Granular lineage for SQL-based connectors (Preview)

The lineage now displays the actual source tables and source columns for objects reading data using SQL-based connectors, instead of showing only the data source.

Note

You need to update the connector version from the Marketplace, and then revalidate the data source to display the source table or column.

Important

This feature lays a strong foundation for smarter lineage detection, with support set to grow over time. While some complex queries, SQL patterns, or connectors may not yet reveal all source tables, columns, or relationships, ongoing enhancements will progressively improve coverage and detection capabilities.

Enhanced lineage visualization

  • Streamlined visualization: Redundant or duplicate references to the same entity have been removed, resulting in a clearer lineage graph and improved readability for complex relationships.
  • Improved relationship representation: Relationship types are now represented with intuitive symbols. Additionally, you can display additional details, including join logic and formula column definitions, through the new additional information pane.

Data classification in lineage diagrams

Lineage diagrams now display a classification badge, such as "Restricted" or "Confidential", on each asset (tables, views, and dashboards) that contains columns with masked data. This provides immediate visibility into sensitive data flows, supporting faster and more accurate impact assessments.

When an asset is selected, a side panel indicates whether the asset contains masked data, helping data governance teams quickly evaluate exposure risk across downstream dependencies.

Intensive mode in the cleanup job

The cleanup job has been enhanced to automatically remove stale, temporary, or corrupted Parquet and DDM files, particularly those left behind after schema or table deletions, ungraceful shutdowns, or outdated load jobs, preventing unnecessary resource consumption.

The intensive cleanup job runs weekly by default. However, you can adjust the frequency by configuring the cleanupjob.intensive.refresh.time property (in minutes) in the Loader service's service.properties file.
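For example, to run the intensive cleanup daily instead of weekly, you might set the property as follows. The property name comes from this note; the value shown is illustrative (the property is expressed in minutes, so 1440 corresponds to once a day):

```properties
# Loader service service.properties -- run the intensive cleanup
# every 1440 minutes (daily) instead of the weekly default.
cleanupjob.intensive.refresh.time=1440
```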

Early post-load calculations

A new Early Post-load Calculations option is now available for load groups. Enabling this option locks the schemas being loaded at the start of the load job, as illustrated in the DAG Viewer. This allows the Loader Service to start post-load calculations as soon as each schema object completes Extraction, Enrichment, and PK-Index, without waiting for the entire group to finish, shortening the overall load duration and reducing memory and storage bottlenecks.

Continue a load plan only when a load group succeeds

Load plan groups now include a Continue On setting to control subsequent load group execution based on the current group's completion status.

  • Success or Finished With Errors (default): The next load group executes regardless of errors in the current group, maintaining the existing behavior.
  • Success: The next load group executes only if the current group completes without errors; failures halt subsequent group execution.

This configuration enables strict dependency management, preventing downstream load groups from processing when upstream data loads finish with errors, while allowing flexibility to continue execution when errors are acceptable. Upgraded load plans default to Success or Finished With Errors, preserving current behavior.


Incorta Nexus

Generate Story customization options

The Generate Story feature now includes a Customize option that provides additional control over the structure and appearance of AI-generated dashboards.

The available customization options include:

  • Layout selection to determine how visualizations are arranged in the dashboard:
    • Top-to-Down (vertical flow)
    • Left-to-Right (horizontal flow)
    • Slides (presentation-style layout)
  • Color palette selection to define the dashboard style, including Sophisticated, Classic, Bright and Bold, 90's Retro, Contemporary, or Custom.
  • Story title editing to define the title of the generated story.
  • Visualization selection to choose which AI-suggested visualizations to include in the story.

This capability provides greater flexibility in configuring the layout, styling, and visual content of AI-generated stories.

Enable dashboards for Nexus

This release introduces a new toggle to enable dashboards for Nexus through the Dashboard Info panel or the Data Catalog. Once enabled, the dashboard and its underlying insights become available to Nexus.

File upload to Nexus (Preview)

This release introduces the option to upload files directly through Nexus by attaching one or more files to Nexus chat and asking questions about their content. The file upload feature enhances Nexus interactivity and supports a more data-driven chat experience, enabling users to work with multiple datasets in a single conversation.

Using file upload

Please contact Incorta Support to enable file uploads and configure the related settings in Nexus Advanced Settings. Once this setup is complete, you can:

  1. Open Nexus.
  2. Upload any file or drag and drop it into Nexus Chat.
  3. Use the shortcut you configured in Advanced Settings to ask questions about your attached documents (for example, /ask-docs).

Notes
  • A maximum of 5 files can be uploaded per chat session.
  • Each file can be up to 20 MB in size.
  • Files are temporarily stored in a secure directory managed by Nexus and may be deleted automatically to free up space.
  • Files remain accessible only within the current chat; exiting or clearing the chat removes access to those files.

Nexus chat (Preview)

Nexus Chat now offers friendlier, more informative conversations, clarifies agent capabilities, and supports interactive chats. It can answer general knowledge questions while exercising caution with questions that require domain-specific data.

Chat session history and parallel sessions

Incorta Nexus introduces session history and improvements to the chat experience. The platform now saves Nexus chat sessions and displays them in a history panel on the left side. The panel is expandable and collapsible, and only the active session accepts new questions. Nexus also supports running multiple sessions in parallel, each with a different context.

Chat usability enhancements

This release adds several usability improvements to the chat interface. Insights can now be docked and pinned to the session canvas for side-by-side comparison while continuing the conversation. Additional improvements include enhanced visibility of sources, highlighted referenced questions for clearer response context, and a centralized action area below each response for copying or downloading generated content.


Dashboards, Visualizations, and Analytics

Result sets usability enhancement

You can now quickly create result sets using the new Add New Result Set option in the Data panel within the Analyzer. Create result sets using either of the following:

  • Spark SQL: write the SQL query, and then select Done.
  • Analyzer: drag columns into the trays in the Insight panel, configure the properties, and then select Save.

The created result set appears in the Data panel under Insight Datasets. You can use the edit and delete icons next to the result set to modify or remove it.

Insight Preview mode in the Analyzer

This release introduces a new Preview mode in the Analyzer that enables viewing insights with the same width and height as on the dashboard. This ensures accurate validation of layout, proportions, and styling without the need to switch between the Analyzer and the Dashboard.

The new Preview mode (eye icon) is available in the Analyzer Action bar and is enabled by default.

Note

Preview mode does not support the following:

  • Rich Text insights, as the full canvas is required to display the full-size rich text edit menu.
  • The Dynamic Layout setting for tabular insights; tables are rendered in Preview mode with Dynamic Layout disabled.

Hide Default Labels option for Map visualizations

This release adds an option to hide default map labels while keeping data point labels visible, reducing visual complexity and improving focus on the data.

Enable this option by toggling Hide Default Labels in Settings → General.

Expand and collapse rows functionality in Pivot tables

Pivot tables now support expanding and collapsing rows to view data at different levels of detail. This improves the user experience for financial and analytical dashboards that rely on complex, multi-level pivot tables.

Dynamic measures option for Listing tables

Listing tables now support a Dynamic Measures option, similar to Aggregated tables, enabling dashboard users to specify which measures to render in the table. This feature provides a simplified view when multiple measures are present.

In Settings → General, enable the Dynamic Measures option, then select one or more measures as the default dynamic measures. The default value is All measures.

Direct children count in the Organizational chart

A new setting is available to accurately calculate and display the number of children for each node in the Organizational chart. Navigate to Settings → General, and under Show Count, select Off, All Children, or Direct Children.

Selecting Direct Children displays only the count of immediate children for each node. For example, if Adam has one direct report, the chart displays 1. Counts for deeper hierarchies, such as grandchildren or total underlying levels, can still be calculated using formula columns as needed.

Decimal places control for percent of total in Pie-type visualizations

You can now control the number of decimal places displayed for the percent of total values in Pie, Donut, and Pie-Donut visualizations.

In Settings → Data Values, use the Decimal Places field to enter the desired precision.

Formula as a reference line in Bubble visualizations

You can now add formulas as reference lines in Bubble visualizations. In the pill’s properties, select Add Reference Line, then choose Formula from the Reference Type dropdown menu.

The Formula option allows you to define a custom expression that calculates a value to display as a reference line. Formulas can include absolute values, variables, functions, and column references. Bubble visualizations support adding multiple formula-based reference lines.

New zero label rotation option in Column visualizations

A new 0° label rotation option is now available in the X-Axis → Label → Rotation settings of Column charts, enabling dashboard developers to keep axis labels horizontal for improved readability.

New Label Position setting for Bar-type, Column-type, and Tornado visualizations

A new Label Position setting is now available for Column, Stacked Column, Percent Column, Bar, Stacked Bar, Percent Bar, and Tornado visualizations.

You can now control where value labels appear by selecting Settings → Data Values → Label Position, and then choosing from the dropdown menu: Outside, Inside Start, Inside Middle, or Inside End, based on your layout preference.

Scrollable chart visualizations

Insights with x-axis and y-axis, such as Bar and Line charts, now support vertical and horizontal scroll bars, allowing dense data to be viewed without resizing the insight on the dashboard. Responsiveness is now optional, providing explicit control over whether charts adapt to the dashboard size or maintain their original dimensions.

URL-based Image insights

Dashboards now support adding images as insights using a URL. Image insights support Incorta variables in the URL, optional alt text, and styling options such as object fit, title, shadow, and corner radius.

Waterfall chart enhancements

The Waterfall chart now supports Cumulative Subtotal bars that display the running total at the end of each group. It also supports a Hide First Group option to exclude the initial group for clearer comparisons. Additionally, the X-axis label now displays the group-by dimension instead of generic subtotal labels.

The Cumulative Subtotal and Hide First Group toggles are available under General settings → Waterfall when Subtotal is enabled.

Enhanced chart styles

Hairline width control in Pie, Donut, Funnel, and Pyramid charts

Hairline width control is now available in Pie, Donut, Funnel, and Pyramid charts, enabling dashboard developers to adjust the connector line between data labels and chart segments for improved visibility and presentation.

The new Hairline width setting offers a range of values from 1 through 10, with a default of 2. When Data Labels are disabled, this option is hidden from the chart settings.

Extended font formatting options

You can now apply bold, italic, and underline formatting, as well as adjust font size and font color, in supported font customization settings. These options are available in applicable font configurations for visualizations, providing greater flexibility in text styling.

Border thickness control in Bar and Column charts

You can now adjust the border thickness in Bar and Column charts to enhance visual clarity and customization by selecting Settings → Bar → Border Width.


Architecture and Application Layer

Automatic UI updates with zero downtime

You can now enable Automatic UI Updates from the Cluster Management Console (CMC) under Server Configurations → Incorta Labs to have the platform automatically run the latest Incorta UI, without requiring platform upgrades or restarts. The Analytics service Sign-in and About pages display the Web version number. Disabling this option restores the default UI bundled with your current platform release.

Note

When Automatic UI Updates is enabled, configure the following under Server Configurations → Email to ensure scheduled dashboards execute successfully:

  • Local Rendering URL Protocol: https
  • Local Rendering Host: cluster URL
  • Local Rendering Port: 443

For more information, refer to Guides → Configure Server.

Enhanced observability for proactive support

This release introduces enhanced observability capabilities for Cloud clusters with improved logging, metrics, and tracing that provide real-time visibility into system health and behavior. The new observability suite correlates logs, metrics, and traces to help monitor performance, proactively identify bottlenecks, troubleshoot issues faster, and pinpoint root causes of complex problems, usually before they impact end users.

Important: Feature availability

This feature is currently available exclusively for Incorta Cloud environments and is disabled by default. To enable and configure it for your cloud clusters, please contact the Incorta Support Team.

JWT-based embedded dashboards

Incorta now supports embedding dashboards and individual insights directly within external applications using a JWT-secured, iframe-based integration. This feature provides a seamless user experience by bypassing the standard login page and using JSON Web Tokens (JWTs) for secure authentication and authorization, ensuring that user context and data security are maintained without requiring the end user to enter credentials.

How it works

  1. Enable and configure the feature in the Cluster Management Console (CMC):

    1. Navigate to Clusters > <yourCluster> > Cluster Configurations > Server Configurations > Security.
    2. Turn on the Enable Iframe inclusion toggle, and then turn on Enable JWT-based Embedded Dashboards.
    3. Enter the required configurations, including the JWT issuer and the user claim name.
  2. Use the following URL format to embed dashboards and insights in your applications: https://<YourIncortaCluster>/incorta/embed/dashboard?tenant=<TenantName>&dashboardGUID=<GUID>&insightId=<InsightGUID>&token=<JWT_Token>

    Example:

<iframe
  src="https://mycluster.cloud.incorta.com/incorta/embed/dashboard?tenant=default&dashboardGUID=0a12345-bcd5-6ef7&insightId=abc1234-de56-78fg&token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJodHRwczovL2F1dGguZXhhbXBsZS5jb20iLCJhdWQiOiJpbmNvcnRhLWVtYmVkIiwic3ViIjoiZW1tYS5tb2hhbWVkIiwiZXhwIjoxNzM1Njg5NjAwfQ"
  width="100%"
  height="900">
</iframe>
```
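Because the embed URL concatenates several query parameters, it can be convenient to build it programmatically. The sketch below assembles the documented URL format with Python's standard library; the cluster host and GUIDs are placeholders, and the helper itself (build_embed_url) is illustrative rather than an Incorta-provided API.

```python
from urllib.parse import urlencode

def build_embed_url(cluster, tenant, dashboard_guid, token, insight_id=None):
    """Assemble the JWT-based embed URL in the documented format.
    Placeholder values only; obtain the JWT from your identity provider."""
    params = {"tenant": tenant, "dashboardGUID": dashboard_guid}
    if insight_id:
        params["insightId"] = insight_id  # optional: embed a single insight
    params["token"] = token  # JWT validated against the configured issuer
    return f"https://{cluster}/incorta/embed/dashboard?{urlencode(params)}"

url = build_embed_url("mycluster.cloud.incorta.com", "default",
                      "0a12345-bcd5-6ef7", "JWT_TOKEN")
# -> https://mycluster.cloud.incorta.com/incorta/embed/dashboard?tenant=default&dashboardGUID=0a12345-bcd5-6ef7&token=JWT_TOKEN
```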

Group Auto-Provisioning support for SAML SSO

Incorta now supports automatic group provisioning during SAML SSO. When enabled, it reads the groups attribute from the SAML assertion provided by the Identity Provider (such as Okta, Azure AD, or ADFS).

During login, the system creates groups if they do not exist and assigns the user to the corresponding groups. This capability supports multi-tenant environments and prevents duplicate groups.

Administrators can enable or disable auto-provisioning through configuration.
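For reference, group membership is typically delivered in the SAML assertion as a multi-valued attribute. The fragment below is a generic SAML 2.0 example; the attribute name ("groups") follows this note, but the exact name your Identity Provider emits depends on your IdP configuration, and the group names shown are placeholders.

```xml
<saml:AttributeStatement xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Attribute Name="groups">
    <saml:AttributeValue>Finance-Analysts</saml:AttributeValue>
    <saml:AttributeValue>Dashboard-Viewers</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>
```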

Concurrent OAuth and PAT authentication support

Incorta now supports simultaneous OAuth and Personal Access Token (PAT) authentication, providing architectural flexibility for organizations with diverse integration requirements.

Key Benefits:

  • Seamless Migration: Adopt OAuth for AI-driven integrations and new services without disrupting existing tools or workflows that rely on PATs.
  • Enhanced Capability: Support simultaneous authentication for different API requests and enable the materialization of verified business views even when OAuth 2.0 is active.
  • Independent Security Policies: Both methods operate independently, allowing for granular security control and seamless coexistence in mixed-environment deployments.

Enhanced Schema Status endpoint

The Schema Status endpoint, /schema/{schemaName}/status, now features an optional includeErrors parameter (default: false). When set to true, it enriches the endpoint response with detailed loading errors for the schema and its tables. This enhancement provides developers and administrators with granular diagnostic visibility directly through the API, eliminating the need to manually cross-reference logs or navigate the UI to identify and troubleshoot schema loading failures.
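A request against this endpoint can be sketched as follows. The base URL, schema name, and response payload below are hypothetical (consult the Incorta API reference for the actual response schema); only the endpoint path and the includeErrors parameter come from this note.

```python
import json
from urllib.parse import quote, urlencode

def schema_status_url(base_url, schema_name, include_errors=True):
    """Build the Schema Status request URL.
    base_url is a placeholder for your cluster's API root."""
    query = urlencode({"includeErrors": str(include_errors).lower()})
    return f"{base_url}/schema/{quote(schema_name)}/status?{query}"

url = schema_status_url("https://mycluster.example.com/api/v2", "SALES")

# Hypothetical response shape when includeErrors=true -- illustrative only.
sample = json.loads('{"schemaName": "SALES", "status": "FAILED", '
                    '"errors": [{"table": "ORDERS", "message": "timeout"}]}')
for err in sample.get("errors", []):
    print(f'{err["table"]}: {err["message"]}')
```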

Data extraction via Advanced SQLi and Public API enhancements

The extract to external table endpoint, /extraction/table, now supports exporting verified business views to Delta Lake format, in addition to previously supported Parquet and CSV formats. This enhancement enables you to materialize business datasets and deliver them to external destinations in the format that best aligns with your data architecture requirements. Delta Lake support ensures seamless integration with modern data lake architectures, enabling efficient, reliable, and scalable data sharing.

To specify the format, use the dataFormat parameter in the endpoint request. Supported values are case-insensitive ("CSV", "Parquet", and "Delta_Lake").

Notes
  • If the dataFormat parameter is missing, the export defaults to Parquet.
  • If a non-supported format is provided, the endpoint returns an error.
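The documented dataFormat behavior (case-insensitive matching, Parquet as the default, and an error for anything else) can be mirrored in a small client-side helper before submitting the request. This is an illustrative sketch, not Incorta server code:

```python
# Canonical names per this note; lookup keys are lowercase for
# case-insensitive matching.
SUPPORTED_FORMATS = {"csv": "CSV", "parquet": "Parquet", "delta_lake": "Delta_Lake"}

def resolve_data_format(data_format=None):
    """Mirror the documented dataFormat behavior of /extraction/table."""
    if data_format is None:
        return "Parquet"  # documented default when dataFormat is omitted
    try:
        return SUPPORTED_FORMATS[data_format.lower()]
    except KeyError:
        raise ValueError(f"Unsupported dataFormat: {data_format!r}")

print(resolve_data_format())              # Parquet
print(resolve_data_format("DELTA_LAKE"))  # Delta_Lake
```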

Updates

Data Profiler schema availability update

The Data Profiler schema is no longer provided through the Data Quality Data App. Contact your Account Executive to configure the Data Profiler schema.


Additional enhancements and fixes

Beyond the new features and major enhancements mentioned above, this release includes the following fixes and enhancements that improve the stability, reliability, and overall performance of Incorta.

Enhancements

| Description | Area |
| --- | --- |
| Hidden prompts or presentation variables now remain hidden even when bookmarked. | Dashboards |
| Improved the dashboard filter experience by repositioning the Add button and updating the text field label for clarity. | Dashboards |
| The query interruption mechanism has been enhanced to interrupt running sorting processes, allowing for the immediate termination of long-running sorting operations. Note: Interrupting sorting operations during sync processes requires explicitly enabling this functionality in the engine.properties file. For configuration details, refer to Query Interruption → Interrupting queries blocking sync operations. | Engine |
| You can now download the Schema Diagram in different formats, including PDF, SVG, and PNG. | Schemas |
| Relaxed the schema import validation rules introduced in 2025.7.1 to allow importing schemas that share a name with an existing schema, provided they use different capitalization. | Schemas |
| Incorta now bundles Apache Tomcat 9.0.112 to incorporate the security enhancements and fixes in that version. | Security |

Fixed issues

| Description | Area |
| --- | --- |
| Resolved an issue in multi-Analytics environments using Advanced SQLi where user access to Spark SQL Views was inconsistent: some users could explore and render insights built on Spark SQL views or result sets, while others with equivalent permissions experienced access failures. | Advanced SQLi |
| Fixed an issue where the getColumn function failed to apply data masking correctly, causing masked columns to appear in plain text instead of their masked form. | Built-in functions and Data Classification |
| Fixed an issue where dashboards sent as XLSX or CSV via Data Alerts did not apply default prompts using the Between operator, resulting in exported data not matching the filtered insight. | Dashboards |
| Fixed an issue where domain assignments and tags were not retained when exporting and importing a tenant to another cluster. | Data Classification |
| Fixed an issue where Nexus intermittently generated responses in Spanish by ensuring consistent application of the configured language. | Incorta Nexus |
| Fixed an issue that caused connector auto-upgrades to fail during On-Premises Incorta upgrades when using the Offline Marketplace mode. | Marketplace |
| Fixed an issue where string column length changes in source systems were not reflected in Incorta schemas when revalidating tables, which could cause data truncation or validation errors. | Schemas |
| Fixed an issue where Data Governors encountered a 404 error when editing insights from the internal dashboard; the platform now displays a message indicating that dashboard customization requires Data Governors to have the Analyzer User role and to contact an administrator for access. | Security |
| Fixed an issue where dropdown and list Slicers displayed unnecessary scrollbars, ensuring a cleaner and more consistent user interface. | Slicer |
| Fixed an issue where total values in tables were not aligned correctly when Dynamic Group By was enabled with multiple dimension and measure columns. | Visualizations |

Known issues

For all of the known issues and workarounds in Incorta’s latest releases, refer to Known Issues.