The following are notes, known issues, or restrictions associated with Hybrid Data Pipeline.
JDBC driver installation limitation with Java
When attempting to install the JDBC driver in console mode, an error may occur with some versions of Java. For a successful installation, add the following argument when you run the console installation command:
-Djdk.util.zip.disableZip64ExtraFieldValidation=true
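For example, if the console installation is launched with the java command, the argument is passed as a JVM option. The installer file name and console-mode flag below are placeholders, not the documented command; substitute the command documented for your version of the installer:
java -Djdk.util.zip.disableZip64ExtraFieldValidation=true -jar <jdbc_driver_installer>.jar -i console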
Docker deployment limitations
- FIPS is not supported for Docker deployments.
- SSL connections behind a load balancer and between nodes are not supported for Docker deployments.
FIPS mode
- FIPS is not supported for Docker deployments.
- The On-Premises Connector is not currently FIPS compliant. Therefore, any connections made to an on-premises data source through an On-Premises Connector will not be fully FIPS compliant.
- Hybrid Data Pipeline does not support FIPS for Snowflake connections. Hybrid Data Pipeline uses the Bouncy Castle libraries to provide FIPS 140-2 compliant cryptography, but the Snowflake data store is incompatible with Bouncy
Castle FIPS. The Snowflake data store uses the default Sun Security Provider to create its own SSL context and the Sun Security Provider is not available in Bouncy Castle FIPS. In addition, the Bouncy Castle FIPS Security Provider
does not allow creating custom SSL contexts.
Java 8 data types not supported for OData
For OData connections, Hybrid Data Pipeline does not support the data types introduced with Java SE 8, such as REF_CURSOR, TIME_WITH_TIMEZONE, and TIMESTAMP_WITH_TIMEZONE.
Unusable data stores after Hybrid Data Pipeline server upgrade
Customers can update connectors directly in Hybrid Data Pipeline, as described in Updating data store connectors. If you have updated a connector directly and plan to upgrade the Hybrid Data Pipeline server, you must ensure that your environment picks up the latest connector provided with the upgrade. Before you perform the server upgrade, remove all the connector jar files from the dddrivers directory in the key location.
After removing the connector jar files from the dddrivers directory, you may proceed with the Hybrid Data Pipeline server upgrade. Failure to perform this workaround will render the data stores associated with the connectors unusable.
System database validation during installation
The installer cannot currently validate the necessary database schema objects when any of the following special characters are used in either database user ID or password values: space ( ), quotation mark ("), number sign (#), dollar
sign ($), and apostrophe ('). Therefore, in a standard installation where these characters are used in database credentials, database validation must be skipped to proceed with the installation. Similarly, when performing a silent
installation in this case, the SKIP_DATABASE_VALIDATION property should be set to true. Note that when skipping database validation
in this scenario, the server should install successfully and work with the specified system database.
Silent installation console-generated response file
In a console-generated response file, property values cannot contain the quotation mark character ("), even if it is escaped. For example, the value My"Value or My\"Value would be invalid and could not be used to log in to the Hybrid Data Pipeline service after installation. You can work around this issue either by generating the response file with the GUI installer or by using environment variables to specify values. See Silent installation process in the user's guide for more information about these workarounds.
Driver Files API
The Driver Files API cannot be used to retrieve the output REST file when the On-Premises Connector is used to connect to a REST service via an Automated REST Connector data source.
Performing a silent installation - Log file issue
When performing a silent installation, if the deployment script fails, no 'SilentInstallError.log' file is written. Check 'Installation directory/ddcloud/final.log' to determine the installation status.
The use of wildcards in SSL server certificates
The Hybrid Data Pipeline service will not by default connect to a backend data store that has been configured for SSL when a wildcard is used to identify the server name in the SSL certificate. If a server certificate contains a wildcard,
the following error will be returned.
There is a problem connecting to the DataSource. SSL handshake failed:
sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException: unable to find
valid certification path to requested target
To work around this issue, the exact string (with wildcard) in the server certificate can be specified with the Host Name in Certificate option when configuring your data source through the Hybrid Data Pipeline user interface or management
API.
Load balancer port limitation
The following requirements and limitations apply to the use of non-standard ports in a cluster environment behind a load balancer.
- For OData connections, the load balancer must supply the X-Forwarded-Port header.
- In the Web UI, the OData tab will not display the correct port in the URL.
- For JDBC connections, a non-standard port must be specified with the PortNumber connection property. The connection URL syntax is: //<host_name>:<port_number>;<key>=<value>;... (see the sketch after this list).
- For ODBC connections, a non-standard port must be specified with the Port Number connection option.
- If you are using the On-Premises Connector, a non-standard port must be specified with the On-Premises Connector Configuration Tool.
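For example, a minimal JDBC sketch for the non-standard port case. The jdbc:datadirect:ddhybrid URL prefix, the hybridDataPipelineDataSource property, and the host, port, and credential values below are assumptions for illustration; verify the URL format against your driver documentation:
// The port in the URL maps to the PortNumber connection property (hypothetical values).
String url = "jdbc:datadirect:ddhybrid://myloadbalancer.example.com:8085;hybridDataPipelineDataSource=MyDataSource";
try (java.sql.Connection con = java.sql.DriverManager.getConnection(url, "hdpuser", "hdppassword")) {
    // Work with the connection here.
}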
Web UI
- If the entry page is blank after successfully logging in to the Web UI, refresh the page to load it properly.
- The following intermittent behavior has been observed when managing data source groups via the Data Sources view. After creating multiple data source groups and then deleting one of these groups, users could not
select multiple data source groups. When selecting one group and then another, the previous group was deselected.
- Using Safari 9.1.3, the following behavior has been observed when creating a data source group via the Data Sources view. After clicking New Group under the Data Source Groups tab, users could not select multiple data sources from the list of OData data sources. They could, however, select a single data source or all data sources.
- In the Web UI, when managing Autonomous REST Connector data sources, authentication credentials for the REST service must be entered via the Security tab. When clicking Generate Configuration or
TEST from the General tab, a pop-up window prompts you for authentication credentials if they have not already been specified via the Security tab. At this time, the connectivity
service does not consume and use credentials entered in the pop-up window. Therefore, authentication credentials must be entered via the Security tab.
- An OAuth profile cannot be created for a Google Analytics data source when using Microsoft Edge. To work around this issue, another supported browser such as Chrome or Firefox should be used.
- When using IE 11 to access the Web UI, the domain URL must be qualified (for example, http(s)://domain-qualified-url/hdpui). Alternatively, the "Display intranet sites in compatibility view settings" in IE 11 must be turned off
to use a hostname without a domain address.
- If there are any '%' or '_' characters in the HDPMetadataExposedSchema option, they will not be treated as wildcard characters. The option value specified is considered a literal.
- When a data source is configured with OData Version 4, the OData Schema Map version is 'odata_mapping_v3', and the map does not contain an "entityNameMode" property, any further editing of the OData Schema Map adds "entityNameMode":"pluralize". This affects how entity names are referred to in OData queries. To avoid this, set the entityNameMode to the preferred mode whenever a data source is created or edited. Alternatively, you can remove the "entityNameMode" property from the OData Schema Map JSON while saving the data source if you want to use the default "Guess" mode.
Management API
- When the Limits API (throttling) is used to set a row limit and createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE) is being used, a row-limit-exceeded error is returned at the row limit instead of one row beyond the limit. For example, if a row limit of 45 rows is set and a query returns a scrollable, insensitive result set that exceeds the limit, the connectivity service returns the following error on the 45th row instead of the expected 46th row: The limit on the number of rows that can be returned from a query -- 45 -- has been exceeded. (See the sketch after this list.)
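A minimal JDBC sketch of this behavior, assuming a row limit of 45 has been configured through the Limits API; the connection URL, credentials, and MYTABLE table are hypothetical:
String url = "jdbc:datadirect:ddhybrid://myserver.example.com:8443;hybridDataPipelineDataSource=MyDataSource";
try (java.sql.Connection con = java.sql.DriverManager.getConnection(url, "hdpuser", "hdppassword");
     java.sql.Statement stmt = con.createStatement(
             java.sql.ResultSet.TYPE_SCROLL_INSENSITIVE, java.sql.ResultSet.CONCUR_READ_ONLY);
     java.sql.ResultSet rs = stmt.executeQuery("SELECT * FROM MYTABLE")) {
    int rows = 0;
    while (rs.next()) {
        rows++;  // With this known issue, the limit error is raised on row 45 rather than row 46.
    }
} catch (java.sql.SQLException e) {
    System.out.println(e.getMessage());  // The limit on the number of rows ... has been exceeded.
}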
OData
- Functions are not currently supported for $orderby.
- OData functions are not supported with the On-Premises Connector.
- Functions with default parameters are not working.
- For DB2, BOOLEAN data type does not work with functions in OData.
- For SQL Server and DB2, the OData data types Edm.Date and Edm.TimeOfDay do not work in Power BI if the function is selected from the list of function imports and parameter values are provided. However, Power BI allows the Edm.Date and Edm.TimeOfDay types for function imports when they are passed directly in the OData feed. One workaround is available for the Edm.TimeOfDay type: columns that are exposed as Edm.TimeOfDay should be mapped as TimeAsString in the ODataSchemaMap. In this case, Power BI works as expected.
- In a load balancer environment, when invoking a function import (as opposed to a function) that takes a datetimeoffset parameter, the colon (:) characters in the time value must be URL encoded. For example, the following returns an error:
http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/ODATA_FUNC_GTABLE_DATE
(DATEIN=1999-12-31T00:00:00Z,INTEGERIN=5)
The correct URL encoded example must look like the following:
http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/ODATA_FUNC_GTABLE_DATE
(DATEIN=1999-12-31T00%3A00%3A00Z,INTEGERIN=5)
- When invoking a function import (as opposed to a function) that returns null using Power BI, a data format error is returned. The resolution to this issue is being discussed internally as well as with Microsoft.
- OData 4.0 support for $expand does not work with the following data stores: Salesforce, Google Analytics, and Oracle Service Cloud.
- $expand only supports one level deep. Take for example the following entity hierarchy:
Customers
|-- Orders
| |-- OrderItems
|-- Contacts
The following queries are supported:
Customers?$expand=Orders
Customers?$expand=Contacts
Customers?$expand=Orders,Contacts
However, this query is not supported:
Customers?$expand=Orders,OrderItems
OrderItems is a second-level entity with respect to Customers. To query Orders and OrderItems, the query must be rooted at Orders. For example:
Orders?$expand=OrderItems
Orders(id)?$expand=OrderItems
- When manually editing the ODataSchemaMap value, the table names and column names specified in the value are case-sensitive. The case of the table and column names must match the case of the table and column names reported by the data source.
Note: It is highly recommended that you use the OData Schema Editor to generate the value for the ODataSchemaMap data source option. The Schema Editor takes care of table and column name casing and other syntactic
details.
- The $expand clause is not supported with OpenEdge data sources when filtering for more than a single table.
- The day, endswith, and cast functions are not working when specified in a $filter clause against a DB2 data source.
On-Premises Connector
- The Autonomous REST Composer, available via the Configure Endpoints tab in Autonomous REST Connector data stores, does not support sampling and configuring REST endpoints made available through the On-Premises Connector.
- The On-Premises Connector is not currently FIPS compliant. Therefore, any connections made to an on-premises data source through an On-Premises Connector will not be fully FIPS compliant.
- External authentication services are not currently supported when connecting to data sources using the On-Premises Connector.
- If User Account Control is enabled on your Windows machine and you installed the On-Premises Connector in a system folder (such as Windows or Program Files), you must run the On-Premises Connector Configuration Tool in administrator
mode.
- Uninstalling and re-installing the On-Premises Connector causes the Connector ID of the On-Premises Connector to change. Any Hybrid Data Pipeline data sources using the old Connector ID must be updated to use the new Connector
ID. Installing to a new directory allows both the old and new On-Premises Connector to exist side-by-side. However, you must update the Connector ID option in previously-defined Hybrid Data Pipeline data sources to point to
the new On-Premises Connector. In addition, you must update the Connector ID wherever it was used, such as in the definitions of Group Connectors and Authorized Users. Note that upgrading an existing installation of the On-Premises
Connector maintains the Connector ID.
- When upgrading the On-Premises Connector, if the specified user installation directory contains a hyphen “-”, the upgrade will fail. To work around this issue, avoid using hyphen “-” in the user installation
directory name. If your existing On-Premises Connector installation directory name contains a hyphen, you must uninstall the existing On-Premises Connector and then perform a new install rather than attempting to upgrade the
existing On-Premises Connector installation.
JDBC driver
- If you attempt to install the JDBC driver in GUI mode to the default installation directory but do not have appropriate permissions for the default directory, the installer indicates that the installation has succeeded when in
fact the driver has not been installed. When attempting an installation under the same circumstance but in console mode, the proper error message is displayed.
- The JDBC 32-bit silent installation fails on Windows 10. Use the standard installation instead.
- Console mode installation is supported only on UNIX and Linux.
- The default value for the Service connection property does not connect to the Hybrid Data Pipeline server. To connect, set Service to the Hybrid Data Pipeline server in your connection URL.
- When using JNDI data sources, the encryptionMethod property must be configured through setExtendedOptions.
- OpenEdge
- The values returned for isWritable and isReadOnly result set metadata attributes are not correct when connected to an OpenEdge data source.
- Fetching or describing stored procedure parameter metadata from a callable statement does not work. You can obtain stored procedure parameter metadata using DatabaseMetaData.getProcedureColumns (see the sketch after this list).
- The maximum precision of the LONGVARCHAR data type reported by the driver is incorrect. The correct maximum precision is 1073741824.
- Salesforce.com, Force.com, and Database.com
- The SELECT...INTO statement is not supported for remote tables.
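As an illustration of the getProcedureColumns workaround noted above for OpenEdge, a minimal sketch; the connection URL, credentials, and the PUB schema and MYPROC procedure names are placeholders, and the URL format should be verified against your driver documentation:
String url = "jdbc:datadirect:ddhybrid://myserver.example.com:8443;hybridDataPipelineDataSource=MyOpenEdgeSource";
try (java.sql.Connection con = java.sql.DriverManager.getConnection(url, "hdpuser", "hdppassword")) {
    java.sql.DatabaseMetaData md = con.getMetaData();
    // Retrieve stored procedure parameter metadata here rather than describing a CallableStatement.
    try (java.sql.ResultSet rs = md.getProcedureColumns(null, "PUB", "MYPROC", "%")) {
        while (rs.next()) {
            System.out.println(rs.getString("COLUMN_NAME") + " " + rs.getInt("DATA_TYPE"));
        }
    }
}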
ODBC driver
- When you first install a driver, you are given the option to install a default data source for that driver. We recommend that you install default data sources when you first install the drivers. If you do not install the default
data source at this time, you will be unable to install a default data source for this driver later. To install a default data source for a driver after the initial installation, you must uninstall the driver and then reinstall
it.
- Console mode installation is supported only on UNIX.
- The default ODBC.INI generated by the installer is missing required entries for Service=, PortNumber=, and HybridDataPipelineDataSource=.
- Salesforce.com, Force.com, and Database.com
- SQLGetDescField(SQL_DESC_NAME) and SQLColAttributes(SQL_DESC_NAME) do not return a column alias. The column name is always returned.
- Several SQLGetInfo calls for maximum length and maximum number values return unknown instead of the maximum length or maximum count.
- The TEXTAREA data type takes a max length create parameter, which is not reported in the TypeInfo.
- Binary data (SQL_C_BINARY) inserted into character columns (SQL_CHAR, SQL_VARCHAR, SQL_LONGVARCHAR) is not inserted correctly.
- For SQLColAttribute, the column attributes 1001 and 1002, which were assigned as DataDirect-specific attributes, were inadvertently used as system attributes by the Microsoft 3.0 ODBC implementation. Applications using those attributes
must now use 1901 and 1902, respectively.
- Because of inconsistencies in the ODBC specification, users attempting to use SQL_C_NUMERIC parameters must set the precision and scale values of the corresponding structure and the descriptor fields in the Application Parameter
Descriptor.
- One of the most common connectivity issues encountered while using IIS (Microsoft Internet Information Server) concerns the use and settings of the account permissions. If you encounter problems using Hybrid Data Pipeline drivers
with an IIS server, refer to the following KnowledgeBase article: https://community.progress.com/s/article/4274
All data stores
- It is recommended that Login Timeout not be disabled (set to 0) for a data source.
- Using setByte to set parameter values fails when the data store does not support the TINYINT SQL type. Use setShort or setInt to set the parameter value instead of setByte (see the sketch below).
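For example, a minimal sketch given an open java.sql.Connection con; MYTABLE and its TINYINT_COL column are hypothetical names used only for illustration:
// Instead of ps.setByte(1, (byte) 10), which fails when the data store has no TINYINT type:
try (java.sql.PreparedStatement ps =
         con.prepareStatement("INSERT INTO MYTABLE (TINYINT_COL) VALUES (?)")) {
    ps.setShort(1, (short) 10);  // setShort (or setInt) works against such data stores
    ps.executeUpdate();
}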
Autonomous REST Connector
Google Analytics
- An OAuth profile cannot be created for a Google Analytics data source when using Microsoft Edge. To work around this issue, another supported browser such as Chrome or Firefox should be used.
- A validation message is not displayed when a user enters a Start Date value less than the End Date value on the Create/Update Google Analytics page.
- Once a Google Analytics OAuth profile is created for a specific Google account, changing the Google account associated with the profile results in the error "the configuration options used to open the database do not match the options used to create the database" being returned for any existing data sources.
- When the cross-origin security check is enabled, the Google Analytics Table Tool does not open in a web browser for Dimensions and Metrics.
MongoDB (including MongoDB Atlas and Azure CosmosDB for MongoDB)
- The Kerberos authentication method is not supported for MongoDB in Hybrid Data Pipeline.
OpenEdge 10.2b
- Setting the MaxPooledStatements data source option in an OpenEdge data store to a value other than zero can cause statement not prepared errors to be returned in some situations.
Oracle Marketing Cloud (Oracle Eloqua)
- Data store issues
- There are known issues with Batch Operations.
- The Update/Delete implementation can update only one record at a time. Because of this, the number of APIs executed depends on the number of records that get updated or deleted by the query plus the number of API calls
required to fetch the IDs for those records.
- Lengths of certain text fields are reported as higher than the actual lengths supported in Oracle Eloqua.
- We are currently working with Oracle to resolve the following issues with the Oracle Eloqua REST API.
- AND operators that involve different columns are optimized. In other cases, the queries are only partially optimized.
- OR operators on the same column are optimized. In other cases, the queries are completely post-processed.
- The data store cannot explicitly insert or update a NULL value in any field.
- The data store is unable to update a few fields; they are always reported as NULL after an update.
- Oracle Eloqua uses a double colon (::) as an internal delimiter for multivalued Select fields. Hence, when a value containing the semicolon character (;) is inserted or updated into a multivalued Select field, the semicolon character gets converted into the double colon character.
- Query SELECT count (*) from template returns incorrect results.
- Oracle Eloqua APIs do not populate the correct values in CreatedBy and UpdatedBy fields. Instead of user names, they contain a Timestamp value.
- Only equality filters on id fields are optimized. All other filter conditions are not working correctly with Oracle Eloqua APIs and the data store is doing post-processing for such filters.
- Filters on Non-ID Integer fields and Boolean fields are not working correctly. Hence the driver needs to post-process all these queries.
- The data store does not distinguish between NULL and empty string. Therefore, null fields are often reported back as empty strings.
- Values with special characters such as curly braces ({ }), backslash (\), colon (:), slash star (/*), and star slash (*/) are not supported in WHERE clause filter values.
Oracle Sales Cloud
- Currently, passing filter conditions to Oracle Sales Cloud works only for simple, single column conditions. If there are multiple filters with 'AND' and 'OR', only partial or no filters are passed to Oracle Sales Cloud.
- Oracle Sales Cloud reports the data type of String and Date fields as String. Therefore, when such fields are filtered or ordered in Hybrid Data Pipeline, they are treated as String values. However, when filter conditions
are passed to Oracle Sales Cloud, Oracle Sales Cloud can distinguish between the actual data types and apply Date specific comparisons to Date fields. Therefore, query results can differ depending on whether filters have been
passed down to Oracle Sales Cloud or processed by Hybrid Data Pipeline.
- There appears to be a limitation with the Oracle Sales Cloud REST API concerning the >=, <=, and != comparison operators when querying String fields. Therefore, Hybrid Data Pipeline has not been optimized to pass these comparison
operators to Oracle Sales Cloud. We are working with Oracle on this issue.
- There appears to be a limitation with the Oracle Sales Cloud REST API concerning queries with filter operations on Boolean fields. Therefore, Hybrid Data Pipeline has not been optimized to pass filter operations on Boolean fields
to Oracle Sales Cloud. We are working with Oracle on this issue.
- The drivers currently report ATTACHMENT type fields in the metadata but do not support retrieving data for these fields. These fields are set to NULL.
- Join queries between parent and child tables are not supported.
- Queries on child tables whose parent has a composite primary key are not supported. For example, the children of ACTIVITIES_ACTIVITYCONTACT and LEADS_PRODUCTS are not accessible.
- Queries on the children of relationship objects are not supported. For example, the children of ACCOUNTS_RELATIONSHIP, CONTACTS_RELATIONSHIP, and HOUSEHOLDS_RELATIONSHIP are not accessible.
- Queries on grandchildren with multiple sets of Parent IDs and Grand Parent IDs used in an OR clause are not supported. For example, the following query is not supported.
Select * From ACCOUNTS_ADDRESS_ADDRESSPURPOSE
Where (ACCOUNTS_PARTYNUMBER = 'OSC_12343' AND
ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2XZKGH')
OR (ACCOUNTS_PARTYNUMBER = 'OSC_12344' AND
ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2YZKGH')
- When querying documented objects like "CATALOGPRODUCTITEMS" and "CATEGORYPRODUCTITEMS", no more than 500 records are returned, even when more records may be present. This behavior is also seen with some custom objects. We are currently
working with Oracle support to resolve this issue.
- A query on OPPORTUNITIES_CHILDREVENUE_PRODUCTS or LEADS_PRODUCTGROUPS with a filter on the primary key column returns 0 records even when more records are present. We are currently working with Oracle support to resolve this issue.
- Queries that contain subqueries returning more than 100 records are not supported. For example, the following query is not supported.
Select * From ACCOUNTS_ADDRESS
Where ACCOUNTS_PARTYNUMBER
In (Select Top 101 PARTYNUMBER From ACCOUNTS)
- When you create custom objects, your Oracle Sales Cloud administrator must enable these objects for REST API access through Application Composer. Otherwise, you will not be able to query against these custom objects.
Oracle Service Cloud
- When you create a custom object, your Oracle Service Cloud administrator must enable all four columns of the Object Fields tab of the Object Designer, or you cannot query against the custom objects.
- The initial connection when the relational map is created can take some time. It is even possible to receive an error "504: Gateway Timeout". When this happens, Hybrid Data Pipeline continues to build the map in the background
such that subsequent connection attempts are successful and have full access to the relational map.
Salesforce
- If you have existing Salesforce data sources and are upgrading from an earlier version of Hybrid Data Pipeline to version 4.6, then you must recreate the relational map of each Salesforce data source.
To recreate the relational map using the Web UI, select the Salesforce data source from the list of data sources in the Manage Data Sources view. Then, under the Mapping tab, select Force New from the Create Mapping dropdown, and click the Update button to save the change. Next, click the Test button to test the connection. Once you have confirmed the connection, the Create Mapping option should be changed back to Not Exist.
To recreate the relational map using the Data Sources API, execute the following operation and payload, where {datasourceId} is the ID of the data source. (A sketch of invoking this call from Java follows the payload.)
POST https://MyServer:8443/api/mgmt/datasources/{datasourceId}/map
{
"map": "recreate"
}
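For example, a minimal Java sketch of this call using the JDK 11 HttpClient. The server name and port come from the example above; the credentials, the data source ID (1234), and the use of basic authentication are assumptions for illustration:
java.net.http.HttpClient client = java.net.http.HttpClient.newHttpClient();
String auth = java.util.Base64.getEncoder().encodeToString("hdpuser:hdppassword".getBytes());
java.net.http.HttpRequest request = java.net.http.HttpRequest.newBuilder()
        .uri(java.net.URI.create("https://MyServer:8443/api/mgmt/datasources/1234/map"))
        .header("Authorization", "Basic " + auth)  // assumes basic authentication is enabled
        .header("Content-Type", "application/json")
        .POST(java.net.http.HttpRequest.BodyPublishers.ofString("{ \"map\": \"recreate\" }"))
        .build();
java.net.http.HttpResponse<String> response =
        client.send(request, java.net.http.HttpResponse.BodyHandlers.ofString());
System.out.println(response.statusCode() + " " + response.body());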
SAP S/4HANA (including SAP BW/4HANA and SAP NetWeaver)
- The HTTP Header authentication method is not supported for SAP S/4HANA, SAP BW/4HANA, and SAP NetWeaver in Hybrid Data Pipeline.
Snowflake
- Hybrid Data Pipeline does not support FIPS for Snowflake connections. Hybrid Data Pipeline uses the Bouncy Castle libraries to provide FIPS 140-2 compliant cryptography, but the Snowflake data store is incompatible with Bouncy
Castle FIPS. The Snowflake data store uses the default Sun Security Provider to create its own SSL context and the Sun Security Provider is not available in Bouncy Castle FIPS. In addition, the Bouncy Castle FIPS Security Provider
does not allow creating custom SSL contexts.