This topic provides answers to some frequently asked questions about SQL of Realtime Compute for Apache Flink, including questions about drafts and deployments, errors returned during draft development, and errors returned during deployment O&M.
Why are fields misaligned when I use a POJO class as the data types for the return values of a UDTF?
Errors returned during draft development
Errors returned during deployment O&M
What do I do if the error message "exceeded quota: resourcequota" appears?
What do I do if the error message "Exceeded checkpoint tolerable failure threshold" appears?
What do I do if the error message "Flink version null is not configured for sql" appears?
What do I do if the error message "DateTimeParseException: Text 'xxx' could not be parsed" appears?
What do I do if the error message "java.io.EOFException: SSL peer shut down incorrectly" appears?
Why are fields misaligned when I use a POJO class as the data types for the return values of a UDTF?
Problem description
If a Plain Old Java Object (POJO) class is used as the data types for the return values of a user-defined table-valued function (UDTF) and the alias names of the returned fields of the UDTF are explicitly declared in the SQL statement, fields may be misaligned. In this case, the fields that are used may not meet requirements even if the data types of the fields are consistent.
For example, an SQL verification failure occurs in the following situation: you use the following POJO class as the return type of a UDTF, package the UDTF based on the requirements described in Overview, and register the UDTF based on the requirements described in Register a deployment-level UDF.
package com.aliyun.example;

public class TestPojoWithoutConstructor {
    public int c;
    public String d;
    public boolean a;
    public String b;
}
package com.aliyun.example;

import org.apache.flink.table.functions.TableFunction;

public class MyTableFuncPojoWithoutConstructor extends TableFunction<TestPojoWithoutConstructor> {
    private static final long serialVersionUID = 1L;

    public void eval(String str1, Integer i2) {
        TestPojoWithoutConstructor p = new TestPojoWithoutConstructor();
        p.d = str1 + "_d";
        p.c = i2 + 2;
        p.b = str1 + "_b";
        collect(p);
    }
}
CREATE TEMPORARY FUNCTION MyTableFuncPojoWithoutConstructor as 'com.aliyun.example.MyTableFuncPojoWithoutConstructor';

CREATE TEMPORARY TABLE src (
  id STRING,
  cnt INT
) WITH (
  'connector' = 'datagen'
);

CREATE TEMPORARY TABLE sink (
  f1 INT,
  f2 STRING,
  f3 BOOLEAN,
  f4 STRING
) WITH (
  'connector' = 'print'
);

INSERT INTO sink
SELECT T.* FROM src, LATERAL TABLE(MyTableFuncPojoWithoutConstructor(id, cnt)) AS T(c, d, a, b);
The following error message for SQL verification is reported:
org.apache.flink.table.api.ValidationException: SQL validation failed. Column types of query result and sink for 'vvp.default.sink' do not match.
Cause: Sink column 'f1' at position 0 is of type INT but expression in the query is of type BOOLEAN NOT NULL.
Hint: You will need to rewrite or cast the expression.
Query schema: [c: BOOLEAN NOT NULL, d: STRING, a: INT NOT NULL, b: STRING]
Sink schema:  [f1: INT, f2: STRING, f3: BOOLEAN, f4: STRING]
    at org.apache.flink.table.sqlserver.utils.FormatValidatorExceptionUtils.newValidationException(FormatValidatorExceptionUtils.java:41)
In this example, the fields returned from the UDTF and the fields in the POJO class are misaligned. In the SQL statement, field c is of the BOOLEAN data type and field a is of the INT data type, which is the opposite of the data types specified in the POJO class.
Cause
The order of the returned fields varies based on a parameterized constructor of the POJO class:
If the POJO class implements a parameterized constructor, the fields are sorted based on the order of the parameters of the parameterized constructor.
If the POJO class does not implement a parameterized constructor, the fields are sorted based on the alphabetical order of the field names.
In this example, the POJO class does not implement a parameterized constructor. As a result, the data types for the return values of the UDTF are (BOOLEAN a, VARCHAR(2147483647) b, INTEGER c, VARCHAR(2147483647) d). This by itself does not cause an error. However, the SQL statement adds a rename list LATERAL TABLE(MyTableFuncPojoWithoutConstructor(id, cnt)) AS T(c, d, a, b) to the returned fields. The renaming is performed based on the field position. As a result, the fields are misaligned when the POJO class is used, which causes verification errors or unexpected data misalignment.
Solutions
If the POJO class does not implement a parameterized constructor, do not explicitly rename the fields returned by the UDTF. For example, you can change the SELECT clause in the preceding INSERT statement to the following SELECT clause:
-- If the POJO class does not implement a parameterized constructor, we recommend that you select the required fields by name. When you use T.*, you must know the actual order of the returned fields.
SELECT T.c, T.d, T.a, T.b
FROM src, LATERAL TABLE(MyTableFuncPojoWithoutConstructor(id, cnt)) AS T;
Implement a parameterized constructor in the POJO class to determine the order of the returned fields. In this case, the order of the returned fields is the order of the parameters of the parameterized constructor.
package com.aliyun.example;

public class TestPojoWithConstructor {
    public int c;
    public String d;
    public boolean a;
    public String b;

    // Use a specific field order instead of the alphabetical order.
    public TestPojoWithConstructor(int c, String d, boolean a, String b) {
        this.c = c;
        this.d = d;
        this.a = a;
        this.b = b;
    }
}
Why is data output suspended on the LocalGroupAggregate operator for a long period of time without any data being generated?
Problem description
The table.exec.mini-batch.size parameter is not configured for the deployment, or the table.exec.mini-batch.size parameter is set to a negative value. The deployment includes both WindowAggregate and GroupAggregate operators, and the time column of the WindowAggregate operator is proctime, which indicates the processing time. When the deployment is started, the topology of the deployment contains the LocalGroupAggregate operator but does not contain the MiniBatchAssigner operator.
The following sample code provides an example of a deployment that includes both WindowAggregate and GroupAggregate operators. The time column of the WindowAggregate operator is proctime, which indicates the processing time.
CREATE TEMPORARY TABLE s1 (
  a INT,
  b INT,
  ts as PROCTIME(),
  PRIMARY KEY (a) NOT ENFORCED
) WITH (
  'connector'='datagen',
  'rows-per-second'='1',
  'fields.b.kind'='random',
  'fields.b.min'='0',
  'fields.b.max'='10'
);

CREATE TEMPORARY TABLE sink (
  a BIGINT,
  b BIGINT
) WITH (
  'connector'='print'
);

CREATE TEMPORARY VIEW window_view AS
SELECT window_start, window_end, a, sum(b) as b_sum
FROM TABLE(TUMBLE(TABLE s1, DESCRIPTOR(ts), INTERVAL '2' SECONDS))
GROUP BY window_start, window_end, a;

INSERT INTO sink
SELECT count(distinct a), b_sum
FROM window_view
GROUP BY b_sum;
Cause
If the table.exec.mini-batch.size parameter is not configured for the deployment or is set to a negative value, the managed memory is used to cache data in miniBatch processing mode. In this case, the MiniBatchAssigner operator is not generated and cannot send watermark messages to compute operators to trigger final calculation and data output. Final calculation and data output are triggered only when one of the following conditions is met: the managed memory is full, a CHECKPOINT command is received and checkpointing has not started, or the deployment is canceled. For more information, see table.exec.mini-batch.size. If the checkpoint interval is set to an excessively large value, the LocalGroupAggregate operator does not trigger data output for a long period of time.
Solutions
Decrease the checkpoint interval. This way, the LocalGroupAggregate operator can automatically trigger data output before checkpointing is performed.
Use the heap memory to cache data. This way, data output is automatically triggered when the amount of data cached on the LocalGroupAggregate operator reaches the value of the table.exec.mini-batch.size parameter. To configure the parameter, go to the Configuration tab of the deployment details page in the development console of Realtime Compute for Apache Flink, and set the table.exec.mini-batch.size parameter to a positive value N in the Other Configuration field of the Parameters section. A sample configuration is provided after this list.
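The following snippet is a minimal sketch of the heap memory solution in the Other Configuration field. The table.exec.mini-batch.enabled and table.exec.mini-batch.allow-latency options and the value 20000 are illustrative assumptions that you must tune based on your latency and memory requirements:
table.exec.mini-batch.enabled: true
table.exec.mini-batch.allow-latency: 5s
table.exec.mini-batch.size: 20000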
Why does a time difference exist between the current time and the values of the Low Watermark and Datetime of Watermark Timestamp parameters on the Watermarks tab of the Status tab, and between the current time and the value of the Task InputWatermark metric in the Watermark section of the Metrics tab?
Cause 1: A field of the TIMESTAMP_LTZ (TIMESTAMP(p) WITH LOCAL TIME ZONE) data type is used to declare the watermark in the source table. As a result, a time difference exists between the current time and the values of the watermark-related parameters.
The following example shows the difference between the watermark that is declared by using a field of the TIMESTAMP_LTZ data type and the watermark that is declared by using a field of the TIMESTAMP data type.
The following sample code shows that the field that is used to declare the watermark in the source table is of the TIMESTAMP_LTZ data type.
CREATE TEMPORARY TABLE s1 (
  a INT,
  b INT,
  ts as CURRENT_TIMESTAMP, -- Use the built-in function CURRENT_TIMESTAMP to generate data of the TIMESTAMP_LTZ data type.
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
) WITH (
  'connector'='datagen',
  'rows-per-second'='1',
  'fields.b.kind'='random',
  'fields.b.min'='0',
  'fields.b.max'='10'
);

CREATE TEMPORARY TABLE t1 (
  k INT,
  ts_ltz timestamp_ltz(3),
  cnt BIGINT
) WITH ('connector' = 'print');

-- Obtain the calculation result.
INSERT INTO t1
SELECT b, window_start, COUNT(*)
FROM TABLE(TUMBLE(TABLE s1, DESCRIPTOR(ts), INTERVAL '5' SECOND))
GROUP BY b, window_start, window_end;
Note: The calculation result that is generated by using the syntax of the legacy window is the same as the calculation result that is generated by using the table-valued function (TVF) window. The following sample code provides an example of the legacy window syntax.
SELECT b, TUMBLE_END(ts, INTERVAL '5' SECOND), COUNT(*)
FROM s1
GROUP BY TUMBLE(ts, INTERVAL '5' SECOND), b;
The following figures show that after a draft is deployed and published in the development console of Realtime Compute for Apache Flink, an 8-hour time difference exists between the current time (UTC+8) and the values of the Low Watermark and Datetime of Watermark Timestamp parameters on the Watermarks tab of the Status tab, and between the current time (UTC+8) and the value of the Task InputWatermark metric in the Watermark section of the Metrics tab.
Watermark and Low Watermark
Task InputWatermark
The following sample code shows that the field that is used to declare the watermark in the source table is of the TIMESTAMP (TIMESTAMP(p) WITHOUT TIME ZONE) data type.
CREATE TEMPORARY TABLE s1 (
  a INT,
  b INT,
  -- No time zone information is included in the timestamp of the simulated data source. The timestamp is incremented by one second from 2024-01-31 01:00:00.
  ts as TIMESTAMPADD(SECOND, a, TIMESTAMP '2024-01-31 01:00:00'),
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
) WITH (
  'connector'='datagen',
  'rows-per-second'='1',
  'fields.a.kind'='sequence',
  'fields.a.start'='0',
  'fields.a.end'='100000',
  'fields.b.kind'='random',
  'fields.b.min'='0',
  'fields.b.max'='10'
);

CREATE TEMPORARY TABLE t1 (
  k INT,
  ts_ltz timestamp_ltz(3),
  cnt BIGINT
) WITH ('connector' = 'print');

-- Obtain the calculation result.
INSERT INTO t1
SELECT b, window_start, COUNT(*)
FROM TABLE(TUMBLE(TABLE s1, DESCRIPTOR(ts), INTERVAL '5' SECOND))
GROUP BY b, window_start, window_end;
After you deploy and publish a draft in the development console of Realtime Compute for Apache Flink, the time in the values of the Low Watermark and Datetime of Watermark Timestamp parameters on the Watermarks tab of the Status tab and the time in the value of the Task InputWatermark metric in the Watermark section of the Metrics tab are the same as the current time. In this example, the current time is the time of the simulated data.
Watermark and Low Watermark
Task InputWatermark
Cause 2: The time zone of the display time in the development console of Realtime Compute for Apache Flink is different from the time zone of the display time on the Apache Flink UI.
The display time in the development console of Realtime Compute for Apache Flink is in UTC+0. However, the display time on the Apache Flink UI is the local time, which is converted based on the time zone that the Apache Flink UI obtains from the browser. The following example shows the difference between the display time in the development console of Realtime Compute for Apache Flink and the display time on the Apache Flink UI when UTC+8 is used. The display time in the development console of Realtime Compute for Apache Flink is 8 hours earlier than the display time on the Apache Flink UI.
Development console of Realtime Compute for Apache Flink
Apache Flink UI
What do I do if the error message "undefined" appears?
Problem description
Cause
The size of your JAR package exceeds the allowed size.
Solution
Upload the JAR package in the Object Storage Service (OSS) console. For more information, see How do I upload a JAR package in the Object Storage Service (OSS) console?
What do I do if the error message "Object '****' not found" appears?
Problem description
After you click Validate, the error message shown in the following figure appears.
Cause
When you execute DDL and DML statements in the same script, the table is declared by using CREATE TABLE instead of CREATE TEMPORARY TABLE in the DDL statement.
Solution
When you execute DDL and DML statements in the same script, declare CREATE TEMPORARY TABLE instead of CREATE TABLE in the DDL statement, as shown in the following example.
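The following statements are a minimal sketch that uses hypothetical datagen and print tables to show the recommended declaration:
CREATE TEMPORARY TABLE src (
  id INT
) WITH (
  'connector' = 'datagen'
);

CREATE TEMPORARY TABLE sink (
  id INT
) WITH (
  'connector' = 'print'
);

INSERT INTO sink SELECT id FROM src;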
What do I do if the error message "Only a single 'INSERT INTO' is supported" appears?
Problem description
After you click Validate, the error message shown in the following figure appears.
Cause
Multiple DML statements are not written between the key statements BEGIN STATEMENT SET; and END;.
Solution
Write the DML statements between BEGIN STATEMENT SET; and END;. For more information, see INSERT INTO statement.
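The following statement set is a minimal sketch. The tables src, sink1, and sink2 are hypothetical and must already be declared in the same script:
BEGIN STATEMENT SET;
INSERT INTO sink1 SELECT id, name FROM src;
INSERT INTO sink2 SELECT id, COUNT(*) FROM src GROUP BY id;
END;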
What do I do if the error message "The primary key is necessary when enable 'Key: 'scan.incremental.snapshot.enabled' , default: true (fallback keys: [])' to 'true'" appears?
Problem description
Caused by: org.apache.flink.table.api.ValidationException: The primary key is necessary when enable 'Key: 'scan.incremental.snapshot.enabled' , default: true (fallback keys: [])' to 'true'
    at com.alibaba.ververica.cdc.connectors.mysql.table.MySqlTableSourceFactory.validatePrimaryKeyIfEnableParallel(MySqlTableSourceFactory.java:186)
    at com.alibaba.ververica.cdc.connectors.mysql.table.MySqlTableSourceFactory.createDynamicTableSource(MySqlTableSourceFactory.java:85)
    at org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:134)
    ... 30 more
Cause
In Realtime Compute for Apache Flink whose engine version is vvr-3.0.7-flink-1.12 or earlier, the MySQL Change Data Capture (CDC) source does not support parallel data reading. However, in Realtime Compute for Apache Flink whose engine version is vvr-4.0.8-flink-1.13 or later, data can be sharded based on the primary key to support parallel data reading. This feature is specified by the scan.incremental.snapshot.enabled parameter. The default value of this parameter is true. This indicates that the feature is enabled by default. The primary key must be configured when this feature is enabled.
Solution
If you use Realtime Compute for Apache Flink whose engine version is vvr-4.0.8-flink-1.13 or later, use one of the following solutions based on your business requirements:
If you want to read data from the MySQL CDC source in parallel, configure the primary key in the DDL statement, as shown in the sketch after this list.
If you do not want to read data from the MySQL CDC source in parallel, set the scan.incremental.snapshot.enabled parameter to false. For more information about the parameter configuration, see Parameters in the WITH clause.
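The following DDL is a minimal sketch of the first solution. The table name, columns, and connection values are hypothetical placeholders:
CREATE TEMPORARY TABLE mysql_cdc_source (
  id BIGINT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED -- Required when scan.incremental.snapshot.enabled is true.
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = '<yourHostname>',
  'port' = '3306',
  'username' = '<yourUsername>',
  'password' = '<yourPassword>',
  'database-name' = '<yourDatabase>',
  'table-name' = '<yourTable>'
  -- To disable parallel reading instead, add 'scan.incremental.snapshot.enabled' = 'false'. The primary key is then no longer required.
);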
What do I do if the error message "exceeded quota: resourcequota" appears?
Problem description
The error message appears when a deployment is started.
Cause
The deployment fails to be started because the resources of the current project are insufficient.
Solution
Reconfigure the project resources. For more information, see Reconfigure resources.
What do I do if the error message "Exceeded checkpoint tolerable failure threshold" appears?
Problem description
The error message appears when a deployment is running.
org.apache.flink.util.FlinkRuntimeException: Exceeded checkpoint tolerable failure threshold.
    at org.apache.flink.runtime.checkpoint.CheckpointFailureManager.handleJobLevelCheckpointException(CheckpointFailureManager.java:66)
Cause
The maximum number of checkpoint failures allowed in a task is not specified. By default, a failover is triggered each time a checkpoint fails.
Solution
Go to the Deployments page and click the name of the desired deployment.
On the Configuration tab of the deployment details page, click Edit in the upper-right corner of the Parameters section.
In the Other Configurations field, enter the following code:
execution.checkpointing.tolerable-failed-checkpoints: num
You must replace num with the maximum number of checkpoint failures that are allowed in the task. This parameter must be set to 0 or a positive integer. If the parameter is set to 0, no checkpoint exceptions or failures are allowed.
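For example, the following setting allows up to five checkpoint failures before a failover is triggered. The value 5 is only an illustrative choice:
execution.checkpointing.tolerable-failed-checkpoints: 5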
What do I do if the error message "Flink version null is not configured for sql" appears?
Problem description
StatusRuntimeException: INTERNAL: Flink version null is not configured for sql.
Cause
The Ververica Runtime (VVR) version of the system is updated to VVR 4.0.8. As a result, the version information about the Realtime Compute for Apache Flink compute engine of the deployment cannot be obtained.
Solution
Click the Configurations tab on the right side of the SQL Editor page and select the required version from the Engine Version drop-down list.
Note: If you want to use the debugging feature, check whether the engine version that is selected on the Session Clusters page is correct.
What do I do if the error message "INFO: org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss" appears?
Problem description
Cause
When OSS creates a directory, OSS checks whether the directory exists. If the directory does not exist, the error message "INFO: org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss" appears. Realtime Compute for Apache Flink deployments are not affected.
Solution
Add <Logger level="ERROR" name="org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss"/> to the log template. For more information, see Configure parameters to export logs of a deployment.
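The following snippet is a sketch of where the line can be placed in a Log4j2 template. The Root logger and the StdOut appender reference are assumptions and must match the appenders that your template already defines:
<Loggers>
  <!-- Suppress the informational OSS directory-check messages. -->
  <Logger level="ERROR" name="org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss"/>
  <Root level="INFO">
    <AppenderRef ref="StdOut"/>
  </Root>
</Loggers>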
What do I do if the error message "DateTimeParseException: Text 'xxx' could not be parsed" appears?
Problem description
When a deployment is running, the error message DateTimeParseException: Text 'xxx' could not be parsed appears.
Cause
If the VVR version is earlier than VVR 4.0.13 and the date format that you declare in a DDL statement is inconsistent with the format of the actual data, Realtime Compute for Apache Flink reports an error.
Solutions
In VVR 4.0.13 and later, the parsing of TIMESTAMP data in a JSON-formatted string is optimized. The following JSON formats are supported: JSON, Canal JSON, Debezium JSON, Maxwell JSON, and Ogg JSON. The following data parsing capabilities are optimized:
Data of the TIMESTAMP type that is declared in a DDL statement can be parsed as data in the DATE format.
Data of the TIMESTAMP_LTZ type that is declared in a DDL statement can be parsed as data in the DATE or TIMESTAMP format.
Realtime Compute for Apache Flink converts data of the TIMESTAMP type to data of the TIMESTAMP_LTZ type based on the time zone that is specified by the table.local-time-zone parameter. For example, you can declare the following information in the DDL statement:
CREATE TABLE source (
  date_field TIMESTAMP,
  timestamp_field TIMESTAMP_LTZ(3)
) WITH (
  'format' = 'json',
  ...
);
If the system parses {"date_field": "2020-09-12", "timestamp_field": "2020-09-12T12:00:00"} and the current time zone is UTC+8, the parsing result is "+I(2020-09-12T00:00:00, 2020-09-12T04:00:00.000Z)".
Data of the TIMESTAMP or TIMESTAMP_LTZ type can be automatically parsed.
Before the optimization, when the system parses TIMESTAMP data in a JSON-formatted string, you must set the timestamp-format.standard parameter to SQL or ISO-8601 to ensure that data can be correctly parsed. After the optimization, Realtime Compute for Apache Flink automatically infers the format of TIMESTAMP data and then parses the data. If the data is not correctly parsed, an error is returned. The value of the timestamp-format.standard parameter that you configure is used only as a hint that the parser tries first.
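As a sketch, assuming the JSON format is used, the hint can still be provided in the WITH clause of the preceding DDL statement. The format options are prefixed with the format name:
CREATE TABLE source (
  date_field TIMESTAMP,
  timestamp_field TIMESTAMP_LTZ(3)
) WITH (
  'format' = 'json',
  'json.timestamp-format.standard' = 'ISO-8601', -- Used as a hint for the parser. SQL is the other supported value.
  ...
);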
What do I do if the error message "DELETE command denied to user 'userName'@'*.*.*.*' for table 'table_name'" appears?
Problem description
Caused by: java.sql.SQLSyntaxErrorException: DELETE command denied to user 'userName'@'*.*.*.*' for table 'table_name'
    at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
    ...
Cause
If a WHERE clause is added to an SQL statement that processes MySQL CDC data streams, Realtime Compute for Apache Flink sends both a BEFORE UPDATE data record and an AFTER UPDATE data record to the downstream for each UPDATE operation. The downstream identifies the BEFORE UPDATE data record as a DELETE operation. In this case, the user that performs operations on the MySQL CDC result table must have the DELETE permission.
Solutions
Check whether retract operations exist in the SQL logic. If retract operations exist, grant the DELETE permission to the user who wants to perform operations on the MySQL CDC result table.
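The following GRANT statement is a sketch of how the permission can be granted on the MySQL database. The database name, table name, user, and host are hypothetical placeholders:
GRANT DELETE ON your_database.table_name TO 'userName'@'%';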
What do I do if the error message "java.io.EOFException: SSL peer shut down incorrectly" appears?
Problem description
Caused by: java.io.EOFException: SSL peer shut down incorrectly
    at sun.security.ssl.SSLSocketInputRecord.decodeInputRecord(SSLSocketInputRecord.java:239) ~[?:1.8.0_302]
    at sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:190) ~[?:1.8.0_302]
    at sun.security.ssl.SSLTransport.decode(SSLTransport.java:109) ~[?:1.8.0_302]
    at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1392) ~[?:1.8.0_302]
    at sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1300) ~[?:1.8.0_302]
    at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:435) ~[?:1.8.0_302]
    at com.mysql.cj.protocol.ExportControlled.performTlsHandshake(ExportControlled.java:347) ~[?:?]
    at com.mysql.cj.protocol.StandardSocketFactory.performTlsHandshake(StandardSocketFactory.java:194) ~[?:?]
    at com.mysql.cj.protocol.a.NativeSocketConnection.performTlsHandshake(NativeSocketConnection.java:101) ~[?:?]
    at com.mysql.cj.protocol.a.NativeProtocol.negotiateSSLConnection(NativeProtocol.java:308) ~[?:?]
    at com.mysql.cj.protocol.a.NativeAuthenticationProvider.connect(NativeAuthenticationProvider.java:204) ~[?:?]
    at com.mysql.cj.protocol.a.NativeProtocol.connect(NativeProtocol.java:1369) ~[?:?]
    at com.mysql.cj.NativeSession.connect(NativeSession.java:133) ~[?:?]
    at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:949) ~[?:?]
    at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:819) ~[?:?]
    at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:449) ~[?:?]
    at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:242) ~[?:?]
    at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198) ~[?:?]
    at org.apache.flink.connector.jdbc.internal.connection.SimpleJdbcConnectionProvider.getOrEstablishConnection(SimpleJdbcConnectionProvider.java:128) ~[?:?]
    at org.apache.flink.connector.jdbc.internal.AbstractJdbcOutputFormat.open(AbstractJdbcOutputFormat.java:54) ~[?:?]
    ... 14 more
Cause
The driver version of the MySQL database is 8.0.27 and the SSL protocol is enabled for the MySQL database. However, the default access mode of the MySQL database is not SSL.
Solutions
We recommend that you set the connector parameter to rds in the WITH clause and append characterEncoding=utf-8&useSSL=false to the URL parameter for the MySQL dimension table. Example:
'url'='jdbc:mysql://***.***.***.***:3306/test?characterEncoding=utf-8&useSSL=false'
What do I do if the error message "binlog probably contains events generated with statement or mixed based replication format" appears?
Problem description
Caused by: io.debezium.DebeziumException: Received DML 'insert into table_name (...) values (...)' for processing, binlog probably contains events generated with statement or mixed based replication format
Cause
The MySQL CDC source requires binary logs in the ROW format. Binary logs in the STATEMENT or MIXED format are not supported.
Solutions
Run the show variables like "binlog_format" command on the MySQL database to query the current format of binary logs.
Note: You can run the show global variables like "binlog_format" command to view the global format of binary logs.
Change the format of binary logs to ROW on the MySQL database.
Restart the deployment to make the configurations take effect.
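The following statements are a sketch of the query and change steps on a self-managed MySQL instance. For ApsaraDB RDS for MySQL and other managed services, the binary log format is usually changed in the parameter settings of the service console instead:
-- Query the global format of binary logs.
show global variables like "binlog_format";
-- Change the global format of binary logs to ROW. This requires sufficient privileges and takes effect for new sessions.
SET GLOBAL binlog_format = 'ROW';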
What do I do if the error message "java.lang.ClassCastException: org.codehaus.janino.CompilerFactory cannot be cast to org.codehaus.commons.compiler.ICompilerFactory" appears?
Problem description
Caused by: java.lang.ClassCastException: org.codehaus.janino.CompilerFactory cannot be cast to org.codehaus.commons.compiler.ICompilerFactory
    at org.codehaus.commons.compiler.CompilerFactoryFactory.getCompilerFactory(CompilerFactoryFactory.java:129)
    at org.codehaus.commons.compiler.CompilerFactoryFactory.getDefaultCompilerFactory(CompilerFactoryFactory.java:79)
    at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.compile(JaninoRelMetadataProvider.java:426)
    ... 66 more
Cause
The JAR package contains a Janino dependency that causes a conflict.
Specific Realtime Compute for Apache Flink dependencies such as flink-table-planner and flink-table-runtime are mistakenly added to the JAR package of the user-defined function (UDF) or connector.
Solutions
Check whether the JAR package contains org.codehaus.janino.CompilerFactory. Class conflicts may occur because the class loading sequence on different machines is different. To resolve this issue, perform the following steps:
Go to the Deployments page and click the name of the desired deployment.
On the Configuration tab of the deployment details page, click Edit in the upper-right corner of the Parameters section.
In the Other Configurations field, enter the following code:
classloader.parent-first-patterns.additional: org.codehaus.janino
Replace the value of the classloader.parent-first-patterns.additional parameter with the conflicting class.
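As a sketch, you can check whether the JAR package contains the conflicting class by listing its contents. The file name your-udf.jar is a hypothetical placeholder:
jar tf your-udf.jar | grep org.codehaus.janino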