FAQ about MaxCompute Java UDFs

Last Updated: Jan 12, 2026

This topic provides answers to some frequently asked questions about MaxCompute user-defined functions (UDFs) that are written in Java.

Class and dependency issues

When you call a MaxCompute UDF, the following issues related to classes or dependencies may occur:

  • Issue 1: The job fails with a ClassNotFoundException or the error Some dependencies are missing.

    • Causes:

      • Cause 1: The JAR package specified when you registered the function is incorrect.

      • Cause 2: A JAR package that the UDF depends on was not uploaded to MaxCompute. For example, a required third-party package was not uploaded as a resource.

      • Cause 3: The UDF is called from the wrong project. The UDF does not exist in the MaxCompute project where the job is running. For example, the UDF was registered in a development project but is being called from a production project.

      • Cause 4: A file resource that the UDF needs to access does not exist or was registered with an incorrect resource type. For example, a PY file was uploaded with the resource type PY, but the get_cache_file method in the UDF code requires the resource type to be FILE.

    • Solutions:

      • For Cause 1: Verify that the JAR package is correct and contains the required class. Repackage the code and upload the new JAR to your MaxCompute project. For more information, see Package a Java program, upload the package, and create a MaxCompute UDF.

      • For Cause 2: Upload the required third-party package to your MaxCompute project as a resource. Then, add this resource to the dependency list when you register the function. For more information, see Add resources and Create a UDF.

      • For Cause 3: In the project where the error occurred, run the list functions; command using the MaxCompute client to verify that the UDF exists and that its class and resource dependencies are correctly configured.

      • For Cause 4: Run the desc function <function_name>; command using the MaxCompute client to ensure that the Resources list includes all required file resources. If a resource type is incorrect, re-add the resource by running the add <file_type> <file_name>; command. For a Java example of reading a file resource in a UDF, see the sketch at the end of this section.

  • Issue 2: The job fails with a NoClassDefFoundError, a NoSuchMethodError, or an ODPS-0123055 error code.

    • Causes:

      • Cause 1: A version conflict exists between a third-party library in your user-provided JAR package and a library that is built into the MaxCompute runtime environment.

      • Cause 2: A Java sandbox violation occurred. This is indicated by a java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "createClassLoader") error in the job instance's Stderr log. For more information, see Java sandbox.

    • Solutions:

      • For Cause 1: Repackage your JAR so that the conflicting third-party classes are relocated (shaded) into a private namespace, or build against the library version that the MaxCompute runtime environment provides. Then, upload the new JAR to your MaxCompute project.

      • For Cause 2: Modify the UDF code so that it does not perform the restricted operation. For details about the operations that the sandbox permits, see Java sandbox.

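Related to Cause 4 and its solution, the following is a minimal sketch of how a Java UDF can read a FILE-type resource at setup time. The class name, the resource name my_config.txt, and the lookup logic are illustrative assumptions rather than an official example; the sketch assumes the resource was uploaded with the add file my_config.txt; command and declared as a dependency when the function was registered.

  import java.io.BufferedReader;
  import java.io.IOException;
  import java.io.InputStreamReader;
  import java.nio.charset.StandardCharsets;

  import com.aliyun.odps.udf.ExecutionContext;
  import com.aliyun.odps.udf.UDF;
  import com.aliyun.odps.udf.UDFException;

  // Illustrative UDF that loads a FILE resource once, in setup().
  public class ResourceLookupUdf extends UDF {
      private String firstLine;

      @Override
      public void setup(ExecutionContext ctx) throws UDFException {
          // readResourceFileAsStream requires the resource type FILE;
          // a wrong resource name or type typically fails here.
          try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                  ctx.readResourceFileAsStream("my_config.txt"), StandardCharsets.UTF_8))) {
              firstLine = reader.readLine();
          } catch (IOException e) {
              throw new UDFException("Cannot read resource my_config.txt: " + e.getMessage());
          }
      }

      public String evaluate(String key) {
          return key + ":" + firstLine;
      }
  }
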
Issues related to Java sandbox limits

  • Issue 1: The UDF fails when it tries to perform restricted operations, such as accessing local files, connecting to an external network or distributed file system, or creating Java threads. A sketch that illustrates one such restriction appears after this list.

    • Cause: For security and stability, MaxCompute UDFs run within a Java sandbox that restricts certain operations. By default, all network access is disabled.

    • Solution: To enable network access for your business logic, complete and submit the network connection application form. The MaxCompute technical support team will contact you to complete the network enablement process. For guidance on how to fill out the form, see Network enablement process.
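
For illustration, the following hypothetical UDF attempts an operation that the default sandbox blocks. Opening an outbound socket raises java.security.AccessControlException unless network access has been enabled through the process above; the class name and the catch-and-report behavior are assumptions made for this sketch.

  import java.net.Socket;

  import com.aliyun.odps.udf.UDF;

  // Hypothetical UDF: demonstrates a sandbox-restricted operation.
  public class SandboxProbeUdf extends UDF {
      public String evaluate(String host) {
          // With the default sandbox, this throws
          // java.security.AccessControlException (a SecurityException).
          try (Socket socket = new Socket(host, 80)) {
              return "connected";
          } catch (Exception e) {
              return "blocked: " + e;
          }
      }
  }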

Performance issues

When you call a MaxCompute UDF, the following performance issues may occur:

  • Issue 1: The error message kInstanceMonitorTimeout appears.

    • Cause: The UDF exceeded the allowed processing time for a batch of records. By default, a UDF must process a batch of records (typically 1024 records) within 1800 seconds. This limit applies to the processing time for a single batch, not the total runtime of the worker. This timeout mechanism prevents issues, such as infinite loops in UDF code, from monopolizing CPU resources.

    • Solution:

      • If the computational workload is inherently high, call the ExecutionContext.claimAlive method within your UDF's Java implementation to reset the timeout counter, as shown in the sketch after the parameter list below.

      • Optimize the logic of your UDF code. You can also tune UDF execution and improve processing speed by setting the following session-level parameters.

        • set odps.function.timeout=xxx;
          Adjusts the UDF execution timeout. The default value is 1800s. You can increase this value. The value must be in the range of 1s to 3600s.

        • set odps.stage.mapper.split.size=xxx;
          Adjusts the input data volume for each Map worker. The default value is 256 MB. You can decrease this value to reduce the processing load per worker.

        • set odps.sql.executionengine.batch.rowcount=xxx;
          Adjusts the number of records that MaxCompute processes in a single batch. The default value is 1024. You can decrease this value.
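
The following is a minimal sketch of the claimAlive approach mentioned above. The class name and the synthetic workload are placeholders: the sketch assumes a CPU-heavy evaluate method that periodically calls ExecutionContext.claimAlive to tell the instance monitor that the worker is still making progress.

  import com.aliyun.odps.udf.ExecutionContext;
  import com.aliyun.odps.udf.UDF;
  import com.aliyun.odps.udf.UDFException;

  // Illustrative UDF with an inherently long per-record computation.
  public class HeavyComputeUdf extends UDF {
      private ExecutionContext ctx;

      @Override
      public void setup(ExecutionContext ctx) throws UDFException {
          // Keep the context so that evaluate() can report liveness.
          this.ctx = ctx;
      }

      public Long evaluate(Long input) {
          if (input == null) {
              return null;
          }
          long acc = 0;
          for (long i = 0; i < 2_000_000_000L; i++) {
              acc += (input + i) % 7;
              // Reset the timeout counter periodically so that a
              // legitimately slow batch is not killed by the monitor.
              if (i % 100_000_000L == 0) {
                  ctx.claimAlive();
              }
          }
          return acc;
      }
  }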

  • Issue 2: The job fails with an errMsg:SigKill(OOM) or OutOfMemoryError error.

    • Cause: The job exceeded the memory allocated to a worker. This can occur during the Map, Reduce, or Join stage if a single worker processes too much data or the UDF itself consumes too much memory.

    • Solution:

      • If the error originates from the fuxi or runtime components, you can relieve the memory pressure by setting the following resource parameters.

        • set odps.stage.mapper.mem=xxx;
          Adjusts the memory for each Map worker. The default value is 1024 MB. You can increase this value.

        • set odps.stage.reducer.mem=xxx;
          Adjusts the memory for each Reduce worker. The default value is 1024 MB. You can increase this value.

        • set odps.stage.joiner.mem=xxx;
          Adjusts the memory for each Join worker. The default value is 1024 MB. You can increase this value.

        • set odps.stage.mapper.split.size=xxx;
          Adjusts the input data volume for each Map worker. The default value is 256 MB. You can decrease this value so that each worker processes less data.

        • set odps.stage.reducer.num=xxx;
          Adjusts the number of Reduce workers. You can increase this value.

        • set odps.stage.joiner.num=xxx;
          Adjusts the number of Join workers. You can increase this value.

      • If the error originates from your Java code (a JVM-level OutOfMemoryError), increase the JVM heap size by setting the odps.sql.udf.jvm.memory parameter (set odps.sql.udf.jvm.memory=xxx;) in addition to the worker memory parameters.

For more information about the parameters, see SET operations.

UDTF-related issues

When you call a Java UDTF, the following issues may occur:

  • Issue 1: The job fails with the error Semantic analysis exception - only a single expression in the SELECT clause is supported with UDTFs.

    • Cause: This error occurs when you project a UDTF alongside other columns or expressions in the same SELECT clause. This syntax is not supported. The following is an example of incorrect usage.

      select b.*, 'x', udtffunction_name(v) from table lateral view udtffunction_name(v) b as f1, f2;
    • Solution: To combine the output of a UDTF with other columns, use the UDTF in a LATERAL VIEW clause. The following is an example of the correct syntax.

      select b.*, 'x' from table lateral view udtffunction_name(v) b as f1, f2;
  • Issue 2: The job fails with an error such as Semantic analysis exception - expect 2 aliases but have 0.

    • Cause: The query does not specify aliases for the output columns generated by the UDTF.

    • Solution: Provide aliases for the output columns using an as clause in the SELECT statement that calls the UDTF. The following is an example of the correct syntax, followed by a Java sketch of a matching two-column UDTF.

      select udtffunction_name(paramname) as (col1, col2);
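
To make the alias count concrete, the following is a sketch of a Java UDTF that forwards two output columns, so any query that calls it must supply exactly two aliases, as in the example above. The class name, the @Resolve signature, and the transformation logic are illustrative assumptions.

  import com.aliyun.odps.udf.UDFException;
  import com.aliyun.odps.udf.UDTF;
  import com.aliyun.odps.udf.annotation.Resolve;

  // Illustrative UDTF: takes one STRING input and forwards two columns,
  // so callers must provide two aliases, for example as (col1, col2).
  @Resolve("string->string,bigint")
  public class TwoColumnUdtf extends UDTF {
      @Override
      public void process(Object[] args) throws UDFException {
          String value = (String) args[0];
          if (value != null) {
              // forward() emits one output row with two columns.
              forward(value.toUpperCase(), (long) value.length());
          }
      }
  }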