MaxCompute: Spark-1.x examples

Updated: Feb 28, 2024

This topic describes how to configure Spark-1.x dependencies and provides Spark-1.x examples.

Configure dependencies for Spark-1.x

To submit an application by using the Spark client provided by MaxCompute, add the following dependencies to the pom.xml file.
<properties>
    <spark.version>1.6.3</spark.version>
    <cupid.sdk.version>3.3.3-public</cupid.sdk.version>
    <scala.version>2.10.4</scala.version>
    <scala.binary.version>2.10</scala.binary.version>
</properties>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_${scala.binary.version}</artifactId>
    <version>${spark.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_${scala.binary.version}</artifactId>
    <version>${spark.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib_${scala.binary.version}</artifactId>
    <version>${spark.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_${scala.binary.version}</artifactId>
    <version>${spark.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>com.aliyun.odps</groupId>
    <artifactId>cupid-sdk</artifactId>
    <version>${cupid.sdk.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>com.aliyun.odps</groupId>
    <artifactId>hadoop-fs-oss</artifactId>
    <version>${cupid.sdk.version}</version>
</dependency>
<dependency>
    <groupId>com.aliyun.odps</groupId>
    <artifactId>odps-spark-datasource_${scala.binary.version}</artifactId>
    <version>${cupid.sdk.version}</version>
</dependency>
<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>${scala.version}</version>
</dependency>
<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-actors</artifactId>
    <version>${scala.version}</version>
</dependency>
The scope settings in the preceding code are defined as follows:
  • For spark-core, spark-sql, and all other packages released by the Spark community, use the provided scope.
  • For odps-spark-datasource, use the default compile scope.

WordCount example (Scala)

  • Sample code (a minimal sketch of the job also follows below)

    WordCount.scala

  • How to submit
    cd /path/to/MaxCompute-Spark/spark-1.x
    mvn clean package
    
    # For information about environment variables and the spark-defaults.conf configuration, see Set up a development environment.
    cd $SPARK_HOME
    bin/spark-submit --master yarn-cluster --class com.aliyun.odps.spark.examples.WordCount \
        /path/to/MaxCompute-Spark/spark-1.x/target/spark-examples_2.10-1.0.0-SNAPSHOT-shaded.jar
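
For reference, the following is a minimal Scala sketch of a Spark-1.x WordCount job. It is not a copy of WordCount.scala from the MaxCompute-Spark repository; the object name and the in-memory input data are placeholders used only to illustrate the flatMap/reduceByKey flow.

import org.apache.spark.{SparkConf, SparkContext}

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCountSketch")
    val sc = new SparkContext(conf)
    try {
      // Use a small in-memory dataset so the job has no external input dependency.
      val lines = sc.parallelize(Seq("hello maxcompute", "hello spark", "spark on maxcompute"))
      val counts = lines
        .flatMap(_.split(" "))
        .map(word => (word, 1))
        .reduceByKey(_ + _)
      // Print each (word, count) pair on the driver.
      counts.collect().foreach { case (word, count) => println(s"$word: $count") }
    } finally {
      sc.stop()
    }
  }
}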

MaxCompute table read and write example (Scala)

  • Sample code (a sketch of the read/write flow also follows below)

    SparkSQL.scala

  • How to submit
    cd /path/to/MaxCompute-Spark/spark-1.x
    mvn clean package
    # For information about environment variables and the spark-defaults.conf configuration, see Set up a development environment.
    cd $SPARK_HOME
    bin/spark-submit --master yarn-cluster --class com.aliyun.odps.spark.examples.sparksql.SparkSQL \
        /path/to/MaxCompute-Spark/spark-1.x/target/spark-examples_2.10-1.0.0-SNAPSHOT-shaded.jar
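
The following Scala sketch outlines the table read/write flow. It is not the actual SparkSQL.scala: the table name mc_test_table is hypothetical, and the sample in the repository obtains an ODPS-aware SQL context from odps-spark-datasource (a plain SQLContext does not support this DDL), so treat the snippet only as an illustration of the SQL statements involved.

import org.apache.spark.sql.{DataFrame, SQLContext}

object SparkSQLSketch {
  // The caller supplies the SQL context. In the actual sample this is an ODPS-aware
  // context created from odps-spark-datasource, so the statements below resolve
  // against MaxCompute project tables.
  def readWrite(sqlContext: SQLContext): DataFrame = {
    // "mc_test_table" is a hypothetical table name used only for illustration.
    sqlContext.sql("DROP TABLE IF EXISTS mc_test_table")
    sqlContext.sql("CREATE TABLE mc_test_table (name STRING, num BIGINT)")

    // Write a row, then read the table back into a DataFrame.
    sqlContext.sql("INSERT INTO TABLE mc_test_table SELECT 'spark', 1")
    val df = sqlContext.sql("SELECT name, num FROM mc_test_table")
    df.show()
    df
  }
}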

MaxCompute table read and write example (Python)

For the Python sample code that reads and writes MaxCompute tables, see spark_sql.py.

MaxCompute table read and write example (Java)

For the Java sample code that reads and writes MaxCompute tables, see JavaSparkSQL.java.