This topic describes how to configure the dependencies for Spark-1.x and provides Spark-1.x examples.
Configure the dependencies for Spark-1.x
To submit an application by using the Spark client provided by MaxCompute, you must add the following dependencies to the pom.xml file.
<properties>
    <spark.version>1.6.3</spark.version>
    <cupid.sdk.version>3.3.3-public</cupid.sdk.version>
    <scala.version>2.10.4</scala.version>
    <scala.binary.version>2.10</scala.binary.version>
</properties>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_${scala.binary.version}</artifactId>
    <version>${spark.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_${scala.binary.version}</artifactId>
    <version>${spark.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib_${scala.binary.version}</artifactId>
    <version>${spark.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_${scala.binary.version}</artifactId>
    <version>${spark.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>com.aliyun.odps</groupId>
    <artifactId>cupid-sdk</artifactId>
    <version>${cupid.sdk.version}</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>com.aliyun.odps</groupId>
    <artifactId>hadoop-fs-oss</artifactId>
    <version>${cupid.sdk.version}</version>
</dependency>
<dependency>
    <groupId>com.aliyun.odps</groupId>
    <artifactId>odps-spark-datasource_${scala.binary.version}</artifactId>
    <version>${cupid.sdk.version}</version>
</dependency>
<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>${scala.version}</version>
</dependency>
<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-actors</artifactId>
    <version>${scala.version}</version>
</dependency>
The scopes in the preceding code are defined as follows:
- Set the scope to provided for all packages that are released in the Apache Spark community, such as spark-core and spark-sql.
- Use the default compile scope for odps-spark-datasource.
WordCount example (Scala)
- Code example: see the sketch after the submit commands below.
- Submission method
cd /path/to/MaxCompute-Spark/spark-1.x
mvn clean package
# For the configuration of environment variables and spark-defaults.conf, see Set up a development environment.
cd $SPARK_HOME
bin/spark-submit --master yarn-cluster --class com.aliyun.odps.spark.examples.WordCount \
    /path/to/MaxCompute-Spark/spark-1.x/target/spark-examples_2.10-1.0.0-SNAPSHOT-shaded.jar
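The full WordCount code is in the MaxCompute-Spark repository (com.aliyun.odps.spark.examples.WordCount). As a rough sketch only, the core logic follows the standard Spark word-count pattern; the sample data below is illustrative and is not the repository's exact code.
package com.aliyun.odps.spark.examples

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount")
    val sc = new SparkContext(conf)
    try {
      // Build a small in-memory RDD (illustrative data), split each line
      // into words, and count the occurrences of each word.
      val counts = sc.parallelize(Seq("hello maxcompute", "hello spark"))
        .flatMap(_.split("\\s+"))
        .map(word => (word, 1L))
        .reduceByKey(_ + _)
      counts.collect().foreach(println)
    } finally {
      sc.stop()
    }
  }
}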
MaxCompute table read and write example (Scala)
- Code example: see the sketch after the submit commands below.
- Submission method
cd /path/to/MaxCompute-Spark/spark-1.x
mvn clean package
# For the configuration of environment variables and spark-defaults.conf, see Set up a development environment.
cd $SPARK_HOME
bin/spark-submit --master yarn-cluster --class com.aliyun.odps.spark.examples.sparksql.SparkSQL \
    /path/to/MaxCompute-Spark/spark-1.x/target/spark-examples_2.10-1.0.0-SNAPSHOT-shaded.jar
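The full table read and write code is in the MaxCompute-Spark repository (com.aliyun.odps.spark.examples.sparksql.SparkSQL). The following is a rough sketch of the pattern only: it assumes the ODPS-aware OdpsContext entry point shipped with odps-spark-datasource, and the import path and table name are illustrative assumptions rather than the repository's exact code.
package com.aliyun.odps.spark.examples.sparksql

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.OdpsContext // assumption: ODPS-aware SQLContext from odps-spark-datasource

object SparkSQL {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SparkSQL")
    val sc = new SparkContext(conf)
    // Create a SQLContext that is bound to the current MaxCompute project.
    val sqlContext = new OdpsContext(sc)

    // The table name below is illustrative; it is created, written to,
    // and read back inside the current project.
    sqlContext.sql("DROP TABLE IF EXISTS spark_sql_test_table")
    sqlContext.sql("CREATE TABLE spark_sql_test_table(name STRING, num BIGINT)")
    sqlContext.sql("INSERT INTO TABLE spark_sql_test_table SELECT 'abc', 100000")
    sqlContext.sql("SELECT * FROM spark_sql_test_table").show()

    sc.stop()
  }
}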
MaxCompute table read and write example (Python)
For the Python sample code that reads data from and writes data to MaxCompute tables, see spark_sql.py.
MaxCompute table read and write example (Java)
For the Java sample code that reads data from and writes data to MaxCompute tables, see JavaSparkSQL.java.