
Installing Spark (Fully Distributed Deployment -- Standalone)


1. Upload the Spark archive and extract it into /opt
tar -zxvf spark-1.6.2-bin-hadoop2.6.tgz -C /opt/
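The archive unpacks to /opt/spark-1.6.2-bin-hadoop2.6, while the remaining steps assume /opt/spark; renaming the directory (a symlink would also work) keeps the paths consistent:
mv /opt/spark-1.6.2-bin-hadoop2.6 /opt/spark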
2. Modify the environment variables
vi /etc/profile
export SPARK_HOME=/opt/spark
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
source /etc/profile
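A quick way to confirm the variables took effect (assuming the directory was renamed to /opt/spark as above):
echo $SPARK_HOME          # should print /opt/spark
spark-submit --version    # should report Spark 1.6.2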
3. Configure spark-env.sh
a. cd /opt/spark/conf
b. cp spark-env.sh.template spark-env.sh
c. vi spark-env.sh
export JAVA_HOME=/opt/jdk                        # JDK installation directory
export SCALA_HOME=/opt/scala                     # Scala installation directory
export SPARK_MASTER_IP=master                    # hostname of the Master node
export SPARK_MASTER_PORT=7077                    # port the Master listens on (7077 is the default)
export SPARK_WORKER_CORES=1                      # CPU cores each Worker may use
export SPARK_WORKER_INSTANCES=1                  # Worker processes per node
export SPARK_WORKER_MEMORY=3g                    # memory each Worker may use
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop    # lets Spark find the Hadoop/HDFS configuration
4. Configure slaves
a. cd /opt/spark/conf
b. mv slaves.template slaves
c. vi slaves
master
slave01
slave02
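In a fully distributed deployment, the same Spark directory and environment settings must exist on every node. A minimal sketch, assuming passwordless SSH to slave01 and slave02 and identical paths on all machines:
scp -r /opt/spark slave01:/opt/
scp -r /opt/spark slave02:/opt/
scp /etc/profile slave01:/etc/profile
scp /etc/profile slave02:/etc/profile
Afterwards run source /etc/profile on each slave.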
5. Start and verify Spark (a sketch of the start and check commands follows this list)
a. Open the master's web UI on port 8080 to confirm that the Workers have registered
b. Run spark-shell to verify that jobs can be submitted
