spark

download spark 1.6.1 prebuilt for hadoop 2.6 and unpack it

spark-1.6.1-bin-hadoop2.6.tgz

run

%SPARK_HOME%\bin\spark-shell

error on windows 7 x64

@see https://blogs.msdn.microsoft.com/arsen/2016/02/09/resolving-spark-1-6-0-java-lang-nullpointerexception-not-found-value-sqlcontext-error-when-running-spark-shell-on-windows-10-64-bit/
java.lang.RuntimeException: java.lang.NullPointerException
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:171)

the shell itself starts, but sqlContext fails to initialize because on windows hadoop needs winutils.exe and a writable \tmp\hive (both fixed in the sections below)

winutils.exe support

download https://github.com/steveloughran/winutils/raw/master/hadoop-2.6.0/bin/winutils.exe and save it as %HADOOP_HOME%\bin\winutils.exe
download https://github.com/steveloughran/winutils/raw/master/hadoop-2.6.0/bin/hadoop.dll and save it as %HADOOP_HOME%\bin\hadoop.dll
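
if you don't want to set HADOOP_HOME globally, hadoop also honors the hadoop.home.dir JVM property, as long as it is set before the first hadoop class loads. that rules out typing it into spark-shell, but a standalone app can do it; a minimal sketch (the e:\hadoop path just mirrors the HADOOP_HOME used below):

import org.apache.spark.{SparkConf, SparkContext}

object WinutilsCheck {
  def main(args: Array[String]): Unit = {
    // must run before any Hadoop class loads; bin\ under this directory must hold winutils.exe
    System.setProperty("hadoop.home.dir", "e:\\hadoop")
    val sc = new SparkContext(new SparkConf().setAppName("winutils-check").setMaster("local[*]"))
    println(sc.parallelize(1 to 10).sum())  // expect 55.0
    sc.stop()
  }
}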

The program can't start because MSVCR100.dll is missing from your computer

remove the existing copies first, then reinstall the visual c++ 2010 runtime (next section):

del c:\windows\system32\msvcr100.dll
del c:\windows\system32\msvcr100_clr0400.dll
del c:\windows\sysWOW64\msvcr100.dll
del c:\windows\sysWOW64\msvcr100_clr0400.dll

The application was unable to start correctly (0xc000007b)

on windows 7 x64, install the x64 redistributable:
https://www.microsoft.com/en-us/download/confirmation.aspx?id=14632
vcredist_x64.exe

and the x86 one as well:
https://www.microsoft.com/en-us/download/confirmation.aspx?id=5555
vcredist_x86.exe

check that msvcr100.dll now exists

dir c:\windows\system32\msvcr100.dll

java.lang.RuntimeException: java.lang.RuntimeException: Error while running command to get file permissions : ExitCodeException exitCode=-1073741701:

exitCode=-1073741701 is 0xC000007B, the same "unable to start correctly" error as above, i.e. winutils.exe itself failed to launch; with the runtime installed, also make \tmp\hive writable:

e:
cd\
cd spark-1.6.1-bin-hadoop2.6
REM (optional) dump the current environment for inspection
set
REM HADOOP_HOME must point at the directory whose bin\ holds winutils.exe
set HADOOP_HOME=e:\hadoop
REM keep the temp directory on a writable local drive
set tmp=e:\tmp
mkdir %tmp%
dir %tmp%
REM hive needs a world-writable \tmp\hive on the current drive
%HADOOP_HOME%\bin\winutils.exe ls \tmp\hive
%HADOOP_HOME%\bin\winutils.exe chmod 777 \tmp\hive
%HADOOP_HOME%\bin\winutils.exe chmod 777 \tmp
%HADOOP_HOME%\bin\winutils.exe ls \tmp\hive
bin\spark-shell

……
scala> sc.parallelize(1 to 1000).count()
……
res0: Long = 1000
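
since the original failure was "not found: value sqlContext", also confirm the hive-backed context now comes up; a quick probe using the sqlContext the shell predefines (range is a standard SQLContext method in 1.6):

scala> sqlContext                      // should print a HiveContext instance, no NullPointerException
scala> sqlContext.range(5).count()     // expect res: Long = 5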

scala> val textFile = sc.textFile("README.md")
……
scala> textFile.count() // Number of items in this RDD
res1: Long = 98
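
the same RDD supports further transformations; for example, count only the lines that mention Spark (the standard quick-start follow-up):

scala> val linesWithSpark = textFile.filter(line => line.contains("Spark"))
scala> linesWithSpark.count()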


scala> val textFile5 = sc.textFile("file:///e:/spark-1.6.1-bin-hadoop2.6/README.md");textFile5.count
textFile5: org.apache.spark.rdd.RDD[String] = file:///e:/spark-1.6.1-bin-hadoop2.6/README.md MapPartitionsRDD[10] at textFile at <console>:21
res5: Long = 95
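
to double-check that the file:/// URI picked up the right file, peek at the first lines (take is a plain RDD action):

scala> textFile5.take(3).foreach(println)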

on docker

docker run -i -t -h sandbox sequenceiq/spark:1.6.0 bash
#bash-4.1# spark-shell --master yarn-client --driver-memory 1g --executor-memory 1g --executor-cores 1
bash-4.1# spark-shell
……
scala> sc.parallelize(1 to 1000).count()
……
res0: Long = 1000

# this fails: without a scheme the path resolves against hdfs, and README.md is not there
scala> val textFile = sc.textFile("README.md")
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://sandbox:9000/user/root/README.md

point at the README.md on local storage instead

scala> val textFile = sc.textFile("file:///usr/local/spark/README.md")
textFile: org.apache.spark.rdd.RDD[String] = file:///usr/local/spark/README.md MapPartitionsRDD[3] at textFile at <console>:21
scala> textFile.count()
res1: Long = 98

put README.md onto hdfs

bash-4.1# hadoop fs -put /usr/local/spark/README.md README.md
bash-4.1# spark-shell

scala> val textFile = sc.textFile("README.md")
textFile: org.apache.spark.rdd.RDD[String] = README.md MapPartitionsRDD[1] at textFile at <console>:21
scala> textFile.count()
res0: Long = 98
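
with the file on hdfs, the classic word count runs unchanged; a minimal sketch (the wordcount-out output path is made up for this example):

scala> val counts = textFile.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
scala> counts.saveAsTextFile("wordcount-out")   // lands in hdfs://sandbox:9000/user/root/wordcount-out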