Note: after plugging the USB stick into the Abit AN-M2, power the machine on and confirm the BIOS sees the USB disk, then move USB-HDD to the top of the boot order: Advanced BIOS Features > Hard Disk Boot Priority > select USB-HDD. If it does not show up, try rebooting again so the board can detect the USB-HDD. It is also possible the USB stick was not built correctly and the board simply cannot recognize it.
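If the stick itself is the suspect, rewriting the image is a quick way to rule it out. A minimal sketch, assuming the installer image is at ~/ubuntu.iso and the stick enumerates as /dev/sdb (both are placeholders; verify the device with lsblk first, since writing to the wrong device destroys its contents):

lsblk                                   # double-check which device node is the USB stick
sudo dd if=~/ubuntu.iso of=/dev/sdb bs=4M
sync                                    # flush writes before unplugging the stick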
sudo apt-get install memtester
sudo memtester 1024 5   # allocate 1024 MB of memory and repeat the test 5 times
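memtester exits with a non-zero status if any test fails, so the run can be scripted; a small sketch (the log path /tmp/memtest.log is just an example):

sudo memtester 1024 5 > /tmp/memtest.log 2>&1 && echo "memory OK" || echo "memory errors, see /tmp/memtest.log"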
vi arrow keys misbehaving
Add the following to ~/.vimrc:
set nocompatible
set backspace=2
ftp
sudo apt-get install vsftpd
sudo vi /etc/vsftpd.conf
write_enable=YES   # uncomment this line to allow writes
sudo service vsftpd restart
sudo service vsftpd status
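To confirm that write access really works after the restart, a quick upload test from the same machine; a sketch, assuming a local account named user1 (curl will prompt for the password):

echo hello > /tmp/ftp-test.txt
curl -T /tmp/ftp-test.txt ftp://localhost/ --user user1   # uploads the test file via FTP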
other
gnome-session-fallback no longer works. Supposedly a GNOME desktop that supports it will come out in July. Let's see.
hard disk SMART information
sudo apt-get install smartmontools
sudo smartctl -a /dev/sda   # print all SMART info
sudo smartctl -i /dev/sda   # print device identity info
sudo smartctl -H /dev/sda   # check overall health
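The SMART attributes only report what the drive has already logged; to actively exercise the disk you can start a self-test and read the result afterwards (the short test usually takes a couple of minutes):

sudo smartctl -t short /dev/sda      # start a short offline self-test
sudo smartctl -l selftest /dev/sda   # view the self-test log once it finishes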
ntp

vi /etc/ntp.conf
server tock.stdtime.gov.tw prefer
server tick.stdtime.gov.tw
server time.stdtime.gov.tw
restrict tock.stdtime.gov.tw
restrict tick.stdtime.gov.tw
restrict time.stdtime.gov.tw
service ntp restart
sudo apt-get install ntpstat
ntpstat
synchronised to NTP server (211.22.103.157) at stratum 3
   time correct to within 104 ms
   polling server every 64 s
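ntpq gives a more detailed view of which of the configured servers ntpd has actually selected:

ntpq -p   # the peer marked with * is the current sync source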
on windows

Launching bin\spark-shell fails with:
java.lang.RuntimeException: java.lang.RuntimeException: Error while running command to get file permissions : ExitCodeException exitCode=-1073741701:
e:
cd\
cd spark-1.6.1-bin-hadoop2.6
set
set HADOOP_HOME=e:\hadoop
set tmp=e:\tmp
mkdir %tmp%
dir %tmp%
%HADOOP_HOME%\bin\winutils.exe ls \tmp\hive
%HADOOP_HOME%\bin\winutils.exe chmod 777 \tmp\hive
%HADOOP_HOME%\bin\winutils.exe chmod 777 \tmp
%HADOOP_HOME%\bin\winutils.exe ls \tmp\hive
bin\spark-shell
……
scala> sc.parallelize(1 to 1000).count()
……
res0: Long = 1000
scala> val textFile = sc.textFile("README.md")
……
scala> textFile.count() // Number of items in this RDD
res1: Long = 98
# Read the same file through an explicit file:/// path:
scala> val textFile5 = sc.textFile("file:///e:/spark-1.6.1-bin-hadoop2.6/README.md"); textFile5.count
textFile5: org.apache.spark.rdd.RDD[String] = file:///e:/spark-1.6.1-bin-hadoop2.6/README.md MapPartitionsRDD[10] at textFile at <console>:21
res5: Long = 95
on docker
docker run -i -t -h sandbox sequenceiq/spark:1.6.0 bash
# bash-4.1# spark-shell --master yarn-client --driver-memory 1g --executor-memory 1g --executor-cores 1
bash-4.1# spark-shell
……
scala> sc.parallelize(1 to 1000).count()
……
res0: Long = 1000
# Here README.md cannot be found:
scala> val textFile = sc.textFile("README.md")
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://sandbox:9000/user/root/README.md
Point it at the README.md on local storage instead
scala> val textFile = sc.textFile("file:///usr/local/spark/README.md")
textFile: org.apache.spark.rdd.RDD[String] = file:///usr/local/spark/README.md MapPartitionsRDD[3] at textFile at <console>:21
scala> textFile.count()
res1: Long = 98
Or put README.md onto Hadoop (HDFS)
bash-4.1# hadoop fs -put /usr/local/spark/README.md README.md
bash-4.1# spark-shell
…
scala> val textFile = sc.textFile("README.md")
textFile: org.apache.spark.rdd.RDD[String] = README.md MapPartitionsRDD[1] at textFile at <console>:21
scala> textFile.count()
res0: Long = 98
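To double-check where the upload landed, note that with no explicit path hadoop fs -put writes into the user's HDFS home directory (here /user/root, matching the path in the earlier error message); a quick sketch:

bash-4.1# hadoop fs -ls /user/root          # README.md should appear here
bash-4.1# hadoop fs -cat README.md | head   # print the first lines to confirm the content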