Error 1:

    \Restfull\spark\external\flume-sink\src\main\scala\org\apache\spark\streaming\flume\sink\Logging.scala

  Error:(26, 21) Logging is already defined as trait Logging
   private[sink] trait Logging

This happens because IDEA does not automatically download some of the source files that flume-sink requires, so the module fails to compile.

Solution:

In IntelliJ IDEA:

- Open View -> Tool Windows -> Maven Projects
- Right-click Spark Project External Flume Sink
- Click Generate Sources and Update Folders

IntelliJ IDEA will then download the packages that Flume Sink depends on automatically.

Then run Build -> Make Project again, and everything is OK!

This should generate source code from sparkflume.avdl.

Generate Sources and Update Folders also resolves the "type SparkFlumeProtocol not found" issue.
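If you prefer the command line, the same sources can be generated outside the IDE. The sketch below is an assumption based on the standard Spark 1.x Maven build, where external/flume-sink/pom.xml binds the avro-maven-plugin to the generate-sources phase:

    # Run from the Spark source root (assumes mvn is on the PATH).
    # Only the generate-sources phase is executed, so the avro-maven-plugin
    # compiles sparkflume.avdl into the missing SparkFlumeProtocol sources.
    mvn -pl external/flume-sink generate-sources

Afterwards, reimport the Maven project so IDEA registers the generated folder as a source root.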
Source:

Spark build error 2

Here is another example of a missing-file error:

The "type HiveShim not found" error occurs when compiling the Spark 1.4.0/1.4.1 source code.

Solution:

- Open Project Settings -> Modules and select "spark-hive-thriftserver_2.10".
- In the "Sources" tab, select the "v0.13.1" node and click "Sources" to mark it as a source root.
- Do the same for "spark-hive_2.10": select its "v0.13.1" node and mark it as sources.
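The underlying cause, as far as I can tell, is that the Spark 1.4.x build keeps version-specific Hive shim sources in a v0.13.1 folder that the build-helper-maven-plugin adds as an extra source root, and IDEA's Maven import sometimes misses it. A hedged command-line sketch that triggers the same source-root registration (assuming the standard hive profiles):

    # Assumption: the hive/hive-thriftserver profiles pull in the v0.13.1
    # shim directories; generate-sources lets Maven register them, and a
    # subsequent project reimport picks them up in IDEA.
    mvn -Phive -Phive-thriftserver -pl sql/hive,sql/hive-thriftserver generate-sources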

Spark build error 3

This error tends to appear when the project is compiled repeatedly:

    Error:scalac:
         while compiling: C:\Users\Administrator\IdeaProjects\spark-1.6.0\sql\core\src\main\scala\org\apache\spark\sql\util\QueryExecutionListener.scala
            during phase: jvm
         library version: version 2.10.5
        compiler version: version 2.10.5
      reconstructed args: -nobootcp -deprecation -classpath C:\Program Files\Java\jdk1.8.0_66\jre\lib\charsets.jar;C:\Program ......C:\Program Files\Java\jdk1.8.0_66\jre\lib\rt.jar;C:\Users\Administrator\IdeaProjects\spark-1.6.0\sql\core\target\scala-2.10\classes;C:\Users\Administrator\IdeaProjects\spark-1.6.0\core\target\scala-2.10\classes;C:\Users\Administrator\.m2\repository\org\apache\avro\avro-mapred\1.7.7\avro-mapred-1.7.7-hadoop2.jar;......C:\Users\Administrator\.m2\repository\org\objenesis\objenesis\1.0\objenesis-1.0.jar;C:\Users\Administrator\.m2\repository\org\spark-project\spark\unused\1.0.0\unused-1.0.0.jar -feature -javabootclasspath ; -unchecked

      last tree to typer: Literal(Constant(org.apache.spark.sql.test.ExamplePoint))
                  symbol: null
       symbol definition: null
                     tpe: Class(classOf[org.apache.spark.sql.test.ExamplePoint])
           symbol owners:
          context owners: anonymous class withErrorHandling$1 -> package util

    == Enclosing template or block ==

    Template( // val <local withErrorHandling$1>, tree.tpe=org.apache.spark.sql.util.withErrorHandling$1
      "scala.runtime.AbstractFunction1", "scala.Serializable" // parents
      ValDef(
        private
        "_"
        ......
      )
      ......
      ExecutionListenerManager$$anonfun$org$apache$spark$sql$util$ExecutionListenerManager$$withErrorHandling$1.super."<init>" // def <init>(): scala.runtime.AbstractFunction1 in class AbstractFunction1, tree.tpe=()scala.runtime.AbstractFunction1
      Nil
      ()
    )

    == Expanded type of tree ==

    ConstantType(value = Constant(org.apache.spark.sql.test.ExamplePoint))

    uncaught exception during compilation: java.lang.AssertionError

Solution:

Build -> Rebuild Project

It's as simple as that…
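Rebuild Project discards IDEA's incremental-compile state, which is where the stale AssertionError comes from. If it ever persists, a clean build from the command line (a sketch assuming the standard Spark Maven build) clears the on-disk target/ output as well:

    # Wipe previous compile output and rebuild from scratch; skipping
    # tests keeps the turnaround reasonable.
    mvn -DskipTests clean package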

Here is a large collection of Spark build errors: