java -cp "hiveJdbcQueryUtils-0.1.jar:lib/*" com.iwinner.hive.select.hive.main.MainProcess
Sample WebService Example (Service Side)

public class LoginWsServiceImpl implements LoginWsService {

    private static final Logger LOGGERS = Logger.getLogger("common");
    private static final Logger EDRS = Logger.getLogger("common1");

    private LoginServiceIF loginServiceIF;

    public Response login(String username, String password, String appName, String appId) throws RemoteException {
        LOGGERS.info("Enter into the login operation");
        // ... delegate to loginServiceIF and build the Response ...
    }
}
This tutorial on Hadoop MapReduce data flow provides the complete MapReduce data-flow chart in Hadoop. It covers the phases of MapReduce job execution in detail: Input Files, InputFormat in Hadoop, InputSplits, RecordReader, Mapper, Combiner, Partitioner, Shuffling and Sorting, Reducer, RecordWriter, and OutputFormat.

Input Files
     |
InputFormat in Hadoop (InputSplits, RecordReader)
     |
MapReduce
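As an illustrative sketch (plain Java, no Hadoop on the classpath), the phase order above can be simulated with a word count, where each step in the method stands in for the corresponding MapReduce phase:

```java
import java.util.*;
import java.util.stream.*;

public class DataFlowSketch {
    public static Map<String, Integer> wordCount(List<String> lines) {
        // Map phase: each input record (a line) is split into (word, 1) pairs.
        List<Map.Entry<String, Integer>> mapped = lines.stream()
                .flatMap(line -> Arrays.stream(line.split("\\s+")))
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());

        // Shuffle and sort phase: group pairs by key; a TreeMap keeps keys
        // sorted, as the framework does before handing groups to reducers.
        TreeMap<String, List<Integer>> shuffled = new TreeMap<>();
        for (Map.Entry<String, Integer> e : mapped) {
            shuffled.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        }

        // Reduce phase: sum the values for each key.
        Map<String, Integer> result = new LinkedHashMap<>();
        shuffled.forEach((word, counts) ->
                result.put(word, counts.stream().mapToInt(Integer::intValue).sum()));
        return result;
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("to be or not to be")));
        // {be=2, not=1, or=1, to=2}
    }
}
```

In a real job the shuffle happens across the network between mapper and reducer tasks; the in-memory TreeMap here only mimics its grouping-and-sorting behaviour.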
Writable is an interface in Hadoop, and key/value types in Hadoop must implement this interface. Hadoop provides Writable wrappers for almost all Java primitive types and some other types, but sometimes we need to pass custom objects, and these custom objects should implement Hadoop's Writable interface. Hadoop MapReduce uses implementations of Writables for interacting with user-provided Mappers and Reducers.

To implement the Writable interface we must provide two methods:

public interface Writable {
    void readFields(DataInput in) throws IOException;
    void write(DataOutput out) throws IOException;
}
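As a sketch of such a custom object, here is a hypothetical PersonWritable carrying a name and an age. To keep the snippet runnable without Hadoop on the classpath, it declares a local interface with the same shape as org.apache.hadoop.io.Writable:

```java
import java.io.*;

// Local stand-in with the same shape as org.apache.hadoop.io.Writable,
// so the example compiles without Hadoop on the classpath.
interface Writable {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
}

// Hypothetical custom type: serialized as (UTF name, int age).
// write() and readFields() must emit and consume fields in the same order.
class PersonWritable implements Writable {
    String name;
    int age;

    public void write(DataOutput out) throws IOException {
        out.writeUTF(name);
        out.writeInt(age);
    }

    public void readFields(DataInput in) throws IOException {
        name = in.readUTF();
        age = in.readInt();
    }
}

public class WritableDemo {
    public static void main(String[] args) throws IOException {
        PersonWritable p = new PersonWritable();
        p.name = "alice";
        p.age = 30;

        // Round-trip through the same DataOutput/DataInput streams
        // Hadoop uses under the hood.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        p.write(new DataOutputStream(buf));

        PersonWritable q = new PersonWritable();
        q.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(q.name + ":" + q.age); // alice:30
    }
}
```

For use as a MapReduce key the type would additionally implement WritableComparable, so the framework can sort it during the shuffle.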
Why use Hadoop Writable(s)?
import scala.io.Source

val fileName = "input.csv"
val source = Source.fromFile(fileName)
for (line <- source.getLines()) {
  println(line)
}
source.close() // release the file handle
http://baahu.in/spark-add-a-new-column-to-a-dataframe-using-udf-and-withcolumn/
import scala.io.Source

var list = List[String]()
var set = Set[String]()
var vec = Vector[String]()
var seq = Seq[String]()

val source = Source.fromFile("E:/Data/input.txt")
for (ref <- source.getLines()) {
  list = list :+ ref
  set = set + ref
  vec = vec :+ ref
  seq = seq :+ ref
}
source.close()
Traversable
     |
Iterable
     |
Seq    Set    Map

Seq has two main sub-types:
  i)  IndexedSeq
  ii) LinearSeq

IndexedSeq:
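IndexedSeq (e.g. Scala's Vector) promises efficient positional access by index, while LinearSeq (e.g. List) is optimised for head/tail traversal. Java draws the same distinction with its RandomAccess marker interface, sketched here as an analogy (ArrayList plays the IndexedSeq role, LinkedList the LinearSeq role):

```java
import java.util.*;

public class SeqKinds {
    public static void main(String[] args) {
        List<String> indexed = new ArrayList<>(List.of("a", "b", "c"));
        List<String> linear  = new LinkedList<>(List.of("a", "b", "c"));

        // ArrayList advertises cheap positional access via the RandomAccess
        // marker interface, much like an IndexedSeq; LinkedList does not,
        // much like a LinearSeq, where get(i) walks the chain of nodes.
        System.out.println(indexed instanceof RandomAccess); // true
        System.out.println(linear instanceof RandomAccess);  // false
    }
}
```

The practical rule is the same in both languages: index-heavy access patterns want an indexed sequence; recursive head/tail processing wants a linear one.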
Modifier     Class    Package    Subclass    Outside Package
public       Yes      Yes        Yes         Yes
protected    Yes      Yes        Yes         No
default      Yes      Yes        No          No
private      Yes      No         No          No
There are 3 ways of processing XML files in Hadoop:
1. Pig: using classes from the Piggybank jar file.
2. Hive: using a SerDe (Serialization/Deserialization) method.
3. MapReduce coding: lengthier coding using classes from the OOXML jar files.
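Whichever route is chosen, the core step is turning an XML record into fields. A minimal sketch of that step in plain Java, using only the JDK's built-in DOM parser (no Hadoop, Pig, or Hive on the classpath; the employee record shape is made up for illustration):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XmlRecord {
    // Extract one field from a single XML record, the way a mapper or SerDe
    // would before emitting key/value pairs or table columns.
    public static String field(String xml, String tag) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return doc.getElementsByTagName(tag).item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        String record = "<employee><name>ravi</name><dept>sales</dept></employee>";
        System.out.println(field(record, "name")); // ravi
        System.out.println(field(record, "dept")); // sales
    }
}
```

In a real job the hard part the three approaches solve is record splitting: carving one well-formed XML record out of a large split so that each mapper (or SerDe row) sees exactly one record like the string above.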