2014-09-22 11:30:37,923 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.lang.NullPointerException at org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:168)
When we check the source code of FileRecordWriterContainer.java:
    if (dynamicPartitioningUsed) {
        // calculate which writer to use from the remaining values -
        // this needs to be done before we delete cols
        List<String> dynamicPartValues = new ArrayList<String>();
        for (Integer colToAppend : dynamicPartCols) {
            dynamicPartValues.add(value.get(colToAppend).toString());
        }
This shows where the error comes from: when a dynamic partition column holds a null value, value.get(colToAppend) returns null, and calling toString() on it throws the NullPointerException.
However, the same issue didn't happen in CDH4.7.
Hive does not allow empty strings as partition keys; when a query returns NULL for a partition column, Hive substitutes the string __HIVE_DEFAULT_PARTITION__ instead of a real NULL.
So the cause is likely either that Pig passes the value through as a real null, or that this version of HCatalog does not handle NULL dynamic partition values.
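To illustrate the idea, here is a minimal null-safe sketch of the loop above. This is not the actual upstream fix, just a standalone demonstration (the class name DynamicPartitionGuard and the helper method are hypothetical): instead of calling toString() on a possibly-null column value, it substitutes Hive's __HIVE_DEFAULT_PARTITION__ placeholder.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DynamicPartitionGuard {
    // Hive's placeholder string for a NULL partition key value.
    static final String DEFAULT_PARTITION = "__HIVE_DEFAULT_PARTITION__";

    // Null-safe variant of the HCatalog loop: a null column value is
    // mapped to the default partition name instead of triggering an NPE.
    static List<String> partitionValues(List<Object> record,
                                        List<Integer> dynamicPartCols) {
        List<String> dynamicPartValues = new ArrayList<String>();
        for (Integer colToAppend : dynamicPartCols) {
            Object v = record.get(colToAppend);
            dynamicPartValues.add(v == null ? DEFAULT_PARTITION : v.toString());
        }
        return dynamicPartValues;
    }

    public static void main(String[] args) {
        // A record whose second field (a dynamic partition column) is null.
        List<Object> record = Arrays.<Object>asList("a", null, 42);
        System.out.println(partitionValues(record, Arrays.asList(1, 2)));
        // prints [__HIVE_DEFAULT_PARTITION__, 42]
    }
}
```

With a guard like this, a NULL partition value ends up in the __HIVE_DEFAULT_PARTITION__ partition (matching Hive's own convention) rather than killing the task with an NPE. Until the library handles this, the practical workaround is to filter out or replace null partition values in the Pig script before storing through HCatalog.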
Reference:
https://apache.googlesource.com/hcatalog/+/branch-0.4/src/java/org/apache/hcatalog/mapreduce/FileRecordWriterContainer.java