Writing and Using UDTFs in Hive
1. Introduction to UDTFs
A UDTF (User-Defined Table-Generating Function) addresses the need to produce multiple output rows from a single input row (one-to-many mapping).
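As a quick illustration of one-to-many mapping, Hive's built-in explode() is itself a UDTF: it turns an array-valued expression into one output row per element. A minimal sketch, assuming src is just any existing table you can select from:

```sql
-- Each input row of src yields three output rows ('a', 'b', 'c').
SELECT explode(array('a', 'b', 'c')) AS item FROM src;
```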
2. Writing Your Own UDTF
- Extend org.apache.hadoop.hive.ql.udf.generic.GenericUDTF.
- Implement the three methods initialize, process, and close.
- Hive first calls initialize, which returns the schema of the rows the UDTF will produce (the number of columns and their types). After initialization, process is called to handle the input arguments, and results are emitted through forward(). Finally, close is called so that any resources needing cleanup can be released.
Below is a UDTF I wrote to split strings of the form "key:value;key:value;"; it returns the key and the value as two fields. For reference:
```java
import java.util.ArrayList;

import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

public class ExplodeMap extends GenericUDTF {

    @Override
    public void close() throws HiveException {
        // nothing to clean up
    }

    @Override
    public StructObjectInspector initialize(ObjectInspector[] args)
            throws UDFArgumentException {
        if (args.length != 1) {
            throw new UDFArgumentLengthException("ExplodeMap takes only one argument");
        }
        if (args[0].getCategory() != ObjectInspector.Category.PRIMITIVE) {
            throw new UDFArgumentException("ExplodeMap takes string as a parameter");
        }

        // Declare the output schema: two string columns, col1 and col2.
        ArrayList<String> fieldNames = new ArrayList<String>();
        ArrayList<ObjectInspector> fieldOIs = new ArrayList<ObjectInspector>();
        fieldNames.add("col1");
        fieldOIs.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        fieldNames.add("col2");
        fieldOIs.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);

        return ObjectInspectorFactory.getStandardStructObjectInspector(fieldNames, fieldOIs);
    }

    @Override
    public void process(Object[] args) throws HiveException {
        // Split "key:value;key:value;" and emit one row per key/value pair.
        String input = args[0].toString();
        String[] test = input.split(";");
        for (int i = 0; i < test.length; i++) {
            try {
                String[] result = test[i].split(":");
                forward(result);
            } catch (Exception e) {
                continue;
            }
        }
    }
}
```
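Before the function can be called from HiveQL, the compiled class has to be packaged into a jar and registered in the session. A minimal sketch follows; the jar path is an assumption, only the class name ExplodeMap and the function name explode_map used in the examples below come from this post:

```sql
-- Make the compiled UDTF available to the Hive session (jar path is illustrative).
ADD JAR /path/to/explode_map.jar;
-- Register it under the name used in the queries below.
CREATE TEMPORARY FUNCTION explode_map AS 'ExplodeMap';
```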
3. How to Use It
A UDTF can be used in two ways: directly in the SELECT clause, or together with lateral view.
1: Used directly in a SELECT: select explode_map(properties) as (col1,col2) from src;
- It cannot be combined with other columns in the SELECT list: select a, explode_map(properties) as (col1,col2) from src
- It cannot be nested: select explode_map(explode_map(properties)) from src
- It cannot be used together with group by / cluster by / distribute by / sort by (see the workaround sketched after this list): select explode_map(properties) as (col1,col2) from src group by col1, col2
2: Used with lateral view: select src.id, mytable.col1, mytable.col2 from src lateral view explode_map(properties) mytable as col1, col2;
- This approach is more convenient for everyday use. Its execution is equivalent to performing the extractions separately and then UNIONing the results into one table.
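As a sketch of how lateral view sidesteps the SELECT-clause restrictions above, the query below keeps another column from src and aggregates over the generated columns by wrapping the lateral view in a subquery. The columns id and properties of src are assumptions carried over from the example above:

```sql
-- Other columns and GROUP BY are fine once the UDTF runs inside a LATERAL VIEW / subquery.
SELECT t.col1, count(*) AS cnt
FROM (
  SELECT src.id, mytable.col1, mytable.col2
  FROM src LATERAL VIEW explode_map(properties) mytable AS col1, col2
) t
GROUP BY t.col1;
```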
4. References
http://wiki.apache.org/hadoop/Hive/LanguageManual/UDF
http://wiki.apache.org/hadoop/Hive/DeveloperGuide/UDTF
http://www.slideshare.net/pauly1/userdefined-table-generating-functions