Hive pitfall notes

Hive query error

0: jdbc:hive2://localhost:10000> select * from testa;
Error: Error while compiling statement: FAILED: SemanticException Unable to determine if hdfs://localhost:9000/user/hive/warehouse/testa is encrypted: java.lang.IllegalArgumentException: Wrong FS: hdfs://localhost:9000/user/hive/warehouse/testa, expected: hdfs://hadoop-master:9000 (state=42000,code=40000)
0: jdbc:hive2://localhost:10000>

Cause:

The fs.defaultFS property (in core-site.xml) had previously been configured as:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>

It was later changed to:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-master:9000</value>
</property>
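
After the change, you can confirm which filesystem URI the running cluster actually resolves; a quick check, assuming the Hadoop client scripts are on the PATH:

hdfs getconf -confKey fs.defaultFS
# should now print hdfs://hadoop-master:9000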

Solution:

Update the location URIs stored in the metastore database:

use hive;

update DBS set DB_LOCATION_URI=REPLACE(DB_LOCATION_URI,'localhost','hadoop-master');

update SDS set LOCATION=REPLACE(LOCATION,'localhost','hadoop-master');
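
It is worth checking what the metastore actually stores before and after running the updates; a minimal sketch, assuming the metastore schema lives in the database named hive used above (column names as in the standard Hive metastore schema):

-- inspect the HDFS URIs recorded for databases and table storage descriptors
SELECT DB_ID, NAME, DB_LOCATION_URI FROM DBS;
SELECT SD_ID, LOCATION FROM SDS LIMIT 10;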

Hive user authorization management

Step 1: Configure hive-site.xml

<property>
    <name>hive.users.in.admin.role</name>
    <value>root</value>
    <description>Defines the super administrators; the admin role is bootstrapped automatically at startup. Comma separated list of users who are in admin role for bootstrapping.
More users can be added in ADMIN role later.</description>
</property>
<property>
    <name>hive.metastore.authorization.storage.checks</name>
    <value>true</value>
</property>
<property>
    <name>hive.metastore.execute.setugi</name>
    <value>false</value>
</property>
<property>
    <name>hive.security.authorization.enabled</name>
    <value>true</value>
    <description>Turn authorization on; enable or disable the Hive client authorization.</description>
</property>
<property>
    <name>hive.security.authorization.createtable.owner.grants</name>
    <value>ALL</value>
    <description>The creator of a table gets all privileges on it; the privileges automatically granted to the owner whenever a table gets created. An example like "select,drop" will grant select and drop privilege to the owner of the table.</description>
</property>
<property>
    <name>hive.security.authorization.task.factory</name>
    <value>org.apache.hadoop.hive.ql.parse.authorization.HiveAuthorizationTaskFactoryImpl</value>
    <description>Configuration for authorization control.</description>
</property>
<property>
    <name>hive.semantic.analyzer.hook</name>
    <value>com.hive.HiveAdmin</value>
    <description>Use a semantic analyzer hook to identify the super administrators and enforce authorization control.</description>
</property>
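
With authorization enabled, privileges on individual objects are then handed out by one of the admin users through the usual GRANT/REVOKE statements; a minimal sketch in beeline, where testa is the table from the earlier example and user1 is a hypothetical non-admin account:

GRANT SELECT ON TABLE testa TO USER user1;
SHOW GRANT USER user1 ON TABLE testa;
REVOKE SELECT ON TABLE testa FROM USER user1;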

Step 2: Implement the hook class HiveAdmin.java

package com.hive;

import org.apache.hadoop.hive.ql.parse.*;
import org.apache.hadoop.hive.ql.session.SessionState;

/**
 * Created by Ganymede.
 * Restricts admin-only operations to the super administrators: ordinary users
 * cannot grant privileges, create databases, create tables, and so on.
 */
public class HiveAdmin extends AbstractSemanticAnalyzerHook {
    private static String[] admin = {"root", "hadoop", "hive", "ljbao"};  // Hive administrators allowed to run admin-only statements

    @Override
    public ASTNode preAnalyze(HiveSemanticAnalyzerHookContext context,
                              ASTNode ast) throws SemanticException {
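        // Only the statement types listed below are restricted; everything else passes through untouched.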
        switch (ast.getToken().getType()) {
            case HiveParser.TOK_CREATEDATABASE:
            case HiveParser.TOK_DROPDATABASE:
            case HiveParser.TOK_CREATEROLE:
            case HiveParser.TOK_DROPROLE:
            case HiveParser.TOK_GRANT:
            case HiveParser.TOK_REVOKE:
            case HiveParser.TOK_GRANT_ROLE:
            case HiveParser.TOK_REVOKE_ROLE:
            case HiveParser.TOK_CREATETABLE:
                String userName = null;
                if (SessionState.get() != null
                        && SessionState.get().getAuthenticator() != null) {
                    userName = SessionState.get().getAuthenticator().getUserName();
                }
                if (!admin[0].equalsIgnoreCase(userName)
                        && !admin[1].equalsIgnoreCase(userName)
                        && !admin[2].equalsIgnoreCase(userName)
                        && !admin[3].equalsIgnoreCase(userName)) {
                    throw new SemanticException(userName
                            + " can't use ADMIN options, except " + admin[0] + "," + admin[1] + ","
                            + admin[2] + "," + admin[3] + ".");
                }
                break;
            default:
                break;
        }
        return ast;
    }


    // Simple local test of the admin check: "root1" is not in the admin list,
    // so the SemanticException below is expected.
    public static void main(String[] args) throws SemanticException {
        String[] admin = {"admin", "root"};
        String userName = "root1";
        if (!admin[0].equalsIgnoreCase(userName) && !admin[1].equalsIgnoreCase(userName)) {
            throw new SemanticException(userName
                    + " can't use ADMIN options, except " + admin[0] + ","
                    + admin[1] + ".");
        }
    }
}

Step 3: Build the jar and deploy it to Hive's lib directory

Set up the build environment before packaging; I built the jar directly on the server.

First copy the jars that the code above depends on into the $HIVE_HOME/lib folder:

cp /home/hadoop/hadoop3.1/share/hadoop/common/hadoop-common-3.0.2.jar /home/hive/hive3.1/lib

Build the jar:

javac -Djava.ext.dirs="/home/hive/hive3.1/lib" HiveAdmin.java -d .
jar cf hive-admin.jar com
# After packaging, copy hive-admin.jar into Hive's lib directory
cp hive-admin.jar /home/hive/hive3.1/lib
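
Before deploying, it is worth confirming the class actually ended up in the archive; a quick check with the standard jar tool:

jar tf hive-admin.jar
# expect to see com/hive/HiveAdmin.class in the listing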

Step 4: Restart the Hive services

hive --service metastore > metastore.log 2>&1 &
hive --service hiveserver2 > hiveserver2.log 2>&1 &
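
Once both services are back up, reconnecting through beeline as one of the admin users is a quick sanity check; a sketch assuming HiveServer2 is still listening on the default port 10000 used earlier:

beeline -u jdbc:hive2://localhost:10000 -n root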

With the above configuration in place, the four users listed in the hook (root, hadoop, hive, ljbao) can perform grants and other admin operations; these four users are treated as admins by default.