Download and install MySQL
- Download and install MySQL's official Yum repository
wget http://dev.mysql.com/get/mysql57-community-release-el7-10.noarch.rpm
- The command above downloads the repository package (roughly 25 KB); once it is in place, everything can be installed with yum.
yum -y install mysql57-community-release-el7-10.noarch.rpm
- Next, install MySQL itself with yum
yum -y install mysql-community-server # this can take quite a while, please be patient
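To make sure everything landed, you can quickly list the installed packages (exact versions may differ on your system):
rpm -qa | grep mysql-community # should show the server, client, common and libs packages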
- Start MySQL
[root@BrianZhu /]# systemctl start mysqld.service
- Check that MySQL is running
[root@BrianZhu /]# systemctl status mysqld.service
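Optionally, have MySQL start automatically at boot:
systemctl enable mysqld.service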
- Look up the initial root password
grep "password" /var/log/mysqld.log
- Change the root password
mysql -uroot -p # you will be prompted for the password after pressing Enter
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'new password';
# The new password must satisfy the default password policy or it will be rejected; to allow a shorter, simpler password, relax the validation rules as follows
mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.00 sec)
mysql> set global validate_password_length=1;
Query OK, 0 rows affected (0.00 sec)
# now a simple password will be accepted
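To confirm the relaxed rules took effect, you can inspect the validate_password variables:
mysql> SHOW VARIABLES LIKE 'validate_password%';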
Configure the Hive metastore database
[root@master app]# mysql -uroot -prootroot
mysql> create user 'hive' identified by 'hive'; -- create an account with username hive and password hive
or
mysql> create user 'hive'@'%' identified by 'hive'; -- the same account, with the host part given explicitly as %
##############################################################
mysql> GRANT ALL PRIVILEGES ON *.* to 'hive'@'%' IDENTIFIED BY 'hive' WITH GRANT OPTION; -- grant privileges to the hive user connecting from any host (%)
mysql> GRANT ALL PRIVILEGES ON *.* to 'hive'@'master' IDENTIFIED BY 'hive' WITH GRANT OPTION; -- grant privileges to the hive user connecting from master
mysql> GRANT ALL PRIVILEGES ON *.* to 'hive'@'localhost' IDENTIFIED BY 'hive' WITH GRANT OPTION; -- grant privileges to the hive user on localhost (strictly speaking optional)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> select user,host,authentication_string from mysql.user; -- MySQL 5.7 renamed the old password column to authentication_string
mysql> exit;
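Before moving on, it is worth checking that the new account can actually log in (using the master hostname from this cluster):
mysql -uhive -phive -h master -e "show databases;" # should connect and list at least information_schema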
Extract Hive and configure environment variables
- Extract
tar -zxvf apache-hive-2.3.4-bin.tar.gz
- Rename
mv apache-hive-2.3.4-bin hive-2.3.4
- Configure environment variables
vi /etc/profile # edit the system-wide profile
# add the following
HIVE_HOME=/opt/hive-2.3.4
PATH=$PATH:$HIVE_HOME/bin
export PATH HIVE_HOME
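The new variables only take effect in fresh shells unless you reload the profile:
source /etc/profile # apply the changes to the current shell
echo $HIVE_HOME # should print /opt/hive-2.3.4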
Configure the Hive configuration files
Edit the hive-env.sh file
cd conf
cp hive-env.sh.template hive-env.sh
vi hive-env.sh
################# add the following, then save with :wq
JAVA_HOME=/opt/jdk1.8.0_181
HADOOP_HOME=/opt/hadoop-2.7.7
HIVE_HOME=/opt/hive-2.3.4
export HIVE_CONF_DIR=$HIVE_HOME/conf
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$HADOOP_HOME/lib:$HIVE_HOME/lib
- Edit hive-site.xml
Create the file with vi hive-site.xml, paste in the following content, and save
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://hservice:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateTables</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateColumns</name>
<value>true</value>
</property>
<!-- Location of the Hive warehouse on HDFS -->
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/hive</value>
<description>location of default database for the warehouse</description>
</property>
<!-- Temporary storage location for added resources -->
<property>
<name>hive.downloaded.resources.dir</name>
<value>/opt/hive-2.3.4/tmp/resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<!-- Before Hive 0.9, hive.exec.dynamic.partition had to be set to true explicitly; from 0.9 on it defaults to true -->
<property>
<name>hive.exec.dynamic.partition</name>
<value>true</value>
</property>
<property>
<name>hive.exec.dynamic.partition.mode</name>
<value>nonstrict</value>
</property>
<!-- Change the log locations -->
<property>
<name>hive.exec.local.scratchdir</name>
<value>/opt/hive-2.3.4/tmp/logs</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.querylog.location</name>
<value>/opt/hive-2.3.4/tmp/HiveRunLog</value>
<description>Location of Hive run time structured log file</description>
</property>
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/opt/hive-2.3.4/tmp/OpertitionLog</value>
<description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
<description>This is the port the Hive Web Interface will listen on</description>
</property>
<!-- HiveServer2 no longer needs the hive.metastore.local option: if hive.metastore.uris is empty the metastore runs locally, otherwise it is remote. For a remote metastore, just set hive.metastore.uris -->
<!-- property>
<name>hive.metastore.uris</name>
<value>thrift://hservice:9083</value>
<description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property -->
<property>
<name>hive.server2.thrift.bind.host</name>
<value>hservice</value>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.thrift.http.port</name>
<value>10001</value>
</property>
<property>
<name>hive.server2.thrift.http.path</name>
<value>cliservice</value>
</property>
<!-- HiveServer2 web UI -->
<property>
<name>hive.server2.webui.host</name>
<value>hservice</value>
</property>
<property>
<name>hive.server2.webui.port</name>
<value>10002</value>
</property>
<property>
<name>hive.scratch.dir.permission</name>
<value>755</value>
</property>
<property>
<name>spark.driver.extraJavaOptions</name>
<value>-XX:PermSize=128M -XX:MaxPermSize=512M</value>
</property>
</configuration>
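The configuration above points at several local tmp directories and at /hive on HDFS; creating them up front avoids errors at first startup (a small sketch, assuming Hadoop is already running and hdfs is on the PATH):
mkdir -p /opt/hive-2.3.4/tmp/{resources,logs,HiveRunLog,OpertitionLog} # local scratch dirs referenced in hive-site.xml
hdfs dfs -mkdir -p /hive # warehouse directory on HDFS
hdfs dfs -chmod g+w /hive # make it writable for Hive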
Configure the log location by editing the log4j2 properties file
cp hive-log4j2.properties.template hive-log4j2.properties
vim hive-log4j2.properties
# point the hive log directory at ${HIVE_HOME}/tmp
property.hive.log.dir = /opt/hive-2.3.4/tmp
# create the tmp directory
mkdir ${HIVE_HOME}/tmp
Configure the hive-config.sh file
cd /opt/hive-2.3.4/bin
vi hive-config.sh
################ add the following
export JAVA_HOME=/opt/jdk1.8.0_181
export HADOOP_HOME=/opt/hadoop-2.7.7
export HIVE_HOME=/opt/hive-2.3.4
HIVE_CONF_DIR=$HIVE_HOME/conf
Copy the jar files
Check the MySQL JDBC driver and copy the jline jar
# switch to Hive's lib directory
[root@hservice lib]# ls mysql*
mysql-connector-java-5.1.41-bin.jar  mysql-metadata-storage-0.9.2.jar
# the JDBC driver jar listed above must be present in Hive's lib directory
[root@hservice lib]# ls jl*
jline-2.12.jar
## copy the jline jar into Hadoop's YARN lib directory
cp jline-2.12.jar /opt/hadoop-2.7.7/share/hadoop/yarn/lib/
Initialize the Hive metastore schema
Run the following from the hive/bin directory
schematool -dbType mysql -initSchema ## initialize the metastore schema in MySQL; run from hive's bin directory
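If initialization succeeded, the metastore tables now exist in the MySQL database named hive (per the JDBC URL configured earlier); a quick check:
mysql -uhive -phive -e "SHOW TABLES FROM hive;" # should list metastore tables such as DBS and TBLS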
Copy Hive to the worker nodes with scp
scp -r hive-2.3.4/ root@node1:/opt
scp -r hive-2.3.4/ root@node2:/opt
Configure the environment variables on the worker nodes as well; a sketch follows
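A minimal sketch of pushing the same /etc/profile entries to both workers over ssh (assuming the node1/node2 hostnames used above):
for node in node1 node2; do
  ssh root@$node "echo 'export HIVE_HOME=/opt/hive-2.3.4' >> /etc/profile; echo 'export PATH=\$PATH:\$HIVE_HOME/bin' >> /etc/profile"
done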
Start Hive
Start the metastore service
# start Hadoop first
start-all.sh
# the metastore service must be running before Hive is launched, otherwise Hive will report an error
./hive --service metastore & # run from hive's bin directory; & keeps the service in the background
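To confirm the metastore actually came up, check that it is listening on its default port 9083 (open a second terminal if you started it in the foreground):
netstat -tlnp | grep 9083 # the metastore thrift service listens on 9083 by default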
- Start hive
Simply type hive to launch the CLI
# you should see output similar to the following
[root@hservice opt]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive-2.3.4/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Logging initialized using configuration in file:/opt/hive-2.3.4/conf/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show databases;
OK
default
Time taken: 8.192 seconds, Fetched: 1 row(s)
hive> show tables;
OK
Time taken: 0.11 seconds
hive>