
Hive HiveServer2 and Web UI usage

HiveServer2 (HS2) is a server interface that enables remote clients to execute queries against Hive and retrieve the results. The current implementation, based on Thrift RPC, is an improved version of HiveServer that supports multi-client concurrency and authentication. It is designed to provide better support for open API clients like JDBC and ODBC.
Step 1 - Change the directory to $HIVE_HOME/bin (here /usr/local/hive/bin)
$ cd $HIVE_HOME/bin
Step 2 - Start hiveserver2 daemon
$ hiveserver2
OR
$ hive --service hiveserver2 &
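The background start above can be made a bit more robust by detaching the daemon from the terminal with nohup and recording its PID for a later shutdown. A minimal sketch, assuming $HIVE_HOME is set; the output and PID file paths under /tmp are arbitrary choices, not Hive conventions:

```shell
# Start a long-running daemon detached from the terminal and record its PID.
# /tmp/hiveserver2.out and /tmp/hiveserver2.pid are arbitrary paths (assumptions).
start_detached() {
  nohup "$@" > /tmp/hiveserver2.out 2>&1 &
  echo $! > /tmp/hiveserver2.pid
}

# Usage: start_detached "$HIVE_HOME/bin/hiveserver2"
```

The saved PID file makes the shutdown step later in this post a one-liner: `kill $(cat /tmp/hiveserver2.pid)`.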
Step 3 - Browse the HiveServer2 web UI at the following URL
http://localhost:10002/hiveserver2.jsp
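To check from a script whether the web UI is reachable, you can request the page and inspect the HTTP status code. A sketch using curl; `ui_status` is a hypothetical helper, and the defaults assume the UI runs on localhost at its standard port 10002:

```shell
# Print the HTTP status code of the HiveServer2 web UI (200 means it is up;
# curl prints 000 when the connection is refused).
ui_status() {
  curl -s -o /dev/null -w "%{http_code}" "http://${1:-localhost}:${2:-10002}/"
}

# Usage: ui_status localhost 10002
```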
Step 4 - You can find the Hive logs in
/tmp/hduser/hive.log
To stop the hiveserver2 daemon, find its process ID and kill it (29707 below is the example PID from the ps output; use the one from your system)
$ ps -ef | grep -i hiveserver2
$ kill -9 29707
If a stale PID file is left behind, remove it as well
$ rm -rf /var/run/hive/hive-server.pid
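The two commands above can be combined into one helper that looks up the PID by pattern, tries a graceful SIGTERM first, and escalates to SIGKILL only if the process survives. A sketch; `stop_daemon` is a hypothetical helper, and the pattern argument is whatever matches your hiveserver2 command line:

```shell
# Find the first PID whose command line matches the pattern, send SIGTERM,
# and escalate to SIGKILL only if the process is still alive a second later.
stop_daemon() {
  # grep -vw "$$" skips the current shell so we never match ourselves
  pid=$(pgrep -f "$1" | grep -vw "$$" | head -n 1)
  [ -z "$pid" ] && { echo "no process matching: $1"; return 1; }
  kill "$pid"
  sleep 1
  kill -0 "$pid" 2>/dev/null && kill -9 "$pid"
  return 0
}

# Usage: stop_daemon hiveserver2
```

Preferring SIGTERM gives the server a chance to close client sessions cleanly; `kill -9` should be the last resort, not the default.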
HiveServer2 supports several authentication mechanisms. They are configured through the hive.server2.authentication property in hive-site.xml:
<property>
    <name>hive.server2.authentication</name>
    <value>NONE</value>
    <description>
      Expects one of [nosasl, none, ldap, kerberos, pam, custom].
      Client authentication types.
        NONE: no authentication check
        LDAP: LDAP/AD based authentication
        KERBEROS: Kerberos/GSSAPI authentication
        CUSTOM: Custom authentication provider
                (Use with property hive.server2.custom.authentication.class)
        PAM: Pluggable authentication module
        NOSASL:  Raw transport
    </description>
  </property>
1) If hive.server2.authentication is set to "NONE" in $HIVE_HOME/conf/hive-site.xml, connect beeline with the URL below
!connect jdbc:hive2://
2) If the "hive.server2.authentication" property in $HIVE_HOME/conf/hive-site.xml is set to "SASL", connect beeline with the URL below
!connect jdbc:hive2://<host>:<port>/<db>
3) If "hive.server2.authentication" is set to "NOSASL", connect beeline as below.
!connect jdbc:hive2://<host>:<port>/<db>;auth=noSasl
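The three URL forms above can be captured in a small helper that builds the beeline connection string for a given authentication setting. A sketch; `hs2_url` is a hypothetical helper, not part of Hive, and the localhost/10000/default fallbacks are assumptions:

```shell
# Build the beeline JDBC URL for a given hive.server2.authentication value.
# Arguments: auth [host] [port] [db]; defaults are assumptions for a local setup.
hs2_url() {
  auth=$1; host=${2:-localhost}; port=${3:-10000}; db=${4:-default}
  case "$auth" in
    NONE)   echo "jdbc:hive2://" ;;                                # case 1 above
    NOSASL) echo "jdbc:hive2://$host:$port/$db;auth=noSasl" ;;     # case 3 above
    *)      echo "jdbc:hive2://$host:$port/$db" ;;                 # case 2 above
  esac
}

# Usage: beeline -u "$(hs2_url NOSASL hivehost 10000 mydb)"
```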
Make sure the datanucleus.schema.autoCreateAll property is set to true in hive-site.xml
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
  <description>Creates the necessary schema on startup if it does not exist. Set this to false after the schema has been created once.</description>
</property>
