Compiling and Installing Hue, and Configuring the Related Hadoop Components

Hue Installation and Deployment

Hue is an open-source Apache Hadoop UI system built on the Python web framework Django. It lets developers interact with a Hadoop cluster from a web console in the browser to analyze and process data, for example browsing and manipulating files on HDFS or running MapReduce jobs.

This article covers installing Hue 3.11.0 on CentOS 6.5 and configuring the related Hadoop components.

Install dependencies

yum install -y ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi gcc gcc-c++ krb5-devel libtidy libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel openssl-devel gmp-devel libffi-devel unzip
yum install -y cyrus-sasl-plain
yum install -y python-simplejson python-setuptools rsync saslwrapper-devel libyaml-devel

Compile Hue

Download Hue

Download hue-3.11.0.tgz
Extract it: tar -zxvf hue-3.11.0.tgz

Or download from GitHub:

git clone -b branch-3.11.0 https://github.com/cloudera/hue.git branch-3.11.0

mv branch-3.11.0 hue-3.11.0
Build option 1:
cd hue-3.11.0
make apps
When the build finishes, a build directory (among others) is generated under the current directory, and hue-3.11.0 itself can then be used as the install directory.

Build option 2:
make install PREFIX=/usr/local
This generates a hue directory under /usr/local; use that hue directory as the installation.
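
To confirm the build actually produced a usable environment, the generated virtualenv can be inspected; a minimal sketch, assuming build option 1 (with option 2, substitute /usr/local/hue for hue-3.11.0):

# The build should have created a Python virtualenv with the hue and supervisor entry points
ls hue-3.11.0/build/env/bin/hue hue-3.11.0/build/env/bin/supervisor

# hue is a Django management wrapper; listing its subcommands confirms the environment loads
hue-3.11.0/build/env/bin/hue help | head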

Configure Hadoop's HttpFS service

If HDFS has HA enabled, only the HttpFS service can be used; otherwise WebHDFS can be used as well.

HttpFS service configuration:

Add the following to core-site.xml:

<property>
  <name>hadoop.proxyuser.httpfs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>

Add the following to httpfs-site.xml:

<property>
  <name>httpfs.proxyuser.$username.hosts</name>
  <value>*</value>
</property>
<property>
  <name>httpfs.proxyuser.$username.groups</name>
  <value>*</value>
</property>

Here $username is hadoop, because all of the services are installed and run as the hadoop user, so the actual property names become httpfs.proxyuser.hadoop.hosts and httpfs.proxyuser.hadoop.groups.

Also add the following to core-site.xml:

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>

<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

Add the following to hdfs-site.xml:

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
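
After adding the proxy-user entries, the running NameNode and ResourceManager can usually reload them without a full restart (dfs.webhdfs.enabled itself still requires restarting HDFS); a hedged sketch, run as the hadoop user:

# Reload the proxyuser/superuser settings from core-site.xml on the NameNode
hdfs dfsadmin -refreshSuperUserGroupsConfiguration

# Reload the same settings on the ResourceManager
yarn rmadmin -refreshSuperUserGroupsConfiguration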

Start the HttpFS service for HDFS:

/home/hadoop/apache-hadoop/hadoop/sbin/httpfs.sh start

Test: visit http://namenode_address:14000/webhdfs/v1
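
The same test can be scripted with curl; a minimal check, assuming pseudo (simple) authentication and the hadoop user:

# Listing the HDFS root through HttpFS should return a JSON FileStatuses document
curl "http://namenode_address:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hadoop"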

Modify the Hue configuration

Edit Hue's configuration file /home/hadoop/hue-3.11.0/desktop/conf/hue.ini:

[desktop]

# Webserver listens on this address and port
http_host=192.168.110.160
http_port=8008

# Time zone name
##time_zone=America/Los_Angeles
time_zone=Asia/Shanghai

# Webserver runs as this user
server_user=hadoop
server_group=hadoop

# This should be the Hue admin and proxy user
default_user=hadoop

# This should be the hadoop cluster admin
default_hdfs_superuser=hadoop

# Default encoding for site data
default_site_encoding=utf-8


[[database]]
# Note for MariaDB use the 'mysql' engine.
engine=mysql
host=mysql-test.shining.com
port=3306
user=hadoop
password=123456
# Execute this script to produce the database password. This will be used when 'password' is not set.
password_script=/path/script
name=db_hue

[hadoop]

[[hdfs_clusters]]

[[[default]]]
# Enter the filesystem uri
fs_defaultfs=hdfs://testhadoop:8020  ## the value of fs.defaultFS in core-site.xml
# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.
webhdfs_url=http://192.168.110.159:14000/webhdfs/v1  ## use the HttpFS service
# Directory of the Hadoop configuration
hadoop_conf_dir=$HADOOP_HOME/etc/hadoop
[[yarn_clusters]]

[[[default]]]
# Enter the host on which you are running the ResourceManager
resourcemanager_host=192.168.110.159

# The port where the ResourceManager IPC listens on
resourcemanager_port=8032
# URL of the ResourceManager API
resourcemanager_api_url=http://192.168.53.100:8088
# URL of the HistoryServer API
history_server_api_url=http://192.168.53.101:19888


# [[[ha]]]
# Resource Manager logical name (required for HA)
## logical_name=my-rm-name

# Un-comment to enable
## submit_to=True

# URL of the ResourceManager API
## resourcemanager_api_url=http://localhost:8088

[[mapred_clusters]]

[[[default]]]
# Enter the host on which you are running the Hadoop JobTracker
jobtracker_host=192.168.110.160
[beeswax]

# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=192.168.110.160

# Port where HiveServer2 Thrift server runs on.
hive_server_port=10000

# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/home/hadoop/apache-hadoop/hive/conf

[spark]
# Host address of the Livy Server.
## livy_server_host=localhost

# Port of the Livy Server.
## livy_server_port=8998

# Configure Livy to start in local 'process' mode, or 'yarn' workers.
## livy_server_session_kind=yarn

# Host of the Sql Server
## sql_server_host=localhost

# Port of the Sql Server
## sql_server_port=10000
[hbase]
# Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
# Use full hostname with security.
# If using Kerberos we assume GSSAPI SASL, not PLAIN.
#hbase_clusters=(Cluster|192.168.53.100:9090)

# HBase configuration directory, where hbase-site.xml is located.
## hbase_conf_dir=/home/hadoop/apache-hadoop/hbase/conf

# Hard limit of rows or columns per row fetched before truncating.
## truncate_limit = 500

# 'buffered' is the default of the HBase Thrift Server and supports security.
# 'framed' can be used to chunk up responses,
# which is useful when used in conjunction with the nonblocking server in Thrift.
## thrift_transport=buffered
Create the Hue database
create database db_hue DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
grant all PRIVILEGES on db_hue.* to 'hadoop'@'192.168.53.101' IDENTIFIED BY '123456' with grant option;
grant all PRIVILEGES on db_hue.* to 'hadoop'@'%' IDENTIFIED BY '123456' with grant option;
FLUSH PRIVILEGES;
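
Before initializing Hue, it is worth confirming that the Hue host can reach this database with exactly the credentials configured in hue.ini; a small check, assuming the mysql client is installed on the Hue machine:

# Should list db_hue if the grants above took effect
mysql -h mysql-test.shining.com -P 3306 -u hadoop -p123456 -e "show databases like 'db_hue';"
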
Initialize Hue (run in order)
apache-hadoop/hue/build/env/bin/hue syncdb
(you will be prompted to create an admin username and password)
apache-hadoop/hue/build/env/bin/hue migrate
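
If both commands succeed, the Hue tables should now exist in MySQL rather than in the default SQLite file; a quick hedged check, reusing the credentials above:

# The database should now contain Hue's Django tables
mysql -h mysql-test.shining.com -P 3306 -u hadoop -p123456 db_hue -e "show tables;" | head
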
Start and stop

start

apache-hadoop/hue/build/env/bin/supervisor >/dev/null 2>&1 &

stop

ps -ef |grep hue
kill -9 pid
Access
http://192.168.110.160:8008
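
Before opening a browser, a quick check that the web server is answering (a sketch; the exact response code may vary between Hue versions):

# An HTTP 200, or a 302 redirect to the login page, means Hue is serving requests
curl -I http://192.168.110.160:8008/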
