# Search results: 4 posts matching "hadoop"
## Importing Hive Data into Elasticsearch

2022-04-06

### 0. Prerequisites

- Elasticsearch
- Hive

### 1. Install the plugin

**1. Download the version matching your cluster**

https://www.elastic.co/cn/downloads/hadoop

JSON support for Hive:

- http://www.congiu.net/hive-json-serde/1.3.8/hdp23/json-serde-1.3.8-jar-with-dependencies.jar
- http://www.congiu.net/hive-json-serde/1.3.8/hdp23/json-udf-1.3.8-jar-with-dependencies.jar

```bash
wget -r https://artifacts.elastic.co/downloads/elasticsearch-hadoop/elasticsearch-hadoop-7.17.1.zip
```

**2. Install the unzip tool**

```bash
yum install -y unzip
```

**3. Extract the archive**

```bash
unzip elasticsearch-hadoop-7.17.1.zip
```

**4. Locate the jar file**

```bash
cd elasticsearch-hadoop-7.17.1/dist/
ll
# Only this jar needs to be added; hadoop.jar is not required:
# elasticsearch-hadoop-hive-7.17.1.jar
# /root/elasticsearch/elasticsearch-hadoop-7.17.1/dist/elasticsearch-hadoop-hive-7.17.1.jar
```

**5. Add the jar inside Hive (`add jar` only lasts for the current session)**

```bash
hive
# "add jar" only applies to the current session; it must be re-added next time
hive> add jar /root/elasticsearch/elasticsearch-hadoop-7.17.1/dist/elasticsearch-hadoop-hive-7.17.1.jar;
# Added [/root/elasticsearch/elasticsearch-hadoop-7.17.1/dist/elasticsearch-hadoop-hive-7.17.1.jar] to class path
# Added resources: [/root/elasticsearch/elasticsearch-hadoop-7.17.1/dist/elasticsearch-hadoop-hive-7.17.1.jar]
```

### 2. Map the Hive tables

**1. Create the Hive staging table**

```sql
hive> create database 5ewb;
hive> use 5ewb;
hive> create table `inwb` (
        `phone` bigint,
        `uid` bigint
      ) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
OK
Time taken: 0.69 seconds
```

**2. Load the data**

```sql
hive> use 5ewb;
OK
hive> load data local inpath '/root/shegongku/wb5e.txt' into table `inwb`;
Loading data to table 5ewb.inwb
OK
Time taken: 97.234 seconds
hive> select * from `inwb` limit 10;
OK
NULL         NULL
15890981333  5350176154
15944850489  6057766172
17073799004  6547208199
18392710332  3754369810
18047430444  6444293239
13762520188  3866009977
18408812716  6134347857
18477461107  6031338428
13647595899  6796854079
# Drop the rows that came from blank lines in the source file
hive> insert overwrite table `inwb` select * from `inwb` where phone is not null;
```

**3. Create the Hive mapping table**

```sql
hive> use 5ewb;
OK
hive> CREATE TABLE `outwb` (
        `phone` bigint,
        `uid` bigint
      ) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
      STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
      TBLPROPERTIES(
        'es.resource' = 'wb/_doc',
        'es.index.auto.create' = 'true',
        'es.nodes' = 'http://10.107.116.11',
        'es.port' = '9200',
        'es.http.timeout' = '120m',
        'es.nodes.wan.only' = 'true');
OK
Time taken: 0.252 seconds
```

### 3. Import the data

```sql
hive> use 5ewb;
OK
Time taken: 0.176 seconds
hive> insert overwrite table `outwb` select * from `inwb`;
```
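After the final `insert overwrite` returns, it is worth confirming the documents actually reached Elasticsearch. A minimal check with curl against the standard `_count` and `_search` APIs (the node address comes from `es.nodes` above, the index name `wb` from `es.resource`):

```bash
# Count documents in the target index; this should match the row count of `inwb`
curl -s 'http://10.107.116.11:9200/wb/_count?pretty'

# Sample a few documents to verify the phone/uid fields mapped as expected
curl -s 'http://10.107.116.11:9200/wb/_search?size=3&pretty'
```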
## Installing an ELK Cluster on CentOS 7

2022-04-03

### 0. Prerequisites

- OS: CentOS 7
- Distributed Hadoop deployment complete
- Hive deployed

Files needed:

- elasticsearch-7.17.1-linux-x86_64.tar.gz
- kibana-7.17.1-linux-x86_64.tar.gz
- logstash-7.17.1-linux-x86_64.tar.gz

### 1. Install Elasticsearch on all nodes

**1. Extract the archive**

```bash
tar -xzf elasticsearch-7.17.1-linux-x86_64.tar.gz -C /
```

**2. Add the Elasticsearch environment variables (run on every machine)**

```bash
vim /etc/profile
# Append at the end
export ELASTICSEARCH_HOME=/elasticsearch-7.17.1
export PATH=$PATH:$ELASTICSEARCH_HOME/bin
# Reload the environment
source /etc/profile
```

**3. Point Elasticsearch at its bundled JDK**

```bash
cd /elasticsearch-7.17.1
vim bin/elasticsearch-env
# Insert the Java environment variable on the second line
JAVA_HOME="/elasticsearch-7.17.1/jdk"
```

**4. Change the garbage collector parameter**

```bash
vim config/jvm.options
# Around line 52
##### before #####
-XX:+UseConcMarkSweepGC
##### after  #####
-XX:+UseG1GC
```

**5. Edit the main config file to form the cluster**

```yaml
# vim config/elasticsearch.yml -- this is YAML, mind the formatting
cluster.name: es
node.name: node-x                # node name; different on each machine (1-3)
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["master", "slave1", "slave2"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
# Enable CORS so third-party plugins can query ES
http.cors.enabled: true
http.cors.allow-origin: "*"
```

**6. Raise the limits for the unprivileged user (run on every machine)**

```bash
vim /etc/security/limits.conf
# Append at the end
es soft nofile 65535
es hard nofile 65535
es soft nproc 4096
es hard nproc 4096
# End of file
```

**7. Raise the maximum virtual memory areas (run on every machine)**

```bash
vim /etc/sysctl.conf
# Append at the end
vm.max_map_count = 262144

# Raise the open-file limit for the current shell
ulimit -n 65536
# Manually reload the kernel parameters
sysctl -p
```

**8. Copy the installation to the other nodes**

```bash
cd /
scp -r elasticsearch-7.17.1 root@slave1:/
scp -r elasticsearch-7.17.1 root@slave2:/
source /etc/profile
```

**9. Adjust the main config on the other two machines**

```bash
###### slave1 ######
cd /elasticsearch-7.17.1
vim config/elasticsearch.yml
node.name: node-2                # node name; different on each machine (1-3)

###### slave2 ######
cd /elasticsearch-7.17.1
vim config/elasticsearch.yml
node.name: node-3                # node name; different on each machine (1-3)
```

### 2. Create a new user, set its password, change ownership

Elasticsearch refuses to start as root, so create a dedicated user. Run on all three machines:

```bash
# Create the user
useradd es
# Set the password to 000000
passwd es
# Change file ownership
cd /
chown -Rf es:es /elasticsearch-7.17.1/
```

### 3. Switch user and start

Run on all three machines:

```bash
# Switch user
su es
# Start Elasticsearch
elasticsearch
```

### 4. Verify the cluster is up

Visit `http://<any node IP>:9200/_cluster/health?pretty`:

```json
{
  "cluster_name" : "es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 3,
  "active_shards" : 6,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
```

`status` values: `red` means the cluster has failed; `yellow` means primary shards are available but replicas are not; `green` means the cluster is healthy, with all shards and replicas available.

### 5. Stop the foreground instance and restart in the background

Press Ctrl+C to terminate, then start in the background:

```bash
elasticsearch -d
```

### 6. Install Kibana

Single node.

**1. Extract the archive**

```bash
tar -xzf kibana-7.17.1-linux-x86_64.tar.gz -C /
```

**2. Edit the config file**

```yaml
# cd /kibana-7.17.1-linux-x86_64 && vim config/kibana.yml
server.host: "0.0.0.0"
server.name: "master"                              # hostname
elasticsearch.hosts: ["http://10.107.116.10:9200"] # ES address
kibana.index: ".kibana"
i18n.locale: "zh-CN"                               # Chinese UI
#elasticsearch.username: "admin"                   # account
#elasticsearch.password: "000000"                  # password
```

**3. Create the kibana user and change ownership**

```bash
# Create the user
useradd kibana
passwd kibana   # set the password to 000000
# Change ownership
cd /
chown -Rf kibana:kibana /kibana-7.17.1-linux-x86_64/
```

**4. Start Kibana in the background**

```bash
su kibana
cd /kibana-7.17.1-linux-x86_64/
nohup bin/kibana >> /dev/null 2>&1 &
exit
```

**5. Verify Kibana is up**

Visit http://10.107.116.10:5601/

### 7. Install Logstash

Single node.

**1. Extract the archive**

```bash
tar -xzf logstash-7.17.1-linux-x86_64.tar.gz -C /
```

**2. Prepare the patterns**

```bash
# Create the patterns directory
cd /logstash-7.17.1/
mkdir patterns
# Create the java pattern file
vim patterns/java
```

```
# user-center
MYAPPNAME ([0-9a-zA-Z_-]*)
# RMI TCP Connection(2)-127.0.0.1
MYTHREADNAME ([0-9a-zA-Z._-]|\(|\)|\s)*
```

**3. Create the config file `logstash.conf`**

Before using the config below, adjust two things: the two `patterns_dir` entries in the filter section, and the Elasticsearch credentials if your cluster has authentication enabled.

```bash
vim config/logstash.conf
```

```conf
input {
  beats {
    port => 5044
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
  if [fields][docType] == "sys-log" {
    grok {
      patterns_dir => ["/logstash-7.17.1/patterns"]
      match => { "message" => "\[%{NOTSPACE:appName}:%{IP:serverIp}:%{NOTSPACE:serverPort}\] %{TIMESTAMP_ISO8601:logTime} %{LOGLEVEL:logLevel} %{WORD:pid} \[%{MYAPPNAME:traceId}\] \[%{MYTHREADNAME:threadName}\] %{NOTSPACE:classname} %{GREEDYDATA:message}" }
      overwrite => ["message"]
    }
    date {
      match => ["logTime","yyyy-MM-dd HH:mm:ss.SSS Z"]
    }
    date {
      match => ["logTime","yyyy-MM-dd HH:mm:ss.SSS"]
      target => "timestamp"
      locale => "en"
      timezone => "+08:00"
    }
    mutate {
      remove_field => "logTime"
      remove_field => "@version"
      remove_field => "host"
      remove_field => "offset"
    }
  }
  if [fields][docType] == "point-log" {
    grok {
      patterns_dir => ["/logstash-7.17.1/patterns"]
      match => { "message" => "%{TIMESTAMP_ISO8601:logTime}\|%{MYAPPNAME:appName}\|%{WORD:resouceid}\|%{MYAPPNAME:type}\|%{GREEDYDATA:object}" }
    }
    kv {
      source => "object"
      field_split => "&"
      value_split => "="
    }
    date {
      match => ["logTime","yyyy-MM-dd HH:mm:ss.SSS Z"]
    }
    date {
      match => ["logTime","yyyy-MM-dd HH:mm:ss.SSS"]
      target => "timestamp"
      locale => "en"
      timezone => "+08:00"
    }
    mutate {
      remove_field => "message"
      remove_field => "logTime"
      remove_field => "@version"
      remove_field => "host"
      remove_field => "offset"
    }
  }
  if [fields][docType] == "mysqlslowlogs" {
    grok {
      match => [
        "message", "^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id:\s+%{NUMBER:id}\n# Query_time: %{NUMBER:query_time}\s+Lock_time: %{NUMBER:lock_time}\s+Rows_sent: %{NUMBER:rows_sent}\s+Rows_examined: %{NUMBER:rows_examined}\nuse\s(?<dbname>\w+);\nSET\s+timestamp=%{NUMBER:timestamp_mysql};\n(?<query_str>[\s\S]*)",
        "message", "^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id:\s+%{NUMBER:id}\n# Query_time: %{NUMBER:query_time}\s+Lock_time: %{NUMBER:lock_time}\s+Rows_sent: %{NUMBER:rows_sent}\s+Rows_examined: %{NUMBER:rows_examined}\nSET\s+timestamp=%{NUMBER:timestamp_mysql};\n(?<query_str>[\s\S]*)",
        "message", "^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\n# Query_time: %{NUMBER:query_time}\s+Lock_time: %{NUMBER:lock_time}\s+Rows_sent: %{NUMBER:rows_sent}\s+Rows_examined: %{NUMBER:rows_examined}\nuse\s(?<dbname>\w+);\nSET\s+timestamp=%{NUMBER:timestamp_mysql};\n(?<query_str>[\s\S]*)",
        "message", "^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\n# Query_time: %{NUMBER:query_time}\s+Lock_time: %{NUMBER:lock_time}\s+Rows_sent: %{NUMBER:rows_sent}\s+Rows_examined: %{NUMBER:rows_examined}\nSET\s+timestamp=%{NUMBER:timestamp_mysql};\n(?<query_str>[\s\S]*)"
      ]
    }
    date {
      match => ["timestamp_mysql","yyyy-MM-dd HH:mm:ss.SSS","UNIX"]
    }
    date {
      match => ["timestamp_mysql","yyyy-MM-dd HH:mm:ss.SSS","UNIX"]
      target => "timestamp"
    }
    mutate {
      convert => ["query_time", "float"]
      convert => ["lock_time", "float"]
      convert => ["rows_sent", "integer"]
      convert => ["rows_examined", "integer"]
      remove_field => "message"
      remove_field => "timestamp_mysql"
      remove_field => "@version"
    }
  }
}
output {
  if [fields][docType] == "sys-log" {
    elasticsearch {
      hosts => ["http://10.107.116.10:9200"]
      index => "sys-log-%{+YYYY.MM.dd}"
      #user => "elastic"
      #password => "000000"
    }
  }
  if [fields][docType] == "point-log" {
    elasticsearch {
      hosts => ["http://10.107.116.11:9200"]
      index => "point-log-%{+YYYY.MM.dd}"
      routing => "%{type}"
      #user => "elastic"
      #password => "000000"
    }
  }
  if [fields][docType] == "mysqlslowlogs" {
    elasticsearch {
      hosts => ["http://10.107.116.12:9200"]
      index => "mysql-slowlog-%{+YYYY.MM.dd}"
      #user => "elastic"
      #password => "000000"
    }
  }
}
```

Then edit the Logstash settings file:

```yaml
# vim config/logstash.yml
api.http.host: 0.0.0.0
```

**4. Start Logstash in the background**

```bash
cd /logstash-7.17.1/
nohup bin/logstash -f config/logstash.conf &
```

**5. Verify it started**

```bash
ps -ef | grep logstash   # check the process
cat config/logstash.conf
```

**6. Troubleshooting**

If startup fails with the following error:

```
[2022-03-31T19:22:29,834][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
        at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby-complete-9.2.20.1.jar:?]
        at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby-complete-9.2.20.1.jar:?]
        at logstash_minus_7_dot_17_dot_1.lib.bootstrap.environment.<main>(/logstash-7.17.1/lib/bootstrap/environment.rb:94) ~[?:?]
```

remove the stale lock file and start again:

```bash
rm -rf /logstash-7.17.1/data/.lock
```
## Configuring Hive on Hadoop

2022-03-24

{cloud title="" type="default" url="http://pan.000081.xyz/%E5%8D%9A%E5%AE%A2/%E5%A4%A7%E6%95%B0%E6%8D%AE/Hive" password=""/}

### 0. Prerequisites

- A distributed Hadoop cluster is already deployed.

Files needed:

- mysql-5.7.26-1.el7.x86_64.rpm-bundle.tar
- apache-hive-2.3.4-bin.tar.gz
- mysql-connector-java-5.1.46.jar

### 1. Install MySQL

**1. Check for packages that conflict with MySQL**

```bash
rpm -qa | grep mariadb
# Output
mariadb-libs-5.5.56-2.el7.x86_64
```

**2. Remove the conflicting package**

```bash
rpm -ev --nodeps mariadb-libs-5.5.56-2.el7.x86_64
```

**3. Extract the MySQL bundle**

```bash
mkdir /mysql
tar -xf mysql-5.7.26-1.el7.x86_64.rpm-bundle.tar -C /mysql/
cd /mysql/
```

**4. Install the RPMs in order**

```bash
rpm -ivh mysql-community-common-5.7.26-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-5.7.26-1.el7.x86_64.rpm
rpm -ivh mysql-community-client-5.7.26-1.el7.x86_64.rpm
rpm -ivh mysql-community-server-5.7.26-1.el7.x86_64.rpm
```

**5. Start MySQL**

```bash
service mysqld start
```

**6. Find the initial password**

```bash
grep "password" /var/log/mysqld.log
# Output
2022-03-24T12:04:39.603157Z 1 [Note] A temporary password is generated for root@localhost: 2yEFsa!sd2S7
```

**7. Log in to MySQL and change the password**

```sql
mysql -uroot -p2yEFsa!sd2S7

-- I am setting a simple password, so relax the validation rules first
-- Lower validate_password_policy
mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.00 sec)
-- validate_password_length (minimum length) defaults to 8; lower it to 1
mysql> set global validate_password_length=1;
Query OK, 0 rows affected (0.00 sec)
-- Change the password
mysql> alter user 'root'@'localhost' identified by '000000';
Query OK, 0 rows affected (0.00 sec)
```

**8. Allow remote login**

```sql
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
```

### 2. Install Hive

**1. Extract Hive**

```bash
tar -xzf apache-hive-2.3.4-bin.tar.gz -C /
```

**2. Add environment variables**

```bash
vim /etc/profile
# Append
export HIVE_HOME=/apache-hive-2.3.4-bin
export PATH=$HIVE_HOME/bin:$PATH
# Reload the environment
source /etc/profile
```

### 3. Configuration

**1. Create the MySQL database**

```bash
mysql -uroot -p000000 -e "create database hive_db";
```

**2. Configure hive-site.xml**

```bash
cd /apache-hive-2.3.4-bin/conf
cp -r hive-default.xml.template hive-site.xml
vim hive-site.xml
```

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://master:3306/hive_db?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>000000</value>
</property>
<!-- MySQL driver. Note: if you switch to mysql-connector-java 8,
     change com.mysql.jdbc.Driver to com.mysql.cj.jdbc.Driver -->
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/apache-hive-2.3.4-bin/tmp</value>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/apache-hive-2.3.4-bin/tmp/${hive.session.id}_resources</value>
</property>
<property>
  <name>hive.server2.enable.doAs</name>
  <value>false</value>
</property>
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>
```

**3. Put the MySQL driver jar into Hive's lib directory**

```bash
pwd
/apache-hive-2.3.4-bin/lib
ls | grep mysql-connector-java-5.1.46.jar
mysql-connector-java-5.1.46.jar
```

**4. Start Hadoop**

```bash
start-all.sh
```

**5. Configure hive-env.sh**

```bash
cd /apache-hive-2.3.4-bin/conf
cp -r hive-env.sh.template hive-env.sh
vim hive-env.sh
# Append
export JAVA_HOME=/jdk1.8.0_191
export HADOOP_HOME=/hadoop-2.7.7
export HIVE_CONF_DIR=/apache-hive-2.3.4-bin/conf
export HIVE_AUX_JARS_PATH=/apache-hive-2.3.4-bin/lib
```

### 4. Initialize and start

**1. Delete the old jline jar (it conflicts with the newer jline shipped with Hive)**

```bash
rm -rf /hadoop-2.7.7/share/hadoop/yarn/lib/jline-0.9.94.jar
```

**2. Initialize Hive**

```bash
schematool -initSchema -dbType mysql
# Success looks like this:
schemaTool completed
```
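With the schema initialized, a quick end-to-end smoke test of the MySQL-backed metastore can save debugging later. A minimal sketch using `hive -e` (the `smoke` database name is made up for illustration):

```bash
# Round-trip a trivial DDL statement through the metastore
hive -e "create database if not exists smoke;
         use smoke;
         create table if not exists t1 (id int);
         show tables;"
# Remove the test database again
hive -e "drop database smoke cascade;"
```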
## Setting Up a Distributed Hadoop Cluster

2022-03-24

{cloud title="" type="default" url="http://pan.000081.xyz/%E5%8D%9A%E5%AE%A2/%E5%A4%A7%E6%95%B0%E6%8D%AE" password=""/}

### 0. Environment

- OS: CentOS 7, 64-bit
- CPU: 8 cores
- RAM: 8 GB
- Disk: 200 GB
- Machines: 3

| Hostname | IP address    |
| -------- | ------------- |
| master   | 10.107.116.10 |
| slave1   | 10.107.116.11 |
| slave2   | 10.107.116.12 |

### 1. Base configuration

**1. Configure a static IP**

```bash
vi /etc/sysconfig/network-scripts/ifcfg-xxx
BOOTPROTO=static
ONBOOT="yes"
IPADDR=10.107.116.x
NETMASK=255.255.255.0
GATEWAY=10.107.116.254
DNS1=114.114.114.114
```

**2. Disable the firewall and SELinux**

```bash
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
# Change to
SELINUX=disabled
```

**3. Set the hostname**

```bash
hostnamectl set-hostname <hostname>
bash
```

**4. Configure the hosts file**

```bash
vi /etc/hosts
# Add
10.107.116.10 master
10.107.116.11 slave1
10.107.116.12 slave2
```

**5. Set up passwordless SSH**

```bash
# Generate a key pair
ssh-keygen
# Run on each of the three machines against the others for passwordless login
ssh-copy-id -i <host>
```

**6. Configure time synchronization**

```bash
yum install -y chrony
vi /etc/chrony.conf
```

master:

```
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server master iburst
allow 0.0.0.0/24
local stratum 10
```

slave1 and slave2:

```
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server master iburst
```

```bash
systemctl restart chronyd && systemctl enable chronyd
# List the time sources
chronyc sources -v
# Show source statistics
chronyc sourcestats -v
# Check synchronization status
chronyc tracking
```

**7. Install the JDK**

```bash
# File needed: jdk-8u191-linux-x64.tar.gz
tar -xzf jdk-8u191-linux-x64.tar.gz -C /
# Add environment variables
vi /etc/profile
# Append
export JAVA_HOME=/jdk1.8.0_191
export PATH=$PATH:$JAVA_HOME/bin
# Reload the environment
source /etc/profile
# Verify the installation
java -version
# On success:
java version "1.8.0_191"
Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)
```

### 2. Install Hadoop (all nodes)

**1. Extract Hadoop (hadoop-2.7.7.tar.gz)**

```bash
tar -xzf hadoop-2.7.7.tar.gz -C /
```

**2. Add environment variables**

```bash
vi /etc/profile
# Append at the end
export HADOOP_HOME=/hadoop-2.7.7
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Reload the environment
source /etc/profile
```

**3. Verify the Hadoop environment variables**

```bash
hadoop version
Hadoop 2.7.7
Subversion Unknown -r c1aad84bd27cd79c3d1a7dd58202a8c3ee1ed3ac
Compiled by stevel on 2018-07-18T22:47Z
Compiled with protoc 2.5.0
From source with checksum 792e15d20b12c74bd6f19a1fb886490
This command was run using /hadoop-2.7.7/share/hadoop/common/hadoop-common-2.7.7.jar
```

### 3. Hadoop configuration files

**0. File location**

```bash
cd /hadoop-2.7.7/etc/hadoop/
```

**1. core-site.xml**

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/root/hadoop/tmp</value>
  </property>
</configuration>
```

**2. hdfs-site.xml**

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```

**3. mapred-site.xml**

```bash
cp -r mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
```

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

**4. yarn-site.xml**

```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```

**5. slaves**

```bash
vim slaves
# Replace the contents with:
master
slave1
slave2
```

### 4. Initialize Hadoop

On master:

```bash
hadoop namenode -format
```

### 5. Start Hadoop

On master:

```bash
start-all.sh
```

If startup fails with a JAVA_HOME error, set an absolute path in hadoop-env.sh on every node, then start again.

The error:

```
0.0.0.0: Error: JAVA_HOME is not set and could not be found.
```

Change the path to an absolute one:

```bash
vim /hadoop-2.7.7/etc/hadoop/hadoop-env.sh
# Change
export JAVA_HOME=${JAVA_HOME}
# to
export JAVA_HOME=/jdk1.8.0_191
```

### 6. Verify the cluster started

```bash
jps
# Expected processes:
11858 NameNode
11495 ResourceManager
12167 SecondaryNameNode
12535 Jps
11993 DataNode
12411 NodeManager
```
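Beyond checking the `jps` output, a short HDFS read/write round trip confirms the daemons actually cooperate. A minimal sketch (the `/tmp/smoke` path is arbitrary, chosen here only for illustration):

```bash
# Write a small file into HDFS, list it, read it back, then clean up
hdfs dfs -mkdir -p /tmp/smoke
hdfs dfs -put /etc/hosts /tmp/smoke/hosts
hdfs dfs -ls /tmp/smoke
hdfs dfs -cat /tmp/smoke/hosts
hdfs dfs -rm -r /tmp/smoke
```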