【Basics】
Connect to AWS
In the AWS console: Actions -> Start the instance, then Connect.
Open a terminal, cd to the directory that holds the key-pair .pem file, and run: ssh -i "xxxxx.pem" ubuntu@xxxxxxxx.compute-1.amazonaws.com
This logs you into the server's Linux system.
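If ssh rejects the key with an "unprotected private key file" warning, tighten the key file's permissions first (a standard ssh requirement; the .pem name below is the same placeholder as above):
chmod 400 xxxxx.pem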
vi file.txt  Edit a text file with vi
Start Hadoop
In the Linux root directory, run: sh runstart.sh
Once Hadoop is running, you can use the Hadoop commands below.
hadoop fs -ls /  List the files under the HDFS root directory
hadoop fs -cat /user/myfilm/part-m-00000 | head -5  Show the first five lines of a file
hadoop fs -cat  Show a file's contents
hadoop fs -get file1 file2  Copy file1 from HDFS to file2 on the Linux filesystem
hadoop fs -put product.txt /userdata  Copy product.txt from Linux into the HDFS directory /userdata
hadoop fs -rm -r  Delete a directory, including all of its subdirectories and files (plain -rm only deletes a single file)
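A minimal session combining the commands above (the file and directory names are just for illustration):
hadoop fs -mkdir /userdata                          # create an HDFS directory
hadoop fs -put product.txt /userdata                # copy a local file into it
hadoop fs -ls /userdata                             # confirm it arrived
hadoop fs -cat /userdata/product.txt | head -5      # peek at the first lines
hadoop fs -get /userdata/product.txt product2.txt   # copy it back to Linux
hadoop fs -rm -r /userdata                          # remove the directory and its contents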
Enter MySQL
From any Linux directory: mysql -u ubuntu -p, then enter the password
This opens the MySQL shell.
List databases: show databases;
Switch to a database: use (database);
List the tables in the current database: show tables;
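For example, a quick look around the sakila database used in the Sqoop section below:
show databases;
use sakila;
show tables;
select count(*) from actor;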
【Sqoop】
What Sqoop does: moves data between MySQL and HDFS in both directions (import and export)
Import from MySQL into HDFS
More parameters: https://blog.csdn.net/w1992wishes/article/details/92027765
Import the actor table from the MySQL sakila database into HDFS under the parent directory /userdata:
sqoop import \
  --connect jdbc:mysql://localhost/sakila \
  --username ubuntu --password training \
  --warehouse-dir /userdata \
  --table actor
Import the film table from the MySQL sakila database into the HDFS directory /user/myfilms:
sqoop import \
  --connect jdbc:mysql://localhost/sakila \
  --username ubuntu --password training \
  --target-dir /user/myfilms \
  --table film
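The difference between the two imports above: --warehouse-dir names a parent directory, so Sqoop writes the data into a subdirectory named after the table (/userdata/actor), while --target-dir is the exact output directory (/user/myfilms). You can confirm with:
hadoop fs -ls /userdata/actor
hadoop fs -ls /user/myfilms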
Import only the two columns city_id and city of the city table from the MySQL sakila database into HDFS under the parent directory /userdata:
sqoop import \
  --connect jdbc:mysql://localhost/sakila \
  --username ubuntu --password training \
  --warehouse-dir /userdata \
  --table city \
  --columns 'city_id, city'
Import the rows of the rental table that satisfy 'inventory_id <= 10' from the MySQL sakila database into HDFS under the parent directory /userdata:
sqoop import \
  --connect jdbc:mysql://localhost/sakila \
  --username ubuntu --password training \
  --warehouse-dir /userdata \
  --table rental \
  --where 'inventory_id <= 10'
Incremental import of the rental table: append only new rows, using rental_id as the check column:
sqoop import \
  --connect jdbc:mysql://localhost/sakila \
  --username ubuntu --password training \
  --warehouse-dir /userdata \
  --table rental \
  --where 'inventory_id > 10 and inventory_id < 20' \
  --incremental append \
  --check-column rental_id
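To check what the incremental append added, list the output directory and peek at the part files (Sqoop names them part-m-00000, part-m-00001, ...):
hadoop fs -ls /userdata/rental
hadoop fs -cat /userdata/rental/part-m-* | head -5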
Export from HDFS into MySQL
First create an empty target table in MySQL (LIMIT 0 copies only the schema), then run sqoop export:

mysql> CREATE TABLE new_rental SELECT * FROM rental LIMIT 0;

$ sqoop export \
  --connect jdbc:mysql://localhost/sakila \
  --username ubuntu --password training \
  --export-dir /userdata/rental \
  --table new_rental
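To confirm the export landed, count the rows in MySQL:
mysql> SELECT COUNT(*) FROM new_rental;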
【Pig】
What Pig does: processes data stored in HDFS
Use Pig interactively
Type pig at the Linux prompt; this opens the "grunt>" prompt – the Pig shell
Example: film.pig
Line #1: Load (read) data from HDFS /user/myfilm into the 'data' variable
data = LOAD '/user/myfilm' USING PigStorage(',')
as (film_id:int, title:chararray, rental_rate:float);
Line #4: Filter data by rental_rate greater than or equal to $3.99
data = FILTER data BY rental_rate >= 3.99;
Line #6: Return the data to the screen (dump)
DUMP data;
Line #8: Also, store the data into a new HDFS folder called “top_films”
STORE data INTO '/user/top_films' USING PigStorage('|');
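To run the script non-interactively instead of typing it at the grunt prompt, pass the file to pig, then inspect the stored result (the output path comes from the STORE line above):
pig film.pig
hadoop fs -cat /user/top_films/part-* | head -5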
Example: realestate.pig
Load the "realestate.txt" data into the "listings" relation (notice the file path):
listings = LOAD '/mydata/class2/realestate.txt' USING PigStorage(',')
as
(listing_id:int, date_listed:chararray, list_price:float,
sq_feet:int, address:chararray);
Convert date (string) to datetime format:
listings = FOREACH listings GENERATE listing_id, ToDate(date_listed, 'YYYY-MM-dd') AS date_listed, list_price, sq_feet, address;
--DUMP listings;
Filter data:
bighomes = FILTER listings BY sq_feet >= 2000;
Select columns (same as before):
bighomes_dateprice = FOREACH bighomes GENERATE
listing_id, date_listed, list_price;
DUMP bighomes_dateprice;
Store data in HDFS:
STORE bighomes_dateprice INTO '/mydata/class2/homedata';
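As in the previous example, STORE writes the result to HDFS; with no delimiter given, PigStorage defaults to tab-separated fields. A quick check:
hadoop fs -ls /mydata/class2/homedata
hadoop fs -cat /mydata/class2/homedata/part-* | head -5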