Introduction to Hive


Apache Hive is often described as a data warehouse infrastructure built on top of Apache Hadoop. Originally developed at Facebook to query the roughly 20 TB of data arriving each day, it is now widely used for ad-hoc querying and analysis of large data sets stored in file systems such as HDFS (the Hadoop Distributed File System), without requiring knowledge of map-reduce internals. The best part of Hive is that queries are implicitly converted into chains of map-reduce jobs by the Hive engine.

Features of Hive:

  • Supports different storage types such as plain text, CSV, Apache HBase, and others
  • Data modeling, such as creation of databases, tables, etc.
  • Easy to code; uses a SQL-like query language called HiveQL
  • ETL functionality: extracting, transforming, and loading data into tables, coupled with joins, partitions, etc.
  • Contains built-in user-defined functions (UDFs) to manipulate dates, strings, and other data types
  • Unstructured data can be presented as if it were tabular, regardless of its underlying layout
  • Plug-in capabilities for custom mappers, reducers, and UDFs
  • Enhanced querying on Hadoop

Use Cases of Hive:

  • Text mining — Unstructured data with a convenient structure overlaid and analyzed with map-reduce
  • Document indexing — Assigning tags to multiple documents for easier retrieval
  • Business queries — Querying larger volumes of historic data to get actionable insights, e.g. transaction history, payment history, customer database, etc.
  • Log processing — Processing various types of log files like call logs, weblogs, machine logs, etc.

Coding in Hive

We will be using a table called “transaction” to look at how to query data in Hive. The transaction table contains attributes id, item, and sales.

DDL commands in Hive

DDL is short for Data Definition Language; it deals with database schemas and descriptions of how data should reside in the database. Some common examples are:

Create table

  • Creating a table — CREATE TABLE transaction(id INT, item STRING, sales FLOAT);
  • Storing a table in a particular location — CREATE TABLE transaction(id INT, item STRING, sales FLOAT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' STORED AS TEXTFILE LOCATION '<hdfs_path>';
  • Partitioning a table — CREATE TABLE transaction(item STRING, sales FLOAT) PARTITIONED BY (id INT); (note that a partition column cannot also appear in the regular column list)

Drop table
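The command for this section appears to be missing from the original; a minimal sketch:

```sql
-- Drop the table if it exists; for a managed (internal) table
-- this deletes the underlying data as well.
DROP TABLE IF EXISTS transaction;
```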

Alter table

  • ALTER TABLE transaction RENAME TO transaction_front_of_stores;
  • To add a column — ALTER TABLE transaction ADD COLUMNS (customer_name STRING);

Show Table
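The command for this section also appears to be missing; a likely sketch:

```sql
-- List the tables in the current database; LIKE filters by a pattern.
SHOW TABLES;
SHOW TABLES LIKE 'transaction*';
```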

Describe Table

  • DESCRIBE transaction;
  • DESCRIBE EXTENDED transaction;

DML Commands in Hive

DML is short for Data Manipulation Language; it deals with data manipulation and includes the most commonly used SQL statements, such as SELECT, INSERT, UPDATE, and DELETE. It is primarily used to store, modify, retrieve, delete, and update data in a database.

Loading Data

  • Loading data from an external file — LOAD DATA LOCAL INPATH '<file_path>' [OVERWRITE] INTO TABLE <table_name>;
  • LOAD DATA LOCAL INPATH '/documents/datasets/transaction.csv' [OVERWRITE] INTO TABLE transaction;
  • Writing a dataset from a separate table — INSERT OVERWRITE TABLE transaction SELECT id, item, sales FROM transaction_updated;
    Select Statement

    The select statement is used to fetch data from a database table. Primarily used for viewing records, selecting required field elements, getting distinct values and displaying results from any filter, limit or group by operation.

    To get all records from the transaction table:

    SELECT * FROM transaction;

    To get distinct transaction ids from the transaction table:

    SELECT DISTINCT id FROM transaction;

    Limit Statement

    Used along with the Select statement to limit the number of rows a coder wants to view. Any transaction database contains a large volume of data which means selecting every row will result in higher processing time.

    SELECT * FROM transaction LIMIT 10;

    Filter Statement

    SELECT * FROM transaction WHERE sales>100;

    Group by Statement

    Group by statements are used for summarizing data at different levels. Think of a scenario where we want to calculate total sales by items.

    SELECT item, SUM(sales) as sale FROM transaction GROUP BY item;

    What if we want to keep only the items whose total sales exceed 1,000?

    SELECT item, SUM(sales) as sale FROM transaction GROUP BY item HAVING sale>1000;

    Joins in Hive

    To combine and retrieve the records from multiple tables we use Hive Join. Currently, Hive supports inner, outer, left, and right joins for two or more tables. The syntax is similar to what we use in SQL. Before we look at the syntax let’s understand how different joins work.

    Different joins in Hive:

    SELECT A.* FROM transaction A {LEFT|RIGHT|FULL} JOIN transaction_date B ON (A.ID=B.ID);

    Notes:

    • Hive doesn’t support IN/EXISTS sub queries
    • Hive doesn’t support join conditions that doesn’t contain equality conditions
    • Multiple tables can be joined but organize tables such that the largest table appears last in the sequence
    • Hive converts joins over multiple tables into a single map/reduce job if for every table the same column is used in the join clauses
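The last two notes can be illustrated with a sketch of a three-way join; transaction_details is a hypothetical table added for illustration. Because every ON clause uses the same column (id), Hive can compile the whole query into a single map-reduce job:

```sql
-- All joins share the key column id, so Hive runs one map-reduce job.
-- The largest table (transaction) is placed last so it is streamed
-- through the reducers instead of being buffered in memory.
SELECT A.id, C.item, A.sales
FROM transaction_date B
JOIN transaction_details C ON (B.id = C.id)
JOIN transaction A ON (B.id = A.id);
```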

    Optimizing queries in Hive

    To optimize queries in Hive, here are a few rules of thumb you should know:

    1. Group by, aggregation functions, and joins take place in the reducer by default, whereas filter operations happen in the mapper
    2. Use the hive.map.aggr=true option to perform first-level aggregation directly in the map task
    3. Set the number of mappers/reducers depending on the type of task being performed. For filter conditions: set mapred.map.tasks=X; For aggregating operations: set mapred.reduce.tasks=Y;
    4. In joins, the last table in the sequence is streamed through the reducers, whereas the others are buffered. Organize the tables such that the largest table appears last in the sequence
    5. The STREAMTABLE and MAPJOIN hints can be used to speed up join tasks
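Rules 2, 3, and 5 can be combined in a single session; a sketch, assuming transaction_date is small enough to fit in memory (the task counts are illustrative):

```sql
-- Enable map-side partial aggregation (rule 2) and set the
-- reducer count (rule 3); the values here are illustrative.
SET hive.map.aggr=true;
SET mapred.reduce.tasks=8;

-- MAPJOIN hint (rule 5): the small table B is loaded into memory
-- and joined in the mappers, avoiding a shuffle for the join.
SELECT /*+ MAPJOIN(B) */ A.item, SUM(A.sales) AS sale
FROM transaction A
JOIN transaction_date B ON (A.id = B.id)
GROUP BY A.item;
```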
