Introduction

In this tutorial we will be analyzing geolocation and truck data. We will import this data into HDFS and build derived tables in Hive, then process the data using Pig and Hive. Finally, the processed data will be imported into Microsoft Excel, where it can be visualized.

Prerequisite:

  • Hortonworks Sandbox (downloaded and installed)

Goals of the Tutorial

The goal of this tutorial is for you to get familiar with the basics of the following:

  • Hadoop and HDP
  • Ambari File User Views and HDFS
  • Ambari Hive User Views and Apache Hive
  • Ambari Pig User Views and Apache Pig
  • Data Visualization with Excel

Concepts: Hadoop & HDP

In this section we will learn about Apache Hadoop and what makes it scale to large data sets. We will also talk about the various components of the Hadoop ecosystem that make Apache Hadoop enterprise-ready in the form of the Hortonworks Data Platform (HDP) distribution. The module discusses Apache Hadoop, its capabilities as a data platform, and how the core of Hadoop and its surrounding ecosystem of solution vendors provide the enterprise requirements to integrate alongside the data warehouse and other enterprise data systems as part of a modern data architecture, and as a step on the journey toward delivering an enterprise ‘Data Lake’.

Apache Hadoop:

Apache Hadoop® is an open source framework for distributed storage and processing of large sets of data on commodity hardware. Hadoop enables businesses to quickly gain insight from massive amounts of structured and unstructured data. Numerous Apache Software Foundation projects make up the services required by an enterprise to deploy, integrate and work with Hadoop.

The base Apache Hadoop framework is composed of the following modules:

  • Hadoop Common, the libraries and utilities needed by other Hadoop modules.
  • Hadoop Distributed File System (HDFS), a distributed file-system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster.
  • Hadoop YARN, a resource-management platform responsible for managing computing resources in clusters and using them for scheduling of users’ applications.
  • Hadoop MapReduce, a programming model for large scale data processing.

Each project has been developed to deliver an explicit function and each has its own community of developers and individual release cycles.

Hortonworks Data Platform (HDP)

Hortonworks Data Platform is a packaged Hadoop software distribution that aims to ease deployment and management of Hadoop clusters, compared with simply downloading the various Apache code bases and trying to run them together as a system. Architected, developed, and built completely in the open, Hortonworks Data Platform (HDP) provides an enterprise-ready data platform that enables organizations to adopt a Modern Data Architecture.

With YARN as its architectural center, HDP provides a data platform for multi-workload data processing across an array of processing methods – from batch through interactive to real-time – supported by the key capabilities required of an enterprise data platform, spanning governance, security and operations.

The Hortonworks Sandbox is a single node implementation of the Hortonworks Data Platform (HDP). It is packaged as a virtual machine to make evaluation and experimentation with HDP fast and easy. The tutorials and features in the Sandbox are oriented towards exploring how HDP can help you solve your business big data problems. The Sandbox tutorials will walk you through bringing some sample data into HDP and manipulating it using the tools built into HDP. The idea is to show you how you can get started and show you how to accomplish tasks in HDP. HDP is free to download and use in your enterprise and you can download it here: HDP on Sandbox

Lab 0: Set-up

Start the Sandbox VM and Open Ambari

Start the HDP Sandbox following the Sandbox Install Guide to start the VM:


Once you have installed the Sandbox VM, it resolves to a host address on your environment; the address varies depending on the virtualization software you are using (VMware, VirtualBox, etc.). As a general rule of thumb, wait for the installation to complete, and the confirmation screen will tell you the host your Sandbox resolves to. For example:

In the case of VirtualBox, the host would be 127.0.0.1.


If you are using a private cluster or a cloud to run the Sandbox, find the host your Sandbox resolves to.

Append the port number :8888 to your host address, open your browser, and access the Sandbox Welcome page at http://host:8888/.


Navigate to the Ambari welcome page using the URL given on the Sandbox welcome page.

Both the username and password to log in are admin.

NOTE
If you want to find the host address your Sandbox is running on, SSH into the Sandbox terminal after installation and follow these steps:

  1. Log in using the username "root" and password "hadoop".
  2. Type ifconfig and look for the inet address under eth.
  3. Take the inet address, append :8080 and open it in a browser. It will direct you to the Ambari login page.
  4. This inet address is assigned for each session and may therefore differ from session to session.

The following table has some useful URLs as well:

  • Sandbox welcome page – http://host:8888
  • Ambari Dashboard – http://host:8080
  • Ambari Welcome – http://host:8080/views/ADMIN_VIEW/2.1.0/INSTANCE/#/
  • Hive User View – http://host:8080/#/main/views/HIVE/1.0.0/Hive
  • Pig User View – http://host:8080/#/main/views/PIG/0.1.0/MyPig
  • File User View – http://host:8080/#/main/views/FILES/0.2.0/MyFiles
  • SSH web client – http://host:4200
  • Hadoop Configuration – http://host:50070/dfshealth.html and http://host:50070/explorer.html

Enter the Ambari Welcome URL and then you should see a similar screen:

There are 5 key capabilities to explore in the Ambari Welcome screen:


  1. "Operate Your Cluster" will take you to the Ambari Dashboard, which is the primary UI for Hadoop operators.
  2. "Manage Users + Groups" allows you to add & remove Ambari users and groups.
  3. "Clusters" allows you to grant permissions to Ambari users and groups.
  4. "Ambari User Views" lists the set of Ambari User Views that are part of the cluster.
  5. "Deploy Views" provides administration for adding and removing Ambari User Views.

Take a few minutes to quickly explore these 5 capabilities and become familiar with their features.

Enter the Ambari Dashboard URL and you should see a similar screen:


Briefly skim through the Ambari Dashboard links by clicking on

  1. Metrics, Heatmap and Configuration

and then the

  2. Dashboard, Services, Hosts, Alerts, Admin and User Views icons (the User Views icon is represented by a 3×3 matrix)

to become familiar with the Ambari resources available to you.

NOTE
To learn more about Hadoop please explore the HDP Getting Started documentation.
If you have questions, feedback or need help getting your environment ready, visit developer.hortonworks.com. Please also explore the HDP documentation. To ask a question, check out the Hortonworks Forums.

Lab 1: HDFS – Loading Sensor Data into HDFS

Introduction:

In this section you will download the sensor data and load it into HDFS using the Ambari Files User View. You will be introduced to the Ambari Files User View for managing files: you can create directories, navigate the file system, upload files to HDFS and perform a few other file-related tasks. Once you have the basics, you will create two directories and then load two files into HDFS using the Ambari Files User View.

Outline:

  • HDFS backdrop
  • Step 1.1: Download data – Geolocation.zip
  • Step 1.2: Load Data into HDFS
  • Suggested readings

HDFS backdrop:

A single physical machine becomes saturated as data grows beyond its storage capacity, which creates the need to partition data across separate machines. A file system that manages storage across a network of machines is called a distributed file system. HDFS is a core component of Apache Hadoop and is designed to store large files with streaming data access patterns, running on clusters of commodity hardware. With Hortonworks Data Platform (HDP) 2.2, HDFS was expanded to support heterogeneous storage media within the HDFS cluster.

Step 1.1: Download and Extract the Sensor Data Files

  • You can download the sample sensor data contained in a compressed (.zip) folder here: Geolocation.zip
  • Save the Geolocation.zip file to your computer, then extract the files. You should see a Geolocation folder that contains the following files:
    • geolocation.csv – This is the geolocation data collected from the trucks. It contains records showing truck location, date, time, type of event, speed, etc.
    • trucks.csv – This data was exported from a relational database and shows info on truck model, driverid, truckid, and aggregated mileage info.

Step 1.2: Load the Sensor Data into HDFS

  • Go to the Ambari Dashboard and open the HDFS Files User View by clicking on the User Views icon and selecting the HDFS Files menu item.


  • Starting from the top root of the HDFS file system, you will see all the files the logged-in user (admin in this case) has access to:


  • Click tmp. Then click the new directory button to create the /tmp/admin directory, and then create the /tmp/admin/data directory.


  • Now navigate to the /tmp/admin/data directory and upload the geolocation.csv and trucks.csv files into it.


You can also perform the following operations on a file by right-clicking on it: Download, Move, Permissions, Rename and Delete.


Data manipulation with Hive

Introduction

In this section of the tutorial you will be introduced to Apache Hive. In the earlier section we covered how to load data into HDFS, so you now have the geolocation and trucks files stored in HDFS as CSV files. In order to use this data in Hive, we will show you how to create a table and how to move data into the Hive warehouse, from where it can be queried. We will analyze this data using SQL queries in the Hive User View and store it as ORC. We will also walk through Apache Tez and how a DAG is created when you specify Tez as the execution engine for Hive. Let's start!

Outline

  • Hive basics
  • Step 2.1: Use Ambari Hive User Views
  • Step 2.2: Define a Hive Table
  • Step 2.3: Load Data into Hive Table
  • Step 2.4: Define an ORC table in Hive
  • Step 2.5: Review Hive Settings
  • Step 2.6: Analyze Truck Data
  • Suggested readings

Hive

Apache Hive provides a SQL-like query language that enables analysts familiar with SQL to run queries on large volumes of data. Hive has three main functions: data summarization, query and analysis. Hive provides tools that enable easy data extraction, transformation and loading (ETL).

Step 2.1: Become Familiar with Ambari Hive User View

Apache Hive presents a relational view of data in HDFS and ensures that users need not worry about where or in what format their data is stored. Hive can display data from RCFile format, text files, ORC, JSON, Parquet, sequence files and many other formats in a tabular view. Through the use of SQL you can view your data as a table and create queries as you would in an RDBMS.

To make it easy to interact with Hive we use a tool in the Hortonworks Sandbox called the Ambari Hive User View, which provides an interactive interface to Hive. We can create, edit, save and run queries, and have Hive evaluate them for us using a series of MapReduce or Tez jobs.

Let's now open the Ambari Hive User View and get introduced to the environment: go to the Ambari User Views icon and select Hive. Now let's take a closer look at the SQL editing capabilities in the User View:

  1. There are four tabs to interact with SQL:
     • Query: This is the interface shown above and the primary interface to write, edit and execute new SQL statements.
     • Saved Queries: You can save your favorite queries and quickly access them to rerun or edit.
     • History: This allows you to look at past queries or currently running queries to view, edit and rerun. It also allows you to see all SQL queries you have authority to view. For example, if you are an operator and an analyst needs help with a query, the Hadoop operator can use the History feature to see the query that was sent from the reporting tool.
     • UDFs: Allows you to define UDF interfaces and associated classes so you can access them from the SQL editor.
  2. Database Explorer: The Database Explorer helps you navigate your database objects. You can either search for a database object in the Search tables dialog box, or you can navigate through Database -> Table -> Columns in the navigation pane.
  3. The principal pane to write and edit SQL statements. This editor includes content assist via CTRL + Space to help you build queries. Content assist helps you with SQL syntax and table objects.

NOTE
The command to autocomplete queries is CTRL-Space on all systems including Mac OS X.

  4. Once you have created your SQL statement you have three options:
     • Execute: This runs the SQL statement.
     • Explain: This provides you a visual plan, from the Hive optimizer, of how the SQL statement will be executed.
     • Save as: Allows you to persist your queries into your list of saved queries.
  5. When the query is executed you can see the Logs or the actual query results.
     • Logs: When the query is executed you can see the logs associated with the query execution. If your query fails, this is a good place to get additional information for troubleshooting.
     • Results: You can view results in sets of 50 by default.
  6. There are five sliding views on the right-hand side with the following capabilities, which are in context of the tab you are in:
     • Query: This is the default operation, which allows you to write and edit SQL.
     • Settings: This allows you to set properties globally or associated with an individual query.
     • Visual Explain: This will generate an explain for the query. It will also show the progress of the query.
     • TEZ: If you use Tez as the query execution engine, you can view the DAG associated with the query. This integrates the Tez User View so you can check for correctness and helps with performance tuning by visualizing the Tez jobs associated with a SQL query.
     • Notifications: This is how to get feedback on query execution.
    Take a few minutes to explore the various Hive User View features.

Step 2.2: Define a Hive Table

Now that you are familiar with the Hive User View, let's create the initial staging tables for the geolocation and trucks data. In this section we will learn how to use the Ambari Hive User View to create four tables: geolocation_stage, trucks_stage, geolocation and trucks. First we are going to create two tables to stage the data in their original CSV text format, and then we will create two more tables where we will optimize the storage with ORC.

  1. Copy-and-paste the following table DDL into the empty Worksheet of the Query Editor to define a new table named geolocation_stage:
-- Create table geolocation_stage for staging the initial load
CREATE TABLE geolocation_stage (truckid string, driverid string, event string, latitude double, longitude double, city string, state string, velocity bigint, event_ind bigint, idling_ind bigint)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
  2. Click the green Execute button to run the command. If successful, you should see the Succeeded status in the Query Process Results section.
  3. Create a new Worksheet by clicking the blue New Worksheet button.
  4. Notice the tab of your new Worksheet is labeled Worksheet (1). Double-click on this tab to rename the label to trucks_stage.
  5. Copy-and-paste the following table DDL into your trucks_stage worksheet to define a new table named trucks_stage:
-- Create table trucks_stage for staging the initial load
CREATE TABLE trucks_stage(driverid string, truckid string, model string, jun13_miles bigint, jun13_gas bigint, may13_miles bigint, may13_gas bigint, apr13_miles bigint, apr13_gas bigint, mar13_miles bigint, mar13_gas bigint, feb13_miles bigint, feb13_gas bigint, jan13_miles bigint, jan13_gas bigint, dec12_miles bigint, dec12_gas bigint, nov12_miles bigint, nov12_gas bigint, oct12_miles bigint, oct12_gas bigint, sep12_miles bigint, sep12_gas bigint, aug12_miles bigint, aug12_gas bigint, jul12_miles bigint, jul12_gas bigint, jun12_miles bigint, jun12_gas bigint,may12_miles bigint, may12_gas bigint, apr12_miles bigint, apr12_gas bigint, mar12_miles bigint, mar12_gas bigint, feb12_miles bigint, feb12_gas bigint, jan12_miles bigint, jan12_gas bigint, dec11_miles bigint, dec11_gas bigint, nov11_miles bigint, nov11_gas bigint, oct11_miles bigint, oct11_gas bigint, sep11_miles bigint, sep11_gas bigint, aug11_miles bigint, aug11_gas bigint, jul11_miles bigint, jul11_gas bigint, jun11_miles bigint, jun11_gas bigint, may11_miles bigint, may11_gas bigint, apr11_miles bigint, apr11_gas bigint, mar11_miles bigint, mar11_gas bigint, feb11_miles bigint, feb11_gas bigint, jan11_miles bigint, jan11_gas bigint, dec10_miles bigint, dec10_gas bigint, nov10_miles bigint, nov10_gas bigint, oct10_miles bigint, oct10_gas bigint, sep10_miles bigint, sep10_gas bigint, aug10_miles bigint, aug10_gas bigint, jul10_miles bigint, jul10_gas bigint, jun10_miles bigint, jun10_gas bigint, may10_miles bigint, may10_gas bigint, apr10_miles bigint, apr10_gas bigint, mar10_miles bigint, mar10_gas bigint, feb10_miles bigint, feb10_gas bigint, jan10_miles bigint, jan10_gas bigint, dec09_miles bigint, dec09_gas bigint, nov09_miles bigint, nov09_gas bigint, oct09_miles bigint, oct09_gas bigint, sep09_miles bigint, sep09_gas bigint, aug09_miles bigint, aug09_gas bigint, jul09_miles bigint, jul09_gas bigint, jun09_miles bigint, jun09_gas bigint, may09_miles bigint, may09_gas bigint, apr09_miles bigint, apr09_gas bigint, mar09_miles bigint, mar09_gas bigint, feb09_miles bigint, feb09_gas bigint, jan09_miles bigint, jan09_gas bigint)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE; 
  6. Execute the query and make sure it runs successfully. Let's review some aspects of the CREATE TABLE statements issued above. If you have a SQL background, these statements should seem very familiar except for the last three lines after the column definitions:
     • The ROW FORMAT clause specifies that each row is terminated by the newline character.
     • The FIELDS TERMINATED BY clause specifies that the fields associated with the table (in our case, the two CSV files) are delimited by a comma.
     • The STORED AS clause specifies that the table will be stored in the TEXTFILE format.
    For details on these clauses, consult the Apache Hive Language Manual.

  7. To verify the tables were defined successfully, click the refresh icon in the Database Explorer. Under Databases, click the default database to expand the list of tables; the new tables should appear.

  8. Click on the trucks_stage table name to view its schema.

  9. Click on the Load sample data icon to generate and execute a SELECT SQL statement that queries the table for 100 rows. Notice your two new tables are currently empty.

NOTE
You can have multiple SQL statements within each editor worksheet, but each statement needs to be separated by a semicolon ;. If you have multiple statements within a worksheet but only want to run one of them, just highlight the statement you want to run and then click the Execute button.

A few additional commands to explore tables:

  • show tables; – Lists the tables created in the database by looking up the list of tables from the metadata stored in HCatalog.
  • describe _table_name_; – Provides a list of columns for a particular table (e.g. describe geolocation_stage;).
  • show create table _table_name_; – Provides the DDL to recreate a table (e.g. show create table geolocation_stage;); see the example after this list.
  • By default, when you create a table in Hive, a directory with the same name gets created in the /apps/hive/warehouse folder in HDFS. Using the Ambari Files User View, navigate to the /apps/hive/warehouse folder. You should see both a geolocation_stage and a trucks_stage directory.
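For example, a quick sanity check on the staging tables defined above could look like the following. Run these from a Hive worksheet (only commands already introduced in this tutorial are used):

show tables;
describe geolocation_stage;
show create table geolocation_stage;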

NOTE
The definition of a Hive table and its associated metadata (i.e., the directory the data is stored in, the file format, what Hive properties are set, etc.) are stored in the Hive metastore, which on the Sandbox is a MySQL database.

Step 2.3: Load Data into a Hive table

  1. Let’s load some data into your two Hive tables. In this tutorial we are going to show you two different ways of populating a Hive table with data from our CSV files. One way involves moving the data file into the correct Hive directory, while the other involves executing a simple Hive query to load the data.
    The first way to populate a table is to put a file into the directory associated with the table. Using the Ambari Files User View, click on the Move icon next to the file /tmp/admin/data/geolocation.csv. (Clicking on Move is similar to cut in cut-and-paste.)
  2. After clicking on the Move arrow, notice two things have changed:
     • The file name geolocation.csv is grayed out.
     • The icons associated with the operations on the files are removed. This indicates that the file is in a special state and is ready to be moved.
  3. Now navigate to the destination path /apps/hive/warehouse/geolocation_stage. You might notice that as you navigate through the directories the file is pinned at the top. Once you get to the appropriate directory, click on the Paste icon to move the file.
  4. Go back to the Ambari Hive View and click on the Load sample data icon next to the geolocation_stage table. Notice the table is no longer empty, and you should see the first 100 rows of the table.
  5. Now we’re going to show you the second way to load the data, using a simple Hive query. Enter the following SQL command into an empty Worksheet in the Ambari Hive User View:
LOAD DATA INPATH '/tmp/admin/data/trucks.csv' OVERWRITE INTO TABLE trucks_stage;

This query tells Hive to take the data at the path /tmp/admin/data/trucks.csv and move it into the trucks_stage table, which already has all of the columns defined.
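For illustration only (you do not need to run this, since geolocation.csv was already moved into the warehouse directory above), the same query-based approach applied to the geolocation data would look like this:

LOAD DATA INPATH '/tmp/admin/data/geolocation.csv' OVERWRITE INTO TABLE geolocation_stage;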

  6. You should now see data in the trucks_stage table.
  7. From the Files view, navigate to the /tmp/admin/data folder. Notice the folder is empty! The LOAD DATA INPATH command moved the trucks.csv file from the /tmp/admin/data folder to the /apps/hive/warehouse/trucks_stage folder.
  8. Lastly, we need to remove the header rows that were loaded into each table. To do this we just need a single command per table:
ALTER TABLE trucks_stage SET TBLPROPERTIES ("skip.header.line.count"="1");
ALTER TABLE geolocation_stage SET TBLPROPERTIES ("skip.header.line.count"="1");
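To confirm the change, a simple query such as the following (illustrative only, using commands from this tutorial) can be run against either table:

SELECT * FROM trucks_stage LIMIT 5;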

Now when querying these two tables, the header lines should no longer appear in the results!

Step 2.4: Define an ORC Table in Hive

Introducing Apache ORC

The Optimized Row Columnar (Apache ORC) file format provides a highly efficient way to store Hive data. It was designed to overcome limitations of the other Hive file formats. Using ORC files improves performance when Hive is reading, writing, and processing data. To use the ORC format, specify ORC as the file format when creating the table (CREATE TABLE ... STORED AS ORC). In this step, you will create two ORC tables (geolocation and trucks) that are created from the text data in your geolocation_stage and trucks_stage tables.

  1. From the Ambari Hive User View, execute the following table DDL to define two new tables named geolocation and trucks:
-- Create table geolocation as ORC from geolocation_stage table
CREATE TABLE geolocation STORED AS ORC AS SELECT * FROM geolocation_stage;

-- Create table trucks as ORC from trucks_stage table
CREATE TABLE trucks STORED AS ORC AS SELECT * FROM trucks_stage;
  2. Refresh the Database Explorer and verify you have tables named geolocation and trucks in the default database.
  3. View the contents of the geolocation table. Notice it contains the same rows as geolocation_stage.
  4. To verify geolocation is an ORC table, execute the following query:
describe formatted geolocation;
  5. Scroll down to the bottom of the Results tab and you will see a section labeled Storage Information, which shows the input and output formats used by the table.

NOTE
If you want to try running some of these commands from the Hive shell, follow these steps from your terminal (e.g. PuTTY):

  1. ssh root@127.0.0.1 -p 2222 (the root password is hadoop)
  2. su hive
  3. hive – starts the Hive shell, where you can enter commands and SQL
  4. quit; – exits the Hive shell

Step 2.5: Review Hive Settings

  1. Open the Ambari Dashboard in another tab by right-clicking on the Ambari icon.
  2. Go to the Hive page, select the Configs tab and then click on the Settings tab. On the Hive page you will work with:
     • the Hive page
     • the Hive Configs tab
     • the Hive Settings tab
     • the Version History of the configuration
  3. Scroll down to the Optimization Settings, where you can see that:
     • Tez is set as the optimization engine
     • the Cost Based Optimizer (CBO) is turned on
    This shows the new HDP 2.3 Ambari Smart Configurations, which simplify setting configurations.

NOTE

New in HDP 2.3

Hadoop is configured by a collection of XML files. In early versions of Hadoop, operators needed to edit XML files to change settings, and there was no default versioning. Early Ambari interfaces made it easier to change values by showing the settings page with dialog boxes for the various settings and allowing you to edit them. However, you needed to know what had to go into the field and understand the range of valid values. Now, with Smart Configurations, you can toggle binary features and use slider bars for settings that have ranges.
By default the key configurations are displayed on the first page. If the setting you are looking for is not on this page, you can find additional settings in the Advanced tab. For example, what if we wanted to improve SQL performance by using the new Hive vectorization features? Where would we find the setting and how would we turn it on? You would need to do the following steps:

  1. Click on the Advanced tab and scroll to find the property, or
  2. Start typing the property name into the property search field to filter the settings.
    In this case, the hive.vectorized.execution.enabled property is already turned on.
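For a single Hive session, the same property could also be set directly from a worksheet or the Hive shell. This is a minimal sketch, illustrative only, since the Ambari setting shown above already enables it cluster-wide:

set hive.vectorized.execution.enabled = true;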

Step 2.6: Analyze the Trucks Data

Next we will be using Hive, Pig and Excel to analyze derived data from the geolocation and trucks tables. The business objective is to better understand the risk the company is under from driver fatigue, over-used trucks, and the impact of various trucking events on risk. In order to accomplish this, we are going to apply a series of transformations to the source data, mostly through SQL, and use Pig to calculate risk. In Step 10 we will be using Microsoft Excel to generate a series of charts to better understand risk.

Let’s get started with the first transformation. We want to calculate the miles per gallon for each truck. We will start with our trucks data table and sum up all the miles and gas columns on a per-truck basis. Hive has a series of functions that can be used to reformat a table. The LATERAL VIEW keyword is how we apply a table-generating function to each row. The stack function allows us to restructure the data into three columns labeled rdate, miles and gas, producing 54 rows (one per month) for each truck. We pick truckid, driverid, rdate, miles and gas from our original table, add a calculated column for mpg (miles/gas), and later we will calculate average mileage.
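To see what stack does before running the full statement, here is a minimal sketch, illustrative only, that uses just two of the 54 month columns to unpivot a wide trucks row into one row per month (the alias monthdata is an arbitrary name):

SELECT truckid, driverid, rdate, miles, gas, miles / gas mpg
FROM trucks
LATERAL VIEW stack(2,
  'jun13', jun13_miles, jun13_gas,
  'may13', may13_miles, may13_gas
) monthdata AS rdate, miles, gas;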

  1. Using the Ambari Hive User View, execute the following query:
-- Create table truck_mileage from existing trucking data
CREATE TABLE truck_mileage 
STORED AS ORC AS 
SELECT truckid, driverid, rdate, miles, gas, miles / gas mpg FROM trucks 
LATERAL VIEW stack( 54, 'jun13',jun13_miles,jun13_gas,'may13',may13_miles,may13_gas,'apr13',apr13_miles,apr13_gas,'mar13',mar13_miles,mar13_gas,'feb13',feb13_miles,feb13_gas,'jan13',jan13_miles,jan13_gas,'dec12',dec12_miles,dec12_gas,'nov12',nov12_miles,nov12_gas,'oct12',oct12_miles,oct12_gas,'sep12',sep12_miles,sep12_gas,'aug12',aug12_miles,aug12_gas,'jul12',jul12_miles,jul12_gas,'jun12',jun12_miles,jun12_gas,'may12',may12_miles,may12_gas,'apr12',apr12_miles,apr12_gas,'mar12',mar12_miles,mar12_gas,'feb12',feb12_miles,feb12_gas,'jan12',jan12_miles,jan12_gas,'dec11',dec11_miles,dec11_gas,'nov11',nov11_miles,nov11_gas,'oct11',oct11_miles,oct11_gas,'sep11',sep11_miles,sep11_gas,'aug11',aug11_miles,aug11_gas,'jul11',jul11_miles,jul11_gas,'jun11',jun11_miles,jun11_gas,'may11',may11_miles,may11_gas,'apr11',apr11_miles,apr11_gas,'mar11',mar11_miles,mar11_gas,'feb11',feb11_miles,feb11_gas,'jan11',jan11_miles,jan11_gas,'dec10',dec10_miles,dec10_gas,'nov10',nov10_miles,nov10_gas,'oct10',oct10_miles,oct10_gas,'sep10',sep10_miles,sep10_gas,'aug10',aug10_miles,aug10_gas,'jul10',jul10_miles,jul10_gas,'jun10',jun10_miles,jun10_gas,'may10',may10_miles,may10_gas,'apr10',apr10_miles,apr10_gas,'mar10',mar10_miles,mar10_gas,'feb10',feb10_miles,feb10_gas,'jan10',jan10_miles,jan10_gas,'dec09',dec09_miles,dec09_gas,'nov09',nov09_miles,nov09_gas,'oct09',oct09_miles,oct09_gas,'sep09',sep09_miles,sep09_gas,'aug09',aug09_miles,aug09_gas,'jul09',jul09_miles,jul09_gas,'jun09',jun09_miles,jun09_gas,'may09',may09_miles,may09_gas,'apr09',apr09_miles,apr09_gas,'mar09',mar09_miles,mar09_gas,'feb09',feb09_miles,feb09_gas,'jan09',jan09_miles,jan09_gas )dummyalias AS rdate, miles, gas;


  2. To view the data generated by the script, click the Load sample data icon in the Database Explorer next to truck_mileage. After clicking the next button once, you should see a table that lists each trip made by a truck and driver.
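As a follow-on, the average mileage mentioned above could be computed from truck_mileage. A minimal sketch, where the table name avg_mileage is an assumed name used only for illustration:

-- Illustrative only: average mpg per truck, derived from truck_mileage
CREATE TABLE avg_mileage
STORED AS ORC AS
SELECT truckid, avg(mpg) avgmpg
FROM truck_mileage
GROUP BY truckid;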