A “Hello World” Example using OLH
This post will discuss the basic mechanics for loading an Oracle table using Oracle Loader for Hadoop (OLH). For this “Hello World” discussion, we will use JDBC to drive the example, loading delimited text living in HDFS files into a simple un-partitioned table called “FIVDTI” living in an Oracle schema. It will illustrate the bare bones structure of an OLH job, and touch upon the minimal configuration properties you need to set to get something working end to end. This tutorial assumes that you know how to run basic MapReduce jobs and know how to connect to Oracle with SQL*Plus to create and drop tables using Oracle SQL.
Restating what was explained in the introduction to this tutorial, OLH uses a MapReduce job to read data living in HDFS and to load it into a target table living in Oracle. If the data you are loading is in a typical form (e.g. delimited text or CSV files), you should be able to load a table interactively with a single command.
Requirements for Running an OLH Job
Since OLH runs MapReduce jobs, the OLH command will need to run either on some system in a Hadoop cluster, or on a system that has client access to the cluster. That system will also need JDBC access to an Oracle database that has a schema with the table that you want to load. Finally, the system will need access to an OLH installation and its jar files (i.e. OLH_HOME in our running example below).
If you want to use a Hadoop client, setting it up on a development node is not a lot of hard work. You simply need to download and install the Hadoop software that is running on the cluster, using the configuration files specific to the cluster.
You want to sanity test everything to make sure the plumbing works. If you can kick off the Hadoop WordCount MapReduce job and can read/write/delete HDFS files interactively using Hadoop, your Hadoop plumbing should be ready to accept an OLH job. (See http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html for details on sanity checking MapReduce.)
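For example, a quick end-to-end check might look something like this (a sketch: the location of the Hadoop examples jar and the HDFS scratch directories are assumptions that depend on your installation):
# copy a small local file into HDFS, run the stock WordCount example, and read the results back
hadoop fs -mkdir /user/oracle/wc_in
hadoop fs -put /etc/hosts /user/oracle/wc_in
hadoop jar $HADOOP_HOME/hadoop-examples.jar wordcount /user/oracle/wc_in /user/oracle/wc_out
hadoop fs -cat /user/oracle/wc_out/part-r-00000
# clean up the scratch directories when done
hadoop fs -rmr /user/oracle/wc_in /user/oracle/wc_out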
At this point it would be good to carve out a subdirectory in HDFS that you own, that will hold the result directories of OLH jobs that you want to run. (In our working example in this post the result log directory will live in “/user/oracle/olh_test/results/fivdti”.) You use hadoop to do this:
hadoop fs -mkdir /user/oracle/olh_test/results/fivdti
You want to make sure that you are either the owner of this HDFS directory or at least have read and write access to it.
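A quick listing will confirm the directory’s owner and permissions:
# check who owns the results directory and what its permissions are
hadoop fs -ls /user/oracle/olh_test/results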
Connecting to an Oracle database requires you to connect using an Oracle connection URL with the name of the Oracle user and the Oracle password. In this example, we are assuming the Oracle schema and the Oracle user are the same. (See http://radiofreetooting.blogspot.com/2007/02/user-schema.html for an explanation of the difference between the two concepts.)
Before kicking off an OLH job it is worth taking the time to ensure the Oracle connection credentials are correct. Remember that OLH will load an Oracle table across many Hadoop map or reduce tasks, each of which will try to make a JDBC connection to the database, so you want to eliminate trial-and-error type authentication hiccups before passing the credentials on to OLH. (See http://www.dba-oracle.com/t_oracle_jdbc_connection_testing.htm for details.)
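One quick check is to connect with SQL*Plus using the same host, port, service name, and credentials you plan to put in the OLH connection configuration (the values shown here are the example ones used later in this post):
# a successful login means the host, port, service name, user, and password are all correct
sqlplus olhp/welcome1@//myoraclehost:1511/dbm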
While configuring and testing the OLH framework you will also want some minimal administration of Hadoop and Oracle. Specifically, you will want to browse the Hadoop URL that contains JobTracker information, where you can monitor a load job. You will also need to connect to Oracle via SQL*Plus to do administration with your schema and to manually inspect the target table using SQL. For purposes of OLH, installing a formal Oracle client on your development system, though convenient, is overkill. You simply need to log onto a system where SQL*Plus is available (e.g. the system where Oracle is running) and where you can connect to the Oracle database.
The Structure of an OLH Command
Let’s start by looking at an OLH command that you would invoke to kick off an OLH MapReduce job. Again this will use JDBC to connect to the Oracle database and load a table with delimited text living in files in HDFS.
$HADOOP_HOME/bin/hadoop jar \
  $OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader \
  -D oracle.hadoop.loader.jobName=OLHP_fivdti_dtext_jdbc_0_722 \
  -D oracle.hadoop.loader.loaderMapFile=file:/tmp/loaderMap_fivdti.xml \
  -D mapred.reduce.tasks=0 \
  -D mapred.input.dir=/user/olh_performance/fivdti/56000000_90 \
  -D mapred.output.dir=/user/oracle/olh_test/results/fivdti/722 \
  -conf /tmp/oracle_connection.xml \
  -conf /tmp/dtextInput.xml \
  -conf /tmp/jdbcOutput.xml
The command starts with an invocation of hadoop (living in the Hadoop client) passing the “jar” command.
$HADOOP_HOME/bin/hadoop jar
This is followed by a reference to OLH’s jar file (“oraloader.jar”) and the OLH loader class (fully qualified by its classpath) that defines an OLH load job.
$OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader
The following two properties are consumed by OLH. (Note that the space between a -D and the property name is not a typo. It is Hadoop’s convention for setting properties that are intended for Hadoop rather than for the JVM running underneath it.)
The first is to give the job a meaningful name when it executes in Hadoop. When creating a name, I find it useful to capture the name of the table being loaded, the type of input, the load method, the number of reducers used, and a unique job number for the OLH run (i.e. 722).
-D oracle.hadoop.loader.jobName=OLHP_fivdti_dtext_jdbc_0_722
MapReduce jobs are long-running batch jobs. While debugging and tuning OLH load operations you will want to spend a lot of time looking at Hadoop’s JobTracker URL to find your job by name and see how it behaves.
The second property points to the loader map. The loader map is an XML file that identifies the Oracle table to be loaded and maps fields in a delimited text file to columns in the target table.
-D oracle.hadoop.loader.loaderMapFile=file:/tmp/loaderMap_fivdti.xml
The next three properties are consumed by Hadoop.
The first designates the number of reduce tasks to run. The reduce stage in OLH is used for improving load performance by sorting records by table partitions and optionally sorting by table columns. If the table being loaded is not partitioned and there is no value in sorting by columns, then you will want to set this value to zero. This means that the MapReduce job only runs the map stage, which simply loads the Oracle table. (Note that a zero value can be used even if the table is partitioned, but in that case you probably will want to use the OCI Direct Load method, which will be discussed in the next post.)
-D mapred.reduce.tasks=0
The second property designates the HDFS directory that holds input files of delimited text that are to serve as the payload for the target table.
-D mapred.input.dir=/user/olh_performance/fivdti/56000000_90
Finally, you need to specify an HDFS directory where OLH can write logging output specific to the job.
In this case we carved out a directory under /user/oracle/olh_test/results, and assigned a unique integer for the job (i.e. 722). An important note: the subdirectory “722” is created by Hadoop once the job is submitted. If it exists before the job is submitted, the job will fail.
-D mapred.output.dir=/user/oracle/olh_test/results/fivdti/722
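If you rerun a job and want to keep the same result directory name, remove the old directory first (shown here with the Hadoop 1.x shell syntax used elsewhere in this post):
# the job-specific output directory must not exist when the job is submitted
hadoop fs -rmr /user/oracle/olh_test/results/fivdti/722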
The final arguments are files that complete the configuration. For purposes of modularity they are split across three configuration files that can be reused for different types of jobs.
The first file has the credentials needed to connect to the Oracle database.
-conf /tmp/oracle_connection.xml
The second file designates the form of input (“delimited text”).
-conf /tmp/dtextInput.xml
The last file designates the output (i.e. online loading of an Oracle table via JDBC).
-conf /tmp/jdbcOutput.xml
Again, all of the properties embedded in these configuration files could be bundled into one large configuration file, or called out explicitly using in-line “-D” properties.
However, assuming you don’t have a penchant for obscurity or verbosity, it’s probably prudent to break down the various configuration settings into configuration files that are organized to be reusable. Tuning Hadoop to run big MapReduce jobs can take some time to get optimal, and once you get there it’s a good idea to isolate the settings so they can be easily reused for different jobs. It’s also easier to test different combinations on the fly (e.g. trying JDBC and then trying OCI Direct Path) against the same table. You just need to swap jdbcOutput.xml for the analogous specification for OCI Direct Path.
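For example, the OCI Direct Path counterpart to jdbcOutput.xml would simply name a different output format class; a sketch might look like the following (the class name follows the OLH library naming pattern shown elsewhere in this post, so double check it against the OLH documentation for your release):
<configuration>
  <property>
    <name>mapreduce.outputformat.class</name>
    <value>oracle.hadoop.loader.lib.output.OCIOutputFormat</value>
  </property>
</configuration>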
Loader Maps
Loader maps are used to map fields read as input to columns living in the Oracle table that will be loaded. They also identify the table being loaded (e.g. “FIVDTI”) and the schema (“OLHP”) where the table resides.
First let’s look at the Oracle table we are loading:
SQL> describe fivdti
Name        Null?    Type
----------- -------- -------------
F1                   NUMBER
I2                   NUMBER(38)
V3                   VARCHAR2(50)
D4                   DATE
T5                   TIMESTAMP(6)
V6                   VARCHAR2(200)
I7                   NUMBER(38)
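For reference, a table with this shape could be created with DDL along these lines (a sketch only; the actual definition of FIVDTI, including any constraints or storage clauses, may differ):
-- simple un-partitioned target table matching the describe output above
CREATE TABLE fivdti
( f1 NUMBER,
  i2 NUMBER(38),
  v3 VARCHAR2(50),
  d4 DATE,
  t5 TIMESTAMP(6),
  v6 VARCHAR2(200),
  i7 NUMBER(38)
);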
Again, the loader map file is identified by the following OLH property:
-D oracle.hadoop.loader.loaderMapFile=file:/tmp/loaderMap_fivdti.xml
The file contains this specification:
<LOADER_MAP>
  <SCHEMA>OLHP</SCHEMA>
  <TABLE>FIVDTI</TABLE>
  <COLUMN field="F0">F1</COLUMN>
  <COLUMN field="F1">I2</COLUMN>
  <COLUMN field="F2">V3</COLUMN>
  <COLUMN field="F3" format="yyyy-MM-dd HH:mm:ss">D4</COLUMN>
  <COLUMN field="F4">T5</COLUMN>
  <COLUMN field="F5">V6</COLUMN>
  <COLUMN field="F6">I7</COLUMN>
</LOADER_MAP>
We need to map fields in the delimited text to the column names specified in the table. Field names in delimited text files can default to a simple naming convention, where fields are named by default “F0, F1, F2…”, reflecting the physical order in which they appear in a line of CSV text. These field names are then paired with column names. At the top of the specification is the schema name and table name.
What is critical about loader maps is ensuring that the text fields being loaded can be legally converted to the Oracle data type of the mapped columns, considering issues such as precision and scale that are asserted on the Oracle columns. Typically, for standard scalar types this is straightforward. However, DATE columns are fussier and require the user to describe an explicit format. (Strictly speaking, loader maps are not required in cases where column names reflect field names and the DATE data type is not used, but in this case we are using default field names that are different from the formal column names of the Oracle table.)
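To make the mapping concrete, a line of delimited text that satisfies this loader map might look something like the following (a made-up record: fields F0 through F6 appear in order, and the fourth field matches the “yyyy-MM-dd HH:mm:ss” format declared for D4; depending on your data, the TIMESTAMP column T5 may also need an explicit format attribute):
1,100,hello world,2013-05-30 09:18:00,2013-05-30 09:18:00.000000,a longer description string,7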
Configuration Specified in Files
Let’s look at the settings living in the configuration files specified above.
OLH Connection Properties
The oracle_connection.xml file in the example above contains the credentials needed to find the Oracle database and to connect to a schema. This information gets passed to the OLH MapReduce jobs.
The connection URL follows the “jdbc:oracle:thin” pattern. You need to know the host where the Oracle database is running (e.g. “myoraclehost”), the listening port (e.g. 1511), and the database service name (e.g. “dbm”). You also need to identify the Oracle user (e.g. “olhp”) and password (e.g. “welcome1”). (In a production environment you will want to use Oracle Wallet to avoid storing passwords in clear text, but we will save that issue for a later post.)
<configuration>
  <property>
    <name>oracle.hadoop.loader.connection.url</name>
    <value>jdbc:oracle:thin:@myoraclehost:1511/dbm</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.connection.user</name>
    <value>olhp</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.connection.password</name>
    <value>welcome1</value>
  </property>
</configuration>
OLH Input Format
The dtextInput.xml configuration file in the example above describes the physical characteristics of the rows of data that OLH will read. OLH provides off-the-shelf built-in input format implementations that cover many common formats you would expect to see living in Hadoop data files. These include delimited text and CSV, text with regular expressions, Hive formats, and Oracle NoSQL Database. In general, it’s much easier to generate your input data in a form that is compliant with the existing built-ins, although rolling your own input format class is a supported option.
In our running example we are using the default settings for delimited text, which is CSV. Additional properties for delimited text allow you to specify alternative field terminators and explicit field enclosers (useful when you want to use CSV as an input format but there are commas embedded in field values).
<configuration>
  <property>
    <name>mapreduce.inputformat.class</name>
    <value>oracle.hadoop.loader.lib.input.DelimitedTextInputFormat</value>
  </property>
</configuration>
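For instance, if your files were pipe-delimited with values enclosed in double quotes, the input configuration might look something like this (a sketch; the field terminator and encloser property names are as I recall them from the OLH documentation, so verify them for your release):
<configuration>
  <property>
    <name>mapreduce.inputformat.class</name>
    <value>oracle.hadoop.loader.lib.input.DelimitedTextInputFormat</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.input.fieldTerminator</name>
    <value>|</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.input.initialFieldEncloser</name>
    <value>"</value>
  </property>
  <property>
    <name>oracle.hadoop.loader.input.trailingFieldEncloser</name>
    <value>"</value>
  </property>
</configuration>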
OLH Output Format
The jdbcOutput.xml file is used to specify how we want to load Oracle. Output formats can be divided into two types: those that load the data directly into an Oracle database table, and those that store the data in a new set of files in HDFS that can be pulled into Oracle later.
For our running example, we are going to use JDBC.
<configuration>
  <property>
    <name>mapreduce.outputformat.class</name>
    <value>oracle.hadoop.loader.lib.output.JDBCOutputFormat</value>
  </property>
</configuration>
Running the OLH Job
When you kick off the OLH Job it will give you console output that looks something like this:
Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights
reserved.
13/05/30 09:18:00 INFO loader.OraLoader: Oracle Loader for Hadoop Release 2.1.0
- Production
Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
13/05/30 09:18:00 INFO loader.OraLoader: Built-Against:
hadoop-2.0.0-mr1-cdh4.1.2 hive-0.9.0-cdh4.1.2 avro-1.6.3 jackson-1.8.8
13/05/30 09:18:02 INFO loader.OraLoader: oracle.hadoop.loader.loadByPartition
is disabled because mapred.reduce.tasks=0
13/05/30 09:18:02 INFO loader.OraLoader: oracle.hadoop.loader.enableSorting
disabled: cannot sort by key when number of reducers is zero
13/05/30 09:18:02 INFO output.DBOutputFormat: Setting map tasks speculative
execution to false for : oracle.hadoop.loader.lib.output.JDBCOutputFormat
13/05/30 09:18:02 WARN loader.OraLoader: Sampler error: the number of reduce
tasks must be greater than one; the configured value is 0 . Job will continue
without sampled information.
13/05/30 09:18:02 INFO loader.OraLoader: Sampling time=0D:0h:0m:0s:14ms (14 ms)
13/05/30 09:18:02 INFO loader.OraLoader: Submitting OraLoader job OraLoader .
13/05/30 09:18:02 INFO input.FileInputFormat: Total input paths to process : 90
13/05/30 09:18:04 INFO loader.OraLoader: map 0% reduce 0%
13/05/30 09:18:19 INFO loader.OraLoader: map 1% reduce 0%
13/05/30 09:18:20 INFO loader.OraLoader: map 2% reduce 0%
13/05/30 09:18:22 INFO loader.OraLoader: map 3% reduce 0%
….
13/05/30 09:21:13 INFO loader.OraLoader: map 95% reduce 0%
13/05/30 09:21:16 INFO loader.OraLoader: map 96% reduce 0%
13/05/30 09:21:18 INFO loader.OraLoader: map 97% reduce 0%
13/05/30 09:21:20 INFO loader.OraLoader: map 98% reduce 0%
13/05/30 09:21:23 INFO loader.OraLoader: map 99% reduce 0%
13/05/30 09:21:31 INFO loader.OraLoader: map 100% reduce 0%
13/05/30 09:21:33 INFO loader.OraLoader: Job complete: OraLoader
(job_201305201106_0524)
Note that because we are loading a simple non-partitioned table, this is a map-only job where loading is done by the mappers and there is no reduce phase. The warning message at the outset of the job is about an OLH feature called the sampler. It is used when tables are partitioned to balance the work of the reducers doing a load. Since the target table is not partitioned, the warning about the sampler being disabled can be ignored.
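Once the job reports complete, a quick sanity check is to count the rows that landed in the target table from SQL*Plus (using the example schema from this post; the count you see will of course depend on your input data):
SQL> SELECT COUNT(*) FROM fivdti;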
Where to look if something went wrong
I’ve run OLH jobs interactively daily for more than a year. When something goes wrong, Hadoop console output will make it obvious, and typically gives a pretty good idea of what problem you are having. I also rely on Hadoop’s JobTracker and TaskTracker UIs, which allow you to drill down to a failed task and look at the output it produces: typically Java stack dumps and log messages that detail the problems it was having.
The results directory in HDFS that was specified in the “mapred.output.dir” setting in the OLH command contains lots of information about a job run. In the directory there will be a top level report called “oraloader-report.txt”. It offers a clean breakdown of time spent by the tasks running in Hadoop that were used to load the target table. It is probably the quickest way of looking at the workloads and determining whether they are unbalanced.
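You can pull that report straight out of HDFS with the hadoop shell (shown here with the example output directory used by the job above):
# print the per-task load report written by the OLH job
hadoop fs -cat /user/oracle/olh_test/results/fivdti/722/oraloader-report.txt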
Hadoop and Connection Resources
About the only additional issue that you need to be concerned about for this kind of job is the number of concurrent connections the Oracle database accepts. This becomes a problem when the number of Hadoop tasks that are loading a table concurrently exceeds the number of connections that Oracle will accept. If that happens, the loading tasks will fail with an ORA-00020 error.
You want to check the number of map and reduce slots that are configured for Hadoop. For map-only jobs, if the number of map slots is less than the number of connections Oracle accepts, there won’t be a problem. The same holds true for full MapReduce jobs if the number of reduce slots is less than the maximum number of Oracle connections accepted.
If this is not true you need to artificially restrict the concurrency of load tasks.
For map-only jobs (like the one illustrated above) this means you will need to restrict the number of map slots in the cluster available to the OLH job to something less than the number of connections Oracle allows.
For full MapReduce OLH jobs (which are more typical) loading occurs in the reduce phase, and this can be easily controlled in the OLH command by tweaking the “mapred.reduce.tasks” property mentioned above, and setting it to an appropriate number.
-D mapred.reduce.tasks=20
Summary
To summarize, a bare bones OLH configuration typically needs the following information:
- How to connect to Oracle
- How many reduce tasks to run
- The form of input (e.g. CSV) and output (e.g. JDBC)
- An HDFS directory containing input files
- An HDFS directory where OLH can write information about job results
- A loader map that tells which fields correspond to which columns in a table
That's it. Easy-peasy lemon squeezy. OLH commands pretty much look like the one used above. Creating configurations for other tables typically requires only creating a new loader map specification. All the other configuration files can be reused.
The next post will discuss using OLH with OCI Direct Load, which is what you will want to use when loading big tables that are sorted by key or are partitioned. We will spend a lot of time discussing performance issues such as load balancing, using the OLH sampler, and using the SDP transport protocol.