
eSTEP TechCast - September 2013 Material available

Dear Partners,

We would like to extend our sincere thanks to those of you who attended our TechCast on "Oracle Virtual Compute Appliance".

The materials (presentation, replay) from the TechCast are now available to all of you via our eSTEP portal. You will need to provide your email address and the PIN below to access the downloads. The link to the portal is shown below.

URL: http://launch.oracle.com/
PIN: eSTEP_2011

The material can be found under the tab eSTEP TechCast.

Feel free to also explore the other delivered TechCasts and more useful information under the Download and Links tab. Any feedback is appreciated and helps us improve the service and information we deliver.

Thanks and best regards,

Partner HW Enablement EMEA

What WebCenter info will you gain this year at OpenWorld?


Oracle OpenWorld 2013 is all about learning new things and gaining new insights that will help you and your business be more effective and successful. For many organizations, getting a handle on how information such as forms, faxes, and other documents is processed is a key focus area, because there is a large and fast return-on-investment potential (and who doesn't want to see that!).

That is where WebCenter Content and Imaging come into play. It might be true that forms processing doesn't have that much sizzle, but as we all know, the CIO and CFO care a lot more about saving money and cutting operational costs than about the latest sexy toy. Forms processing is indeed a bit of an "unsung hero" but probably deserves another look by most companies.

Before you show up at OpenWorld this year, check out this blog post by our friends Dwayne Parkinson and Dan Stitely over at Team Informatics on the issue and then stop by and talk to them directly in the Oracle Demogrounds this year. You and your boss might be very glad you did!

http://blog.teaminformatics.com/2013/09/03/forms-the-unsung-hero-of-webcenter-content/

And for a full list of Oracle WebCenter related sessions, labs, and events, be sure to visit the Focus On WebCenter page.


OBIEE 11.1.1.6.12 is the Final Regular Bundle Patch


(in via Ian)
In addition to the information we shared at the beginning of the week about OBIEE 11.1.1.6.12 being available, we would like to inform you that 11.1.1.6.12 is the Final Regular Bundle Patch and that Error Correction Support in the product life cycle for OBIEE 11.1.1.6.x ends April 2014.

With the release of 11.1.1.6.12, the bi-monthly cadence of bundle patches for the OBIEE 11.1.1.6.x series will be suspended. We will continue to support the product and issue additional patches and updates as needed.

Please be aware that the Error Correction Support product life cycle for Oracle Business Intelligence Enterprise Edition and Oracle Business Intelligence Publisher 11.1.1.6  ends April 2014.

We are encouraging you to move to OBIEE 11.1.1.7.


For more details on upgrading, please see the following documents on
My Oracle Support (https://support.oracle.com):

Oracle OpenWorld 13: Learning from the success of others


September is upon us and that can only mean one thing - OpenWorld is just around the corner!  One of the best things about the show for WebCenter customers is the chance to learn from other customers about what worked, what did not and how to get the most value from your information and the infrastructure that supports it all.

All of us want to help our organizations save money and be more efficient. Logitech is one company that will be speaking this year about how they did just that using WebCenter Imaging and related technologies. Logitech is a global provider of personal computer accessories. Their Accounts Payable department was manually processing over 90,000 AP invoices, 100,000 employee expense reports, and 12,000 smart claims per year. With this kind of load, they were understandably having difficulty managing the increased workload of manual data entry and complex validations. Physically entering the data into Oracle® Financials led to a large number of data inaccuracies and exceptions, creating a long cycle from purchase order to payment disbursement.

With support from Keste, a leading WebCenter business partner, Logitech was able to save many thousands of dollars a year and radically improve business operating efficiency.  How did they do it?  How much did they save?  There is only one way to find out... join us on Monday morning, September 23rd at 10:45 am in the Moscone West 2014 room to hear all the details!

To see the full list of sessions dedicated to WebCenter, check out the online guide at https://oracleus.activeevents.com/2013/connect/focusOnDoc.do?focusID=22574.

We hope to see you in San Francisco later this month!

New Java Champions: Tasha Carl and Gerrit Grunwald


Two new Java Champions have been selected for their technical knowledge, leadership, inspiration, and tireless work for the Java community: Tasha Carl and Gerrit Grunwald. Congratulations!


Tasha Carl (July '13, Belgium)

A self-described "Byte Alchemist," Tasha has been a freelance Java/Web developer and software architect since 1997. She told us, "I am doing everything between developer and information systems architect, with my main emphasis on Java EE." She was the founder of the Brussels Java User Group and is now co-JUG leader. She is a member of the Devoxx team and an international speaker at conferences and user groups. She is involved in Devoxx4Kids and is interested in training the next generation of Java developers. She wrote the Sagan-1 Robot Simulator, designed for kids to program a robot using simple commands. Learn about these and many more projects Tasha is involved in at her web page. Follow Tasha @imifos.



Gerrit Grunwald (July '13, Germany)

Gerrit is a software engineer with more than 8 years of experience in software development. He has been involved in Java desktop application development and custom controls development. His current interests include JavaFX, HTML5, and Swing, especially the development of custom controls in these technologies. He is also interested in Java-driven embedded technologies like the Raspberry Pi and BeagleBoard. He is a true believer in open source and has participated in popular projects like JFXtras as well as his own projects (SteelSeries Swing, SteelSeries Canvas, Enzo, FXGConverter). He is an active member of the Java community, where he founded and leads the Java User Group Münster (Germany) and co-leads the JavaFX community. He is an international speaker at conferences and user groups. Read Gerrit's blog and follow him @hansolo_.

The Java Champions are an exclusive group of passionate Java technology and community leaders who are community-nominated and selected under a project sponsored by Oracle. Java Champions get the opportunity to provide feedback, ideas, and direction that will help Oracle grow the Java Platform. Nominees are named and selected through a peer review process. (Current Oracle employees are not eligible.) Learn more at the Java Champions page on Java.net.

Join Webcast: Aberdeen Group & Oracle Sponsor "Are Your Fragmented Systems Keeping You up at Night?"


The average organization has more than 7 systems that must be integrated with their core accounting application.

Join experts from Aberdeen Research and Oracle to learn the latest facts about accounting integration and reporting. You’ll also see how Fusion Accounting Hub can help you with:

  • Simple, accurate, and centralized accounting from a single source of truth 
  • Robust rules engine built for Finance, not IT 
  • Complete audit and drill-down capabilities 
  • Real-time secure access to reports and live financial data 
  • Embedded, multi-dimensional reporting

Don’t let fragmented systems hold your finance department back. With many CFOs now playing a crucial role in driving business transformation, Finance and IT have the power to partner together to make things right.

Date: Thursday, September 12, 2013
Time: 10:00 am PDT

Register today

Misys Kondor+ runs best on SPARC T5


Misys is a leading financial software vendor, providing the broadest portfolio of banking, treasury, trading, and risk solutions available on the market. At ISV Engineering, we have a long-standing collaboration with the Kondor+ product line, which came from the Turaz acquisition (formerly part of Thomson Reuters) and is now part of the Misys Treasury & Capital Markets (TCM) portfolio. Jean-Marc Jacquot, Senior Technical Consultant at Misys TCM, was recently interviewed by ITplace.TV (in French) about a recent IT redesign of the Kondor+ installed base at a major French financial institution.

The customer was running various releases of Kondor+ over a wide range of SPARC machines, from Sun V240 to M4000. The IT project aimed at rationalizing these deployments for better ROI, SLAs, and raw performance, to meet the new system requirements of the latest Kondor+ release. In the short list, SPARC & Solaris beat x86 & Linux on performance, virtualization, and price. The customer ordered its first SPARC T5-2 systems this year.

Re. performance: in real-life benchmarking performed at the customer site, SPARC T4 was faster than SPARC M4000 and HP G8 machines. Re. virtualization: the use of Oracle VM Server for SPARC (a.k.a. Logical Domains or LDoms) allows the consolidation of machines with different system times, the seasonal right-sizing of the "end-of-year" machine, and the mixing of Solaris 11 and Solaris 10 to meet the different OS requirements of different Kondor+ releases. Re. price: Jean-Marc pointed out that Solaris includes for free virtualization technologies (LDoms, Zones) that come for a hefty fee on x86 platforms.

This proof point is particularly interesting because it shows the superiority of SPARC in a real-life deployment. SPARC is a balanced design, not in the race for absolute single-thread performance, price, or commoditization, but built to perform extremely well for enterprise workloads and service levels. Notably, Jean-Marc had an anecdote on the stability of Solaris on SPARC: he had just performed the first reboot of a system that had been up for the past 2606 days. That's over 7 years!

PS: Jean-Marc Jacquot will be speaking live, along with other customer testimonials, at the upcoming Oracle SPARC Showcase in Paris, September 17th.


Using Obfuscation with Java ME SDK 3.3


Newsflash 757208

Obfuscation is a really helpful mechanism to reduce the size of your Java ME Embedded application code to a minimum.

When developing Java embedded applications using the Java ME SDK 3.3 with NetBeans, you would normally be able to easily install the ProGuard obfuscator via the NetBeans ProGuard plugin and then set it to automatically obfuscate every project build.

However, for NetBeans 7.3 a licensing incompatibility prevents the ProGuard plugin from being available directly on the NetBeans 7.3 update center. This issue has been fixed for the upcoming NetBeans 7.4.

If you want to use ProGuard with NetBeans 7.3 there is an easy workaround described on the NetBeans bug tracker: https://netbeans.org/bugzilla/show_bug.cgi?id=227701. Scroll down to the end of the thread to see:

——————————————–

For now it is possible to use following workaround for proguard:

1. Download proguard.jar from http://sourceforge.net/projects/proguard/files/proguard/
2. Insert following line in {YOUR_PROJECT_DIR}/nbproject/private/private.properties OR {NB_USERDIR}/build.properties (no need to insert in both):

obfuscator.classpath={PATH_TO_proguard.jar}

(e.g. obfuscator.classpath=C:\\JavaME\\Proguard\\proguard.jar)

——————————————-

Hope this helps. Cheers,

– Terrence


Filed under: Embedded, Mobile & Embedded Tagged: Java ME Embedded, Java ME SDK, NetBeans, obfuscation, ProGuard

EMEA Partner Webcast: Oracle Exadata and the impact of Database 12c - Wed. 11th September


On July 1st, Oracle launched Oracle Database 12c, 'the first database designed for the Cloud'. One of the key features introduced with this new release is the multitenant architecture, which simplifies the process of consolidating databases onto the cloud, enabling customers to manage many databases as one - without changing their applications.

As you all know, Oracle Exadata is engineered to be the highest-performance and most available platform for running the Oracle Database. The introduction of Database 12c continues to support this strategy, enabling the creation of a DB Cloud or a sophisticated Database-as-a-Service (DBaaS).

  • What is the impact of Oracle Database 12c on your current Exadata customers? 
  • How and when should they start upgrading to 12c? 
  • How do Oracle Database 12c and Exadata benefit from each other?

These are key questions for Exadata partners! 

The Oracle EMEA Engineered Systems team is running this webcast specifically for Oracle Exadata partners on Wednesday, Sep. 11th at 15:00 CET / 2 pm UK. Join us to discuss how Oracle Database 12c and Exadata benefit from each other and the impact that Database 12c will have on the Exadata business.

Webcast Joining details:

To Join the webcast

For audio reception please use the following details:

Country specific Dial-in Numbers
Session/Conference ID: 4070776
Password: 333111

Webcast Replay:

This webcast will be recorded and the replay posted on the Exadata Partner Community Collaborative Workspace (you need to register as a community member to have access to the workspace).

For any information please contact:

  • Juan Salvador Rios - Business Analyst Consultant Engineered Systems EMEA
  • Roxana Dragus - Sales Enablement Coordinator & Communication Support Engineered Systems EMEA

Stay tuned on the latest Exadata Win Stories, References and much more by registering as a member of the Exadata EMEA Partner Community or visiting our blog.

OpenWorld Attendees: Attend Finance General Session - Empowering Modern Finance


Attending OpenWorld 2013?  Don’t forget to add the Finance General Session to your schedule! Oracle teams up with Deloitte, IDC, and Curse, Inc. to share how disruptive technologies are empowering the modern finance organization. Empowering Modern Finance. (Session ID: GEN8986)

Add it to your schedule today!

GlassFish All the Way!


Not to be missed is the GlassFish community event, The Foundation for Opportunity, on Sunday, Sept. 22nd from 9:15 am to 11:15 am. There are many reasons to attend this special event. On the short list are:

- Learn about Java EE 7/GlassFish 4 successes and release roadmap 
- Meet the Oracle executives who influence the direction of GlassFish and Java EE 
- Get your questions answered by key advocates, architects, Java EE spec leads, and product managers.
- Hear success stories from engineers at several companies who implemented Java EE 7 and GlassFish 4 
Space is limited, so pre-register now for this event via the Schedule Builder.  

If this is too short and you want more, attend the GlassFish & Friends party, enroll for GlassFish sessions, hands-on labs and BOFs, and visit the DEMOgrounds and the OTN Expert Drop-in in the exhibit hall.

Not registered for JavaOne yet? Hurry, you can still save US$200 off the onsite price!

Check out the Oracle press release: JavaOne San Francisco 2013 Schedule and Keynote Lineup, Featuring IBM and Freescale as Diamond Sponsors

How to Load Oracle Tables From Hadoop Tutorial (Part 4 - OSCH Hello World)


Oracle OSCH: A “World Hello” Example


In this post we will walk through Alice in Wonderland's looking glass and do a “Hello World” example for Oracle SQL Connector for HDFS (i.e. OSCH). The above title, “World Hello”, is a play on words meant to drive home the relationship between the two loading models: OLH and OSCH. Both can be used to load an Oracle table, but while OLH runs on Hadoop and uses Hadoop's MapReduce engine to write data into Oracle tables, OSCH uses the SQL engine running on Oracle to read data living in HDFS files. OLH pushes data to Oracle on demand from Hadoop. OSCH pulls data from HDFS on demand from Oracle.

Below we will first review the OSCH execution model. We will then discuss configuration. OSCH has a few more moving parts to worry about than OLH, which invariably creates hiccups, but if you follow my instructions, in the following order, these should be minimized.

  • Perform one-time configuration steps
  • Create an Oracle external table that works against local test data
  • Load the same local test data into an HDFS directory
  • Recreate the external table to reference a Hadoop cluster
  • Use OSCH External Table publishing tool to point the external table to the test data location in HDFS

The OSCH Execution Model

OSCH was explained in the first lesson of the tutorial, but since we are revisiting it in depth, let’s review how it works.

OSCH is simply the plumbing that lets Oracle external tables access HDFS content. Oracle external tables are a well-established mechanism for reading content that is not populated or managed by Oracle. For conventional Oracle external tables, the content lives as files visible to the OS where the Oracle system is running. These would be either local files or shared network files (e.g. NFS). When you create an Oracle external table you point it to a set of files that constitute data that can be rendered as SQL tables. Oracle external table definitions call these “location” files.

Before OSCH was invented, external tables introduced an option called the PREPROCESSOR directive. Originally it was an option that allowed a user to preprocess a single location file before the content was streamed into Oracle. For instance, if your contents were zip files, the PREPROCESSOR option could specify that “unzip -p” is to be called with each location file, which would unzip the files before passing the unzipped content to Oracle. The output of an executable specified in the PREPROCESSOR directive is always stdout (hence the “-p” option for the unzip call). A PREPROCESSOR executable is a black box to Oracle. All Oracle knows is that when it launches it and feeds it a location file path as an argument, the executable will feed back a stream of bits that represents the data of an external table.
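
For reference, a conventional (pre-OSCH) external table using the PREPROCESSOR option might look like the minimal sketch below. The table name, columns, directory objects, and the wrapper script unzip_p.sh (a one-line shell script that would run unzip -p "$1") are all hypothetical, purely to illustrate the directive:

-- Hypothetical sketch: DATA_DIR maps to the directory holding the zip files,
-- EXEC_DIR maps to the directory holding the unzip_p.sh wrapper script.
CREATE TABLE sales_zip_exttab
(
  "C1" VARCHAR2(4000),
  "C2" VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
(
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY data_dir
   ACCESS PARAMETERS
   (
     RECORDS DELIMITED BY NEWLINE
     PREPROCESSOR exec_dir:'unzip_p.sh'
     FIELDS TERMINATED BY ','
     MISSING FIELD VALUES ARE NULL
   )
   LOCATION ('sales1.zip', 'sales2.zip')
) REJECT LIMIT UNLIMITED;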

OSCH repurposed the PREPROCESSOR directive to provide access to HDFS. Instead of calling something like “unzip” it calls an OSCH tool that streams HDFS file content from Hadoop. The files it reads from HDFS are specified as OSCH metadata living locally in the external table “location” files. (These metadata files are created using OSCH's publishing tool.) In other words, for OSCH, location files do not contain HDFS content, but contain references to HDFS files living in a Hadoop cluster. The OSCH-supplied preprocessor expects to find OSCH metadata in these files.

All this is encapsulated within the Oracle external table definition. The preprocessor logic gets invoked every time one issues a SELECT statement in SQL against the external table. At run time, the OSCH preprocessor is invoked, opens a “location” file, and parses the metadata; it then generates a list of files in HDFS that it will open, one at a time, and read, piping the content into Oracle. (The metadata also includes optional CODEC directives, so if the HDFS content needs to be decompressed before being fed to Oracle, the OSCH preprocessor can handle that too.)

BTW, if you just got nervous about the performance implications of the “one at a time” phrase above, don't be. This model is massively scalable.

One Time Configuration Steps


Understand the Requirements for Installing and Configuring OSCH

The things you will need for installing and configuring OSCH include:
  • Access to the system where Oracle is running and to the OS account where Oracle is running (typically the Unix account "oracle")
  • Access to SQL*Plus and permission to connect as DBA
  • Ability to create an Oracle user (e.g. "oschuser") with enough permission to create an external table and directory objects
  • The OSCH kit 
  • The Hadoop client kit for the Hadoop cluster you want to access
  • The Hadoop client configuration for HDFS access
  • Permission to read, write, and delete files in HDFS as OS user "oracle" (i.e. "oracle" is a Hadoop user)
The formal documentation to install OSCH is here. Below I outline a process that has worked best for me.

Install the Bits

Log on to the system where Oracle is running as “oracle”. Carve out an independent directory structure (e.g. /home/oracle/osch) outside of the directory structure of ORACLE_HOME. Install the OSCH kit (called “orahdfs-2.2.0”) and the Hadoop client kit (“hadoop-2.0.0”); I typically make these peers. Both kits need to be unzipped. Hadoop client kits typically require some building to create a few native libraries, typically related to CODECs. You will also unzip the Hadoop configuration files (“hadoop-conf”). Finally, you want to create a default directory for location files that will be referenced by external tables. This is the “exttab” directory below. This directory needs read and write privileges set for “oracle”.

At this point you should have a directory structure that looks something like this:

/home/oracle/osch/orahdfs-2.2.0
/home/oracle/osch/hadoop-2.0.0
/home/oracle/osch/hadoop-conf
/home/oracle/osch/exttab

Configure HDFS

Follow the standard Hadoop client instructions that allow you to access the Hadoop cluster via HDFS from the Oracle system, logged in as “oracle”. Typically this means calling Hadoop pointing to the hadoop-conf files you copied over.

With Hadoop you will want to be able to create, read, and write files under the HDFS /user/oracle directory. For the moment, carve out an area where we will put test data to read from HDFS using OSCH.

hadoop --config /home/oracle/osch/hadoop-conf fs -mkdir /user/oracle/osch/exttab

Configure OSCH

In the OSCH kit you will need to configure the preprocessor that is used to access the Hadoop cluster and read HDFS files. It is in the OSCH kit under the bin directory and is called hdfs_stream. This is a bash script which invokes an OSCH executable under the covers. You need to edit the script and provide a definition for OSCH_HOME. You will also need to modify and export the PATH and JAVA_LIBRARY_PATH definitions to pick up the Hadoop client binaries.

e.g.
OSCH_HOME=/home/oracle/osch/orahdfs-2.2.0
export PATH=/home/oracle/osch/hadoop-2.0.0/bin:/usr/bin:/bin
export JAVA_LIBRARY_PATH=/home/oracle/osch/hadoop-2.0.0/lib/native

Optionally, hdfs_stream allows you to specify where external table log files go. By default they go into the log directory living in the OSCH installation (e.g. /home/oracle/osch/orahdfs-2.2.0/log).

When you've completed this step, interactively invoke hdfs_stream with a single bogus argument, “foo”, again on the Oracle system logged in as “oracle”.

e.g.

./hdfs_stream foo
OSCH: Error reading or parsing location file foo

This might seem lame, but it is a good sanity check that ensures Oracle can execute the script while processing an external table. If you get a Java stack trace rather than the above error message, the paths you defined in hdfs_stream are probably broken and need to be fixed.

Configure Oracle for OSCH

In this step you need to first connect to Oracle as SYSDBA and create an Oracle DIRECTORY object that points to the file location where hdfs_stream exists. You create one of these to be shared by any Oracle users running OSCH to connect to a particular Hadoop cluster.

SQLPLUS> CREATE DIRECTORY osch_bin_path AS '/home/oracle/osch/orahdfs-2.2.0/bin';

Assuming you’ve created a vanilla Oracle user (e.g. "oschuser") which will own the external table, you want to grant execute privileges on the osch_bin_path directory.

SQLPLUS> GRANT EXECUTE ON DIRECTORY osch_bin_path TO oschuser;

Now reconnect to Oracle as “oschuser” and create an additional directory to point to the directory where location files live.

SQLPLUS> CREATE DIRECTORY exttab_default_directory AS '/home/oracle/osch/exttab';

At this point you have configured OSCH to run against a Hadoop cluster. Now you move on to creating external tables that map to content living in HDFS.

Create an Oracle External Table that works against Local Test Data

You want to create an external table definition that mirrors the table you want to load (e.g. reflecting the same column names and data types.)

Even the simplest local external table definitions take some time to get right, and 99% of the external table verbiage needed to get it working against HDFS is identical to getting it to work against local files, so it makes sense to get a vanilla local external table working before trying it against HDFS. 

What you want to do is take a small, representative set of the sample data that you want to access in HDFS and consolidate it into a single file local to the Oracle system, readable by the “oracle” user. Call it testdata.txt and put it in the /home/oracle/osch/exttab directory, which is our directory for location files. I would recommend starting with a simple text CSV file.
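
If you do not have data handy, a hypothetical testdata.txt with seven comma-separated fields (matching the column count of 7 passed to the tool below) could look something like this:

1001,ACME,2013-09-01,42,19.99,US,shipped
1002,Initech,2013-09-02,7,5.25,DE,pending
1003,Globex,2013-09-03,13,102.50,FR,shipped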

To make things easier we will use the OSCH External Table tool to create an external table definition that you can use as a template to tweak to conform to your data.  This tool can be run from any system that can connect to the Oracle database, but in this case we are going to stay put and run it locally where Oracle is running as the OS "oracle" user.

The tool requires two environment settings to run: JAVA_HOME and CLASSPATH, which needs to reference the tool's jar files:

export JAVA_HOME=/usr/lib/jdk
export CLASSPATH=/home/oracle/osch/orahdfs-2.2.0/jlib/*

For our running example it would look like this:

/home/oracle/osch/hadoop-2.0.0/bin/hadoop jar
  /home/oracle/osch/orahdfs-2.2.0/jlib/orahdfs.jar oracle.hadoop.exttab.ExternalTable
  -D oracle.hadoop.connection.url=jdbc:oracle:thin:@localhost/dbm
  -D oracle.hadoop.connection.user=oschuser
  -D oracle.hadoop.exttab.tableName=helloworld_exttab
  -D oracle.hadoop.exttab.dataPaths=/user/oracle/osch/exttab
  -D oracle.hadoop.exttab.defaultDirectory=exttab_default_directory
  -D oracle.hadoop.exttab.locationFileCount=1
  -D oracle.hadoop.exttab.columnCount=7
  -createTable --noexecute

Let’s decompose this command.

The following invokes the OSCH External Table tool by pointing to the OSCH jar file (“orahdfs.jar”):

/home/oracle/osch/hadoop-2.0.0/bin/hadoop jar
/home/oracle/osch/orahdfs-2.2.0/jlib/orahdfs.jar oracle.hadoop.exttab.ExternalTable

These two lines connect to the Oracle database service ("dbm") as Oracle user “oschuser”:

  -D oracle.hadoop.connection.url=jdbc:oracle:thin:@localhost/dbm
  -D oracle.hadoop.connection.user=oschuser

This identifies the name of the external table we want to create:

-D oracle.hadoop.exttab.tableName=helloworld_exttab

This tells the tool the directory in HDFS where data lives:

-D oracle.hadoop.exttab.dataPaths=/user/oracle/osch/exttab

This indicates where the location files will live (using the name of the Oracle directory created above that maps to "/home/oracle/osch/exttab"):

-D oracle.hadoop.exttab.defaultDirectory=exttab_default_directory

This indicates how many location files we generate. For now, since we are only loading one HDFS file, we need only one location file to reference it, so we feed it a value of 1:

-D oracle.hadoop.exttab.locationFileCount=1

 This indicates how many columns are in the table:

-D oracle.hadoop.exttab.columnCount=7

Finally we tell the tool to just pretend to create an external table.  This will generate an external table definition and output it to the console:

-createTable --noexecute

The generated console output should look something like this:

Oracle SQL Connector for HDFS Release 2.2.0 - Production

Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.

The create table command was not executed.

The following table would be created.

CREATE TABLE "OSCHUSER"."HELLOWORLD_EXTTAB"
(
 "C1"                             VARCHAR2(4000),
 "C2"                             VARCHAR2(4000),
 "C3"                             VARCHAR2(4000),
 "C4"                             VARCHAR2(4000),
 "C5"                             VARCHAR2(4000),
 "C6"                             VARCHAR2(4000),
 "C7"                             VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
(
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY "EXTTAB_DEFAULT_DIRECTORY"
   ACCESS PARAMETERS
   (
     RECORDS DELIMITED BY 0X'0A'
     CHARACTERSET AL32UTF8
     STRING SIZES ARE IN CHARACTERS
     PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
     FIELDS TERMINATED BY 0X'2C'
     MISSING FIELD VALUES ARE NULL
     (
       "C1" CHAR(4000),
       "C2" CHAR(4000),
       "C3" CHAR(4000),
       "C4" CHAR(4000),
       "C5" CHAR(4000),
       "C6" CHAR(4000),
       "C7" CHAR(4000)
     )
   )
   LOCATION
   (
     'osch-20130904094340-966-1'
   )
) PARALLEL REJECT LIMIT UNLIMITED;

Cut and paste the console output into an editor (or cut and paste the text above), temporarily remove the PREPROCESSOR directive, and rename the location file (i.e. "osch-20130904094340-966-1") to "testdata.txt" (the name of your data file). You then want to twiddle with the external table verbiage and change the dummy column names (e.g. "C1"), data types (e.g. "VARCHAR2(4000)"), and field definitions (e.g. "CHAR(4000)") to reflect the table you want to load. (The details for creating Oracle external tables are explained here.) Note that the rest of the verbiage (e.g. "RECORDS DELIMITED BY") is used to support standard CSV text files, so if the data in your test file is correctly formed as CSV input, then this stuff should be left as is.
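
To make that concrete, here is roughly what the edited definition might look like for the hypothetical seven-field testdata.txt shown earlier, with illustrative column names and data types substituted, the PREPROCESSOR directive temporarily removed for the local test, and the location file renamed. Treat this as a sketch, not canonical DDL:

CREATE TABLE "OSCHUSER"."HELLOWORLD_EXTTAB"
(
 "ORDER_ID"    NUMBER,
 "CUSTOMER"    VARCHAR2(100),
 "ORDER_DATE"  DATE,
 "QUANTITY"    NUMBER,
 "PRICE"       NUMBER,
 "COUNTRY"     VARCHAR2(2),
 "STATUS"      VARCHAR2(20)
)
ORGANIZATION EXTERNAL
(
   TYPE ORACLE_LOADER
   DEFAULT DIRECTORY "EXTTAB_DEFAULT_DIRECTORY"
   ACCESS PARAMETERS
   (
     RECORDS DELIMITED BY 0X'0A'
     CHARACTERSET AL32UTF8
     STRING SIZES ARE IN CHARACTERS
     FIELDS TERMINATED BY 0X'2C'
     MISSING FIELD VALUES ARE NULL
     (
       "ORDER_ID"   CHAR(20),
       "CUSTOMER"   CHAR(100),
       "ORDER_DATE" CHAR(10) DATE_FORMAT DATE MASK "YYYY-MM-DD",
       "QUANTITY"   CHAR(20),
       "PRICE"      CHAR(20),
       "COUNTRY"    CHAR(2),
       "STATUS"     CHAR(20)
     )
   )
   LOCATION
   (
     'testdata.txt'
   )
) PARALLEL REJECT LIMIT UNLIMITED;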

When you think your external table definition is correct, create the table in Oracle and  try accessing the data from SQL:

SQLPLUS>SELECT * FROM helloworld_exttab;

After you've got a SQL SELECT statement working, it's time to load the same data file into HDFS and recreate the external table for remote access.

Load an HDFS directory with Local Test Data File

Using your Hadoop client on your Oracle system, upload the test file you got working into the HDFS data directory you created earlier.

hadoop fs -put /home/oracle/osch/exttab/testdata.txt /user/oracle/osch/exttab

Recreate the External Table Using the PREPROCESSOR Directive

Now drop the local external table, and recreate it using the identical syntax that worked above, but putting back the PREPROCESSOR directive:

PREPROCESSOR"OSCH_BIN_PATH":hdfs_stream

This will redirect processing to HDFS files living in your Hadoop cluster. Don't try doing a SELECT statement yet. The last step is to recreate the location files so they point to content living in HDFS.


Using the OSCH Publishing Tool to point to test data living in HDFS

By adding the PREPROCESSOR directive, you now have an external table that is bound to data living in a Hadoop cluster. You now want to point the external table to data living somewhere in HDFS. For our case that is the data living in the HDFS directory we created and populated above: “/user/oracle/osch/exttab”.

First delete the local data file, testdata.txt, living under /home/oracle/osch/exttab. That way we know that if the external table works, it's not fooling us by simply accessing local data.

Then rerun the External Table tool with the "publish" command:

/home/oracle/osch/hadoop-2.0.0/bin/hadoop jar
  /home/oracle/osch/orahdfs-2.2.0/jlib/orahdfs.jar oracle.hadoop.exttab.ExternalTable
  -D oracle.hadoop.connection.url=jdbc:oracle:thin:@localhost/dbm
  -D oracle.hadoop.connection.user=oschuser
  -D oracle.hadoop.exttab.tableName=helloworld_exttab
  -D oracle.hadoop.exttab.dataPaths=/user/oracle/osch/exttab
  -D oracle.hadoop.exttab.locationFileCount=1
  -publish

This time the tool executes the "publish" command, which connects to the Oracle database, prompts for "oschuser"'s password, reads the files living in HDFS under “/user/oracle/osch/exttab”, and creates one location file that references our singleton data file "testdata.txt" that we moved into HDFS. If you look at your local directory, “/home/oracle/osch/exttab”, you will see that it has been populated with a machine-generated file (e.g. “osch-20130821102309-6509-1”) which contains XML verbiage referring to testdata.txt in HDFS.

Test an Oracle External Table that works against HDFS Data

Now you connect to Oracle as “oschuser” and issue the same SQL query you did when the data was local. You should get identical results as you did earlier (the order of the rows might be different).

SQLPLUS>SELECT * FROM helloworld_exttab;

At this point you have SQL access to content living in HDFS.   To use it to load an Oracle table (e.g. "helloworld") you need to use either an INSERT statement:

SQLPLUS> INSERT INTO helloworld SELECT * FROM helloworld_exttab;

or a CREATE TABLE statement:

SQLPLUS>CREATE TABLE helloworld as SELECT * from helloworld_exttab;

What Did We Just Do?


Aside from doing the one-time initialization steps, what we did was create an external table and test it locally to see if it would work with a particular data file format; then we recreated the external table definition, adding the PREPROCESSOR directive to point to HDFS data living in a Hadoop cluster. We then used the OSCH External Table tool to point the external table to a directory in HDFS with data files having the same format.

The bindings here are simple to understand:

  • The PREPROCESSOR directive references hdfs_stream which binds external tables to a particular Hadoop cluster
  • The External Table publishing tool binds an external table to a set of data files living in that cluster

If you want to access multiple Hadoop clusters, simply create a copy of “hdfs_stream” with a new name (e.g. “hdfs_stream_2”), configure it to work against the other cluster, and use the PREPROCESSOR directive to call “hdfs_stream_2”.
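
Assuming the copy also lives in the directory mapped by OSCH_BIN_PATH, the only line that changes in the second external table's access parameters is the preprocessor directive:

PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream_2'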

If you want two external tables to point to two different data sources of the same format, then create a new external table with the same attributes, and use the OSCH External Table tool to point it to another directory in HDFS.

One question that frequently comes up has to do with using OSCH for SQL access.  Specifically, since external tables map HDFS data, are they useful for doing general purpose Oracle SQL queries against HDFS data, not just for loading an Oracle table?

If the data set is very large and you intend to run multiple SQL queries, then you want to load it into an Oracle table and run your queries against that. The reason has to do with the “black box” design of external tables. The storage is not controlled by Oracle, so there are no indices and no internal structures that Oracle would need to make access by SQL efficient. SELECT statements against any external table are a full table scan, something the Oracle SQL optimizer tries to avoid because it is resource expensive.

One last point: always use the external table definition to facilitate the conversion of text to Oracle native data types (e.g. NUMBER, INTEGER, TIMESTAMP, DATE). Do not rely on CAST and other functions (e.g. to_date) in SQL; the data type conversion code in external tables is much more efficient.
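
As a hypothetical illustration, reusing the ORDER_DATE column from the sketch above: with the conversion declared in the external table (a DATE column plus a DATE_FORMAT mask in the field list), queries stay clean, whereas an all-VARCHAR2 definition forces a conversion into every query:

-- Conversion done once, at read time, by the external table definition
SELECT order_id, order_date
FROM helloworld_exttab
WHERE order_date >= DATE '2013-09-01';

-- What you would be forced to write if every column were left as VARCHAR2
SELECT order_id, TO_DATE(order_date, 'YYYY-MM-DD')
FROM helloworld_exttab;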

Next Steps

This post was about getting a toy example working with a single data file. The next post will focus on how to tune OSCH for large data sets living in HDFS and exploit Oracle's parallel query infrastructure for high-performance loads. We will also discuss the pros and cons of using OSCH versus OLH.

Video: Steffo Weber on Identity Management and Oracle API Gateway


Identity management architect Steffo Weber's OTN article Protecting IDPs from Malformed SAML Requests offers a concise yet detailed technical dive into how you can use Oracle API Gateway as an XML firewall to protect Oracle Identity Federation from receiving malformed SAML requests. Steffo shares a bit of background on the article in this short video interview.

You'll find more interviews with members of the OTN architect community here: http://www.youtube.com/playlist?list=PLTwx5YGQHdjlvZrZ47I0IrfRj8V7Cdxhe

Connecting Ops Center to an Enterprise Management Framework


I got a question about making Ops Center work with other tools:

"My environment uses CA Unicenter. Is there a way for me to forward alerts from Ops Center over to CA Unicenter?"

There are two ways to do this.

One way is to use Halcyon's Neuron Integration, which can take alerts from Ops Center and pass them on to CA Unicenter or other Enterprise Management Frameworks.

Another way is to use Oracle Enterprise Manager Cloud Control to take data from Ops Center and then send it on to CA Service Desk.

1. Make sure that any asset that you want to pass along alerts for is Agent-managed in Ops Center.

2. Use the System Monitoring Plug-in to connect Ops Center to Cloud Control.

3. Use the CA Service Desk Connector to connect Cloud Control to CA Unicenter.

Cloud Control has connectors for a variety of Enterprise Management Frameworks, including BMC Service Desk, IBM Tivoli, and HP Service Manager.


Oracle Cloud Applications Day 2013 - all about the Oracle Cloud Applications!


SOCIAL. MOBILE. CLOUD. Three words at the center of the revolution in the way business is done today.

Join Oracle Cloud Applications Day 2013 and discover how your company can become a protagonist of this change and transform its business.

We look forward to seeing you on October 28th in Milan, at the Il Sole24Ore headquarters, with:

* customer success stories compared side by side
* in-depth parallel sessions
* face-to-face meetings
* a partner expo area
* hands-on demos

For more information, visit the dedicated site.

Preparing for #OOW: DB12c, M6, In-memory, Clouds, Big Data... and IoT


It's always difficult to fit the upcoming Oracle OpenWorld topics, and all its sessions, into one title. Even if "Simplifying IT. Enabling Business Transformation." makes it clear what Oracle is focusing on, I wanted to be more specific on the "How". At least for those of you who attended the Hot Chips conference, some of the acronyms will be familiar to you, some may not (I will come back to "IoT" later). For those of you attending, or those of you who will get the session presentations once available online, here are a few things that you don't want to miss, which will give you not only what Oracle R&D has done for you since last year, but also what customers -like you- have implemented thanks to the red stack and its partners, be they ISVs or SIs.

First, don't miss the Oracle executive keynotes; second, have a look at the general sessions delivered by VPs of Engineering to get a more in-depth view of the direction; and last but not least, network with your peers, be it in specific deep-dive sessions, experience-sharing sessions, or even the demo grounds, where you will be able to see the technologies in action with Oracle developers and subject matter experts. You will find hereafter a small selection.

Oracle Strategy and roadmaps

Industry Focus

Project implementation feedback & lessons learned

Deep-dive with the Experts

Learn how to do it yourself (in 1 hour): Hands-on Labs

Watch the technologies at work: Demo Grounds

This digest is an extract of the many valuable sessions you will be able to attend to accelerate your projects and IT evolution.

From Mainframe to Coherence: Is it worth it?


OP-Pohjola Group is the leading financial services group in Finland. It is made up of some 200 member cooperative banks and OP-Pohjola Group Central Cooperative which they own, including its subsidiaries and closely related companies. OP-Pohjola Group has more than four million customers -- joint banking and non-life insurance customers total over one million. With over 530 branches, the Group boasts the broadest customer base and the most extensive branch network in Finland.

OP-Pohjola architecture is based on innovative Java EE technology running on Oracle WebLogic Server. A vital part of their architecture is the Oracle Coherence distributed cache, which has enabled remarkable cost savings in mainframe request volumes and improved performance in the most critical and popular eBanking services. Oracle Coherence has had a critical role in enabling a mobile eBanking solution as a cost-effective platform to build new services with exponential usage growth. Efficient caching technology has enabled both service efficiency and remarkable savings in mainframe costs, with mainframe request volumes dropping by 40%. This mainframe optimization has resulted in financial cost savings while simultaneously enabling growth of eBanking service volumes. Come join OP-Pohjola in this session to hear how this customer leverages Oracle technologies such as WebLogic and Coherence to achieve such fantastic results. In addition, while at OpenWorld don't miss the other Cloud Application Foundation Innovators. You can join the session whether you are an OpenWorld attendee or not.

ADF How-To #8: Customizing UI Labels


In this week's How-To we explain how to change labels in the UI via customization. The detailed steps can be found here. We have also prepared a video walking you through the steps, available via our YouTube channel.

For any questions or comments, please use the comments section below or visit our OTN forum. We are always looking for topic suggestions for additional How-Tos.

Thanks, Listeners! OTN ArchBeat Podcast is #1!


The OTN ArchBeat Podcast has been around since March of 2009, featuring conversations with members of the OTN architect community. Since that launch the podcast has produced 229 individual programs. While the podcast has generally fared well among the 25 Oracle podcasts, over the last year or so the audience has built to the point that in August 2013 the podcast hit a new and remarkable milestone. I'm very happy to announce that for the month of August the OTN ArchBeat Podcast was the most downloaded of all Oracle podcasts. Here are the top 10 Oracle podcasts for August, based on the aggregate number of downloads.

1. OTN ArchBeat Podcast
2. Oracle Database PodCasts
3. Oracle's AppCasts
4. Oracle Fusion Middleware Podcasts
5. Oracle Magazine Podcast
6. Oracle Technology Network TechCasts
7. Oracle Author Podcasts
8. Oracle Sun Servers and Storage Podcasts
9. Meet The MySQL Experts

The OTN ArchBeat Podcast would not be possible without the cooperation, insight, and expertise of a small army of panelists who have participated in the conversations over the past four years. Panels for the various discussions almost always include members of the Oracle ACE program, along with subject matter experts from within Oracle, and the occasional outside expert. Compiling a complete list of all the panelists would take more time than I have, but their participation has driven the conversations, and those conversations have driven the success of the podcast. For that I am enormously grateful.

But of course all that effort and expertise would go to waste if no one was listening. So I am especially grateful for the people who download the programs. Thanks for listening. I hope you will continue to do so.
