Channel: Oracle Bloggers

Oracle Enterprise Manager Cloud Control 12c: Using Groups

With Oracle Enterprise Manager Cloud Control 12c you can manage a wide range of target systems, both in terms of variety and in sheer numbers. A large number of target systems raises the question of how this volume can be managed efficiently. That includes controlling access, configuring monitoring as automatically as possible, and building user-oriented views.

For this purpose there is the concept of groups, in which target systems (targets) can be combined. Oracle Enterprise Manager Cloud Control 12c offers three different types of groups, which this tip explains and distinguishes from one another.

Top tweets SOA Partner Community – November 2012


Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

OTNArchBeat @OTNArchBeat

SOA Galore: New Books for Technical Eyes Only http://bit.ly/YAWCLp

SOA Community @soacommunity

SOA, Cloud and Service Technology Symposium a super success! http://wp.me/p10C8u-xA

Troux Technologies @trouxsoftware

Get a complimentary copy of the Gartner Magic Quadrant for Enterprise Architecture Tools! http://hub.am/TG5SvI

SOA Community @soacommunity

Generating an EJB SDO Service Interface for Oracle SOA Suite by Edwin Biemond http://wp.me/p10C8u-vZ

Danilo Schmiedel @dschmied

How-to undeploy multiple SOA composites with WLST or ANT http://goo.gl/ydmVJ @OC_Wire @soacommunity

SOA Community @soacommunity

Oracle Open World 2012 – Middleware update http://wp.me/p10C8u-xC

OracleBlogs @OracleBlogs

Oracle BPM enable BAM by Peter Paul http://ow.ly/2sYDNC

Jon Petter Hjulstad @Jphjulstad

Installing Oracle Event Processing 11g by Antony Reynolds - https://blogs.oracle.com/reynolds/entry/event_processed

SOA Community @soacommunity

Oracle BPM enable BAM by Peter Paul http://wp.me/p10C8u-vX

arjankramer @arjankramer

[CG Oracle Blog] Expanding the Oracle Enterprise Repository with functional documentation http://goo.gl/nvGDG #capgemini

OracleBlogs @OracleBlogs

Announcing Upcoming SOA and JMS Introductory Blog Posts http://ow.ly/2sXJvq

Oracle B2B @Oracle_B2B

Very Cool - Mobile ADF with SOA Suite and EBusiness Suite - check out the blog: http://bit.ly/TUQUE8 - What could you do with this? #B2B #SOA

Ronald Luttikhuizen @rluttikhuizen

Oracle Fusion Applications User Experience Patterns and Guidelines available http://www.oracle.com/technetwork/topics/ux/applications/gps-1601227.html

SOA Community @soacommunity

On the way to the German #opnday. Want to discuss OFM? Request a meeting for the OFM expert zone #soacommunity

SOA Community @soacommunity

Top tweets SOA Partner Community – October 2012 http://wp.me/p10C8u-xq

SOA Community @soacommunity

Thanks for the excellent BPM Partner Advisory Council. Excellent product feedback and many new ideas for go-to-market pic.twitter.com/GoiN5PA1

SOA Community @soacommunity

Hosting our BPM Advisory Council in London. Discuss with top BPM #Specialized Partners roadmap & joint go-to-market. http://www.oracle.com/goto/emea/soa

SOA Community @soacommunity

User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Demo Now… http://wp.me/p10C8u-vE

OracleBlogs @OracleBlogs

Oracle Service Bus duplicate message check using Coherence by Jan van Zoggel http://ow.ly/2sORk7

orclateamsoa @orclateamsoa

A-Team Blog #ateam: A brief note for customers running SOA Suite on AIX platforms http://ow.ly/2sKICg

SOA Community @soacommunity

Distribute #SOA #specialization plaques - let us know when you receive them! #soacommunity pic.twitter.com/JWbyc6MH

OracleBlogs @OracleBlogs

Critical Patch Update For Oracle Fusion Middleware - CPU October 2012 http://ow.ly/2sIQ3P

OracleBlogs @OracleBlogs

Storing SCA Metadata in the Oracle Metadata Services Repository by Nicolás Fonnegra Martinez and Markus Lohn http://ow.ly/2sINkR

SOA Community @soacommunity

Sending out the October edition of the #SOACommunity newsletter - read it! Did not receive it? Become a member http://www.oracle.com/goto/emea/soa #oracle

Whitehorses @whitehorsesnl

Whiteblog: ACM – Adaptive Case Management (http://bit.ly/SY3KL8)

OracleBlogs @OracleBlogs

BPM 11g - Dynamic Task Assignment with Multi-level Organization Units http://ow.ly/2sGuUg

OracleBlogs @OracleBlogs

Using Cloud OER to Find Fusion Applications On-Premise Service Concrete WSDL URL by Rajesh Raheja http://ow.ly/2sGses

SOA Community @soacommunity

BPM 11.1.1.5 for Apps: BPM for EBS Demo available http://wp.me/p10C8u-vG

Sabine Leitner @LeitnerSabine

@soacommunity Can't be in Munich live on October 24 for #OracleDay? Then follow our live tweets here! @OracleDay2012

OTNArchBeat @OTNArchBeat

Following the Thread in OSB | Antony Reynolds #oracle #soa http://pub.vitrue.com/gNdM

SOA Community @soacommunity

Thanks to #Accenture & @HajoNormann for an excellent 2-day #BPM workshop. Great results, great projects ;-) pic.twitter.com/ULTwwUaL

SOA Community @soacommunity

SOA Community Newsletter October 2012 http://wp.me/p10C8u-wv

ServiceTechSymposium @techsymp

The October issue of the Service Technology Magazine is now published with new items! Read them at http://www.servicetechmag.com

Erl Tech Book Series @techbks

New http://serviceorientation.com resource site goes live today! pic.twitter.com/x8yLPLtw

SOA Community @soacommunity

DSS: SOA 11g (11.1.1.6) Solutions - End To End B2B Scenarios http://wp.me/p10C8u-vI

OracleBlogs @OracleBlogs

Exploring MDS Explorer by Mark Nelson http://ow.ly/2sL8jf

OracleBlogs @OracleBlogs

Oracle SOA Governance EMEA Workshop for Partners & System Integrators: Nov 5-7th | Madrid, Spain http://ow.ly/2sKzkT

SOA Community @soacommunity

Oracle Service Bus duplicate message check using Coherence by Jan van Zoggel http://wp.me/p10C8u-vV

OracleBlogs @OracleBlogs

DSS: SOA 11g (11.1.1.6) Solutions - End To End B2B Scenarios http://ow.ly/2sNIdp

SOA & BPM Partner Community

For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.


Spotlight on an office - Nairobi, Kenya


Hi everyone, my name is Joash Mitei. I am a graduate intern at Oracle Systems Kenya, and I will briefly take you through our offices and the working environment here in Nairobi, Kenya. I’ve been with Oracle since February 2012 and I’m responsible for Applications Pre-sales, focusing on Oracle EPM and E-Business Suite. My background is in Finance and Accounting, so joining Oracle was almost a totally different ball game, but the transition has been smooth.

The Oracle offices here are located on the second floor of Mebank Towers. We moved to the 2nd floor just three months ago from the 5th floor, mainly because of the growing workforce. We cover the whole Eastern Africa region, so diversity in culture is evident. This is a plus, since you get to interact with people of very different backgrounds, cultures and ways of thinking.

The building itself is on the outskirts of the CBD, so it is free from the hustle and bustle of the town. The office is split into different sections: a main working area with an open-desk design that fosters interaction between colleagues, 4 conference rooms for meetings and presentations, 3 quiet rooms for a little privacy when needed, and a dining area for meals and ‘hanging out’.

The working environment is world-class, to say the least. The employees are very professional, quite smart and, needless to say, very busy. There are 4 interns covering sales and pre-sales in both Tech and Apps. As an intern you get support from your supervisor, but you are required to show initiative yourself, hence the need to be very proactive and inquisitive. The local management is well structured and communicative to ensure effectiveness and efficiency in the office.

Apart from the daily work, we usually have events to boost staff morale, such as ‘TGIF hang-outs’, football matches against each other or versus other companies, and team building retreats. All these are monumental in fostering the RED POTENTIAL. We also do numerous CSR activities in the local communities.

Well, that’s the Kenyan office for you. Glad to be your tour guide.

Have a superb day!

ADF & Fusion Development Webcast–December 11th 2012


Get up to date and learn everything you wanted to know about Oracle ADF & Fusion Development plus live Q&A chats with Oracle technical staff.
Oracle Application Development Framework (ADF) is the standards-based, strategic framework for Oracle Fusion Applications and Oracle Fusion Middleware. Oracle ADF's integration with Oracle SOA Suite, Oracle WebCenter and Oracle BI creates a complete, productive development platform for your custom applications.
Join us at this FREE virtual event and learn the latest in Fusion Development including:

  • Is Oracle ADF development faster and simpler than Forms, Apex or .Net?
  • Mobile Application Development with ADF Mobile
  • Oracle ADF development with Eclipse
  • Oracle WebCenter Portal and ADF Development
  • Application Lifecycle Management with ADF
  • Building Process Centric Applications with ADF and BPM
  • Oracle Business Intelligence and ADF Integration
  • Live Q&A chats with Oracle technical staff

Developer lead, manager or architect – this event has something for everyone. Don't miss this opportunity. For details and registration please click here.


View Session Abstracts
We look forward to welcoming you at this free event!

December 11th, 2012
9:00 – 13:00 GMT & 10:00 – 14:00 CET & 12:00 – 16:00 AST & 13:00 – 17:00 MSK & 14:30 – 18:30 IST

WebLogic Partner Community

For regular information become a member of the WebLogic Partner Community. Please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.


Virtual Developer Day: Oracle Fusion Development


Virtual Developer Day: Oracle Fusion Development

Get up to date and learn everything you wanted to know about Oracle ADF & Fusion Development plus live Q&A chats with Oracle technical staff.

Oracle Application Development Framework (ADF) is the standards-based, strategic framework for Oracle Fusion Applications and Oracle Fusion Middleware. Oracle ADF's integration with Oracle SOA Suite, Oracle WebCenter and Oracle BI creates a complete, productive development platform for your custom applications.

Join us at this FREE virtual event and learn the latest in Fusion Development including:

  • Is Oracle ADF development faster and simpler than Forms, Apex or .Net?

  • Mobile Application Development with ADF Mobile

  • Oracle ADF development with Eclipse

  • Oracle WebCenter Portal and ADF Development

  • Application Lifecycle Management with ADF

  • Building Process Centric Applications with ADF and BPM

  • Oracle Business Intelligence and ADF Integration

  • Live Q&A chats with Oracle technical staff

Developer lead, manager or architect – this event has something for everyone. Don't miss this opportunity.


December 11th, 2012
9:00 – 13:00 GMT
10:00 – 14:00 CET
12:00 – 16:00 AST
13:00 – 17:00 MSK
14:30 – 18:30 IST

Register online now for this FREE event!

Agenda

9:00 a.m. – 9:30 a.m.

Opening

9:30 a.m. – 10:00 a.m.

Keynote
Oracle Fusion Development

Track 1
Introduction to Fusion Development

Track 2
What's New in Fusion Development

Track 3
Fusion Development in the Enterprise

Track 4
Hands On Lab - WebCenter Portal and ADF Lab w/ JDeveloper

10:00 a.m. – 11:00 a.m.

Is Oracle ADF development faster and simpler than Forms, Apex or .Net?

Mobile Application Development with ADF Mobile

Oracle WebCenter Portal and ADF Development

Lab materials can be found on the event wiki here. Q&A about the lab is available throughout the event.

11:00 a.m. – 12:00 p.m.

Rich Web UI made simple – an ADF Faces Overview

Oracle Enterprise Pack for Eclipse - ADF Development

Building Process Centric Applications with ADF and BPM

12:00 p.m. – 1:00 p.m.

Next Generation Controller for JSF

Application Lifecycle Management for ADF

Oracle Business Intelligence and ADF Integration

View Session Abstracts

We look forward to welcoming you at this free event!


Java @Contended annotation to help reduce false sharing


See this posting by Aleksey Shipilev for details -- @Contended is something we've wanted for a long time. The JVM provides automatic layout and placement of fields. Usually it'll (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap, and write-sharing is very expensive. We can also pack fields together that tend to be written together by the same thread at about the same time.
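Before @Contended, the usual workaround was manual padding. The sketch below is illustrative only (field names are arbitrary, and the JVM's automatic field layout may reorder fields, which is exactly why a supported annotation is preferable):

```java
// Manual-padding idiom that @Contended is meant to replace: surround a hot,
// frequently written field with enough long fields that it is unlikely to
// share a 64-byte cache line with anything else. Illustrative only; the
// JVM's automatic layout may reorder these fields and defeat the padding.
class PaddedCounter {
    long p01, p02, p03, p04, p05, p06, p07;  // padding before the hot field
    volatile long count;                     // hot field, one writer thread
    long p11, p12, p13, p14, p15, p16, p17;  // padding after the hot field

    void increment() { count++; }  // note: count++ on a volatile is not atomic
    long get() { return count; }
}
```

With @Contended the padding fields go away: the hot field (or the whole class) is annotated instead, and the JVM inserts the isolation itself during layout.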

More generally, we're trying to influence relative field placement to minimize coherency misses. Fields that are accessed closely together in time should be placed proximally in space to promote cache locality. That is, temporal locality should condition spatial locality. Fields accessed together in time should be nearby in space. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic. As such, we try to cluster or otherwise sequester fields that tend to be written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses then we can end up with excessive coherency misses running in a parallel environment. There's no single optimal layout for both single-threaded and multithreaded environments. And the ideal layout problem itself is NP-hard.

Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection.

Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.

NetBeans at JavaOne Latin America 2012


The place to be in early December is São Paulo, Brazil, for JavaOne 2012 Latin America (pt_BR site)--and the NetBeans team will be making the trip!

Drop in on technical sessions and hands-on labs that show the latest features of the NetBeans IDE in action. Watch demos of HTML5, CSS3 and JavaScript support in NetBeans IDE 7.3 (release: Winter 2013) and find out how developers can easily and quickly create rich web and mobile applications. Discover how the IDE provides the best and latest support for building Java EE and JavaFX 2.0 applications, and join the conversation about what's up ahead for NetBeans development.

With over 50 technical sessions, tons of demos and labs, JavaOne Latin America is the conference to attend to enhance your coding skills and mingle with experts and developers from the Oracle and Java communities. Mark your calendars and check out NetBeans IDE in the following sessions!


Tuesday, December 4

12:15 - 13:15
Designing Java EE Applications in the Age of CDI
Speakers: Michel Graciano, Consultant, Summa Technologies do Brasil; Michael Santos, TecSinapse
Mezanino: Sala 14


Wednesday, December 5

10:00 - 11:00
Make Your Clients Richer: JavaFX and the NetBeans Platform
Speakers: Gail Anderson, Director of Research; Paul Anderson, Director of Training, Anderson Software Group, Inc.
Mezanino: Sala 12


Thursday, December 6

13:45 - 14:45
Unlocking the Java Platform with NetBeans
Speaker: John Jullion-Ceccarelli, Software Development Director, Oracle
Keynote Hall

15:00 - 16:00
Project EASEL: Developing and Managing HTML5 in a Java World
Speaker: John Jullion-Ceccarelli, Software Development Director, Oracle
Mezanino: Sala 14


See full conference schedule for detailed agenda.

Get more JavaOne news.

emca fails with "Database instance is unavailable" though available


The following example shows the symptoms of failure, and the exact error message.

$ emca -repos create

...
Password for SYSMAN user:  

Do you wish to continue? [yes(Y)/no(N)]: Y
Nov 19, 2012 10:33:42 AM oracle.sysman.emcp.DatabaseChecks \
         checkDbAvailabilityImpl
WARNING: ORA-01034: ORACLE not available

Nov 19, 2012 10:33:42 AM oracle.sysman.emcp.DatabaseChecks \
         throwDBUnavailableException
SEVERE: 
Database instance is unavailable. Fix the ORA error thrown and 
run EM Configuration Assistant again.

Some of the possible reasons may be : 

1) Database may not be up. 
2) Database is started setting environment variable ORACLE_HOME 
with trailing '/'. Reset ORACLE_HOME and bounce the database. 

For eg. Database is started setting environment variable 
ORACLE_HOME=/scratch/db/ . Reset ORACLE_HOME=/scratch/db  and bounce 
the database.

Fix:

Ensure that ORACLE_HOME points to the right location in the $ORACLE_HOME/bin/emca file.

If ORACLE_HOME was copied over from another location rather than installed from scratch, it likely results in a wrong ORACLE_HOME value in several Enterprise Manager (EM) specific scripts and files. This usually happens when the directory structure on the target machine is not identical to the structure on the original/source machine, including the top-level directory where the Oracle RDBMS was properly installed using the installer.
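For the trailing-slash case specifically, a minimal sketch (using the example path /scratch/db from the error message above, not a real install) is to strip the slash and re-export ORACLE_HOME before bouncing the database and re-running emca:

```shell
# Example path from the error message above; substitute your real ORACLE_HOME.
ORACLE_HOME=/scratch/db/

# Strip any trailing slash and re-export before restarting the database
# and re-running "emca -repos create".
ORACLE_HOME="${ORACLE_HOME%/}"
export ORACLE_HOME
echo "$ORACLE_HOME"   # prints /scratch/db
```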


Friday Tips #3


Even though yesterday was Thanksgiving here in the US, we still have a Friday tip for those of you around your computers today. In fact, we have two! The first one came in last week via our #AskOracleVirtualization Twitter hashtag. The tweet has disappeared into the ether now, but we remember the gist, so here it is:


Question:
Will there be an Oracle Virtual Desktop Client for Android?

Answer by our desktop virtualization product development team:
We are looking at Android as a supported platform for future releases.


Question:
How can I make a Sun Ray Client automatically connect to a virtual machine?

Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization:
Someone recently asked how they can assign VMs to specific Sun Ray Desktop Units (“DTUs”) without any user interaction being required, without the “Desktop Selector” being displayed, and without any User Directory. That is, they wanted each Sun Ray to power on and immediately connect to a pre-assigned Solaris VM.

This can be achieved by using “tokens” for user assignment – that is, the tokens found on Smart Cards, DTUs, or OVDC clients can be used in place of user credentials. Note, however, that mixing “token-only” assignments and “User Directories” in the same VDI Center won’t work.

Much of this procedure is covered in the documentation, particularly here. But it can be useful to have everything in one place, “cookbook-style”:

1. Create the “token-only” directory type:

From the VDI administration interface, select:

 “Settings”, “Company”, “New”, select the “None” radio button, and click “Next.”

Enter a name for the new “Company”, and click “Next”, then “Finish.”

2. Create Desktop Providers, Pools, and VM’s as appropriate.

3. Access the Sun Ray administration interface at http://servername:1660, log in using “root” credentials, and locate the token IDs you wish to use for assignment. If you’re using DTU tokens rather than Smart Card tokens, these can be found under the “Tokens” tab by searching under the “Currently Used Tokens” tab. DTUs can be identified by the prefix “pseudo.” For example:


4. Copy/paste this token into the VDI administrative interface by selecting “Users”, “New”, pasting in the token ID, and clicking “OK” - for example:

5. Assign the token (DTU) to a desktop: in the VDI Admin GUI, select “Pool”, “Desktop”, select the VM, then click “Assign” and select the token you want, for example:


In addition to assigning tokens to desktops, you'll need to bypass the login screen.  To do this, you need to do two things: 

1. Disable VDI client authentication with: 

/opt/SUNWvda/sbin/vda settings-setprops -p clientauthentication=Disabled

2. Disable the VDI login screen – to do this, add a kiosk argument of “-n” to the Sun Ray kiosk arguments screen. You set this on the Sun Ray administration page - “Advanced”, “Kiosk Mode”, “Edit” - and add the “-n” option to the arguments screen, for example:

3. Restart both the Sun Ray and VDI services:

# /opt/SUNWut/sbin/utstart -c
# /opt/SUNWvda/sbin/vda-service restart


Remember, if you have a question for us, please post on Twitter with our hashtag (again, it's #AskOracleVirtualization), and we'll try to answer it if we can. See you next time!

Oracle OpenWorld São Paulo Is Back!


Guess what’s back and bigger than ever! Oracle OpenWorld São Paulo, and we can’t wait to see YOU there! Be part of the first ever Oracle PartnerNetwork Exchange Latin America, a program that incorporates special activities specifically tailored to you, our partners. Oracle OpenWorld Latin America is taking place from December 4th to 6th at the Transamerica Expo Center, so if you haven’t already registered, hurry and do so to take advantage of our Early Bird pricing here!

This year’s jam-packed agenda includes keynotes from Hugo Freytes, SVP of Latin America Alliances and Channels, Judson Althoff, SVP of Worldwide Alliances and Channels, and many more! The OPN Keynote session will take place on December 5th from 10:00 am to 12:00 pm, and the program will feature four tracks for partners, Applications, Cloud, Engineered Systems and Technology, complete with endless content! Click here to view the Oracle OpenWorld Latin America Oracle PartnerNetwork Agenda.

Also, we wanted to offer a huge THANK YOU to our 2012 Oracle PartnerNetwork Exchange Latin America and Lounge sponsors: Avnet and Preteco!


Be sure to stop by our Oracle PartnerNetwork Lounge to hold meetings, network with your peers, and engage in relevant conversations with your partners, customers and other industry professionals.

Finally, don’t wait to register! Early Bird pricing for OPN Exchange @ OpenWorld ends November 23. You really don’t want to miss this great opportunity to learn, network, and be a part of the experience. Register here!

Welcome to the new Oracle PartnerNetwork Exchange @ OpenWorld Latin America 2012!

The OPN Communications Team


An introduction to Oracle Retail Data Model with Claudio Cavacini


In this video, Claudio Cavacini of Oracle Retail explains Oracle Retail Data Model, a solution that combines pre-built data mining, online analytical processing (OLAP) and dimensional models to deliver industry-specific metrics and insights that improve a retailer’s bottom line.

Claudio shares how the Oracle Retail Data Model (ORDM) delivers retailer and market insight quickly and efficiently, allowing retailers to provide a truly multi-channel approach and subsequently an effective customer experience. The rapid implementation of ORDM results in predictable costs and timescales, giving retailers a higher return on investment.

Please visit our website for further information on Oracle Retail Data Model.

2012 Independent Oracle Users Group Survey: Closing the Security Gap


What Security Gaps Do You Have?
The latest survey report from the Independent Oracle Users Group (IOUG) uncovers trends in IT security among IOUG members and offers recommendations for securing data stored in enterprise databases.

According to the report, “despite growing threats and enterprise data security risks, organizations that do implement appropriate detective, preventive, and administrative controls are seeing significant results.”

Download a free copy of the 2012 IOUG Data Security Survey Report and find out what your business can do to close the security gap. 



A Myriad of Options


I am currently working with a customer that is close to outgrowing their Exadata X2-2 half rack in both compute and storage capacity.  The platform is used for one of their larger data warehouse applications and the move to Exadata almost two years ago has been a resounding success, forcing them to grow the platform sooner than anticipated.

At a recent planning meeting, we started looking at the options for expansion and have developed five alternatives, all of which meet or exceed their growth requirements, yet have different pros and cons in terms of the impact to their production and test environments.

The options include an in-rack upgrade to a full rack of Exadata using the recently released X3-2 platform (an option that even applies to an older V2 rack), multi-rack cabling the existing X2-2 to another full rack or half rack X2-2 (and utilizing both compute and storage capacity in the other rack), or simply adding a new X3-2 half rack (and taking advantage of the added compute and flash performance in the X3-2).

While the decision is yet to be made, it had me thinking that one of the benefits of Exadata over a traditional database deployment is that when the time comes to expand the platform, there are a myriad of options.

The November OBIEE 11.1.1.6.6 Bundle Patch has been released


The November Oracle Business Intelligence Enterprise Edition (OBIEE) 11.1.1.6.6 bundle patch is now available for download from My Oracle Support.

For OBIEE 11.1.1.6.0, the plan is to release a monthly bundle patch.

The 11.1.1.6.6 bundle patch includes 67 bug fixes. It is cumulative, so it includes everything in 11.1.1.6.1, 11.1.1.6.2, 11.1.1.6.2BP1, 11.1.1.6.4 and 11.1.1.6.5.

Please note that this release is only recommended for BI customers, i.e. not customers on Fusion Apps.

Bundled Patch Details

The 11.1.1.6.6 bundle patch is available for the following supported platforms:

  • Microsoft Windows (32-bit)
  • Linux x86 (32-bit)
  • Microsoft Windows x64 (64-bit)
  • Linux x86-64 (64-bit)
  • Oracle Solaris on SPARC (64-bit)
  • Oracle Solaris on x86-64 (64-bit)
  • IBM AIX PPC (64-bit)
  • HP-UX Itanium (64-bit)

Currency Conversion in Oracle BI applications

Authored by Vijay Aggarwal and Hichem Sellami

A typical data warehouse contains star and/or snowflake schemas, made up of Dimensions and Facts. The facts store various numerical information, including amounts; for example, Order Amount, Invoice Amount, etc.

With the truly global nature of business nowadays, end users want to view reports in their own currency or in a global/common currency as defined by their business.

This presents a unique opportunity in BI to provide the amounts in converted rates either by pre-storing or by doing on-the-fly conversions while displaying the reports to the users.

Source Systems

OBIA caters to various source systems like EBS, PSFT, Siebel, JDE, Fusion, etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions, and presenting them to the OLTP users.

For example, EBS stores conversion rates between currencies which can be classified by rate types, like Corporate rate, Spot rate, Period rate, etc. Siebel stores exchange rates by rate types like Daily. EBS/Fusion store the conversion rates for each day, whereas PSFT/Siebel store them for a range of days. PSFT has a Rate Multiplication Factor and a Rate Division Factor, and the rate must be calculated from them, whereas the other source systems store the currency exchange rate directly.

OBIA Design

Consolidating data from various disparate source systems poses the challenge of conforming the various currencies, rate types, exchange rates, etc., and of designing the best way to present the amounts to the users without affecting performance.

When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension, to allow users to report based on their required currencies.

OBIA Facts store amounts in various currencies:

Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies.

Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company.

Global Currencies: OBIA provides five Global Currencies. Three are used across all modules; the last two are for CRM only. A Global Currency is very useful when creating reports where the data is viewed enterprise-wide. For example, a US-based multinational would want to see the reports in USD, so the company would choose USD as one of its global currencies. OBIA allows users to define up to five global currencies during the initial implementation.

The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3, which are shared among all modules. There are four more currency preferences specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5, which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics.

When choosing Local Currency as the currency preference, the data will show in the currency of the Ledger (or Business Unit) selected in the prompt. So it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard.

Design Logic

When extracting the fact data, the OOTB mappings extract and load the document amount and the local amount into the target tables. They also load the exchange rates required to convert the document amount into the corresponding global amounts.

If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the Local currency code, and the Local exchange rate. The Load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates.

The lookup of exchange rates is done via the Exchange Rate Dimension provided as a Common/Conforming Dimension in OBIA.

The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G are used to provide the lookups and conversions between currencies. The data is loaded from the source system’s Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion.
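As a purely illustrative sketch of the range-based lookup described above (the table and column semantics follow the description of W_EXCH_RATE_G, but the classes below are hypothetical and not part of OBIA), a date-range rate lookup might look like this:

```java
import java.time.LocalDate;
import java.util.List;
import java.util.Optional;

// Hypothetical, simplified model of a W_EXCH_RATE_G-style row: an exchange
// rate between two currencies, valid for a date range and a rate type.
record ExchRate(String fromCcy, String toCcy, String rateType,
                LocalDate effectiveFrom, LocalDate effectiveTo, double rate) {}

class ExchRateLookup {
    // Return the rate whose From/To currency, rate type, and effective
    // date range cover the transaction date, mirroring the range-based
    // storage described above. Empty if no range matches.
    static Optional<Double> lookup(List<ExchRate> rates, String from,
                                   String to, String rateType, LocalDate on) {
        return rates.stream()
                .filter(r -> r.fromCcy().equals(from)
                          && r.toCcy().equals(to)
                          && r.rateType().equals(rateType)
                          && !on.isBefore(r.effectiveFrom())
                          && !on.isAfter(r.effectiveTo()))
                .map(ExchRate::rate)
                .findFirst();
    }
}
```

The daily W_GLOBAL_EXCH_RATE_G lookup would be the degenerate case of this, with the range collapsed to a single day per record.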

Currency design in OBIA also taps into the MLS and Domain architecture, allowing users to map the currencies to a universal Domain at implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters.

Some Gotchas to Look for

It is necessary to think through the currencies during the initial implementation.

1) Identify the various types of currencies used by your business. Understand which will be your Local (or Base) currency and your Document currency. Identify the global currencies in which your users will want to view reports; this will depend on the global nature of your business. Changes to these currencies later in the project, while permitted, may cause full data loads and hence lost time.

2) If you have a multi-source system, make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager have the corresponding source-specific counterparts. In other words, make sure that for every DW-specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done.

Technical Section

This section will briefly mention the technical scenarios employed in the OBIA adaptors to extract data from each source system.

As explained in previous sections, OBIA has two main tables that store the Currency Rate information: W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G.

W_EXCH_RATE_G stores all the Currency Conversions present in the source system. It captures data for a Date Range. W_GLOBAL_EXCH_RATE_G has Global Currency Conversions stored at a Daily level. However the challenge here is to store all the 5 Global Currency Exchange Rates in a single record for each From Currency. Let’s voyage further into the Source System Extraction logic for each of these tables and understand the flow briefly.

EBS: In EBS, Currency data is stored in the GL_DAILY_RATES table. As the name indicates, GL_DAILY_RATES has data at a daily level. However, in the warehouse we store the data with a Date Range and insert a new range record only when the Exchange Rate changes for a particular From Currency, To Currency, and Rate Type. Below are the main logical steps employed in this process.

  1. (Incremental flow only) Clean up the data in W_EXCH_RATE_G:
    1. Delete the records which have Start Date > minimum conversion date.
    2. Update the End Date of the existing records.
  2. Compress the daily data from the GL_DAILY_RATES table into range records. The incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter.
    1. Generate Previous Rate, Previous Date, and Next Date for each daily record from the OLTP.
    2. Filter out the records which have a Conversion Rate the same as the Previous Rate, or whose Conversion Date lies within a single-day range.
  3. Mark the records as 'Keep' or 'Filter' and also get the final End Date for the single range record (unique combination of From Date, To Date, Rate, and Conversion Date).
  4. Filter the records marked as 'Filter' in the INFA map.
  5. The above steps load W_EXCH_RATE_GS. Step 1 updates/deletes W_EXCH_RATE_G directly.
  6. The SIL map then inserts/updates the GS data into W_EXCH_RATE_G.

These steps convert the daily records in GL_DAILY_RATES to Range records in W_EXCH_RATE_G.
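The range-compression idea above can be sketched in Python. This is a simplified stand-in for the INFA mapping logic, assuming contiguous daily rows; the real mapping also handles the $$XRATE_UPD_NUM_DAY window and the Keep/Filter marking:

```python
from datetime import date, timedelta

def compress_daily_rates(daily_rows):
    """Collapse daily (from_ccy, to_ccy, rate_type, conv_date, rate) rows
    into range records, starting a new range only when the rate changes --
    the same effect the EBS extract achieves with Previous Rate / Next Date.
    Assumes one row per day; a gap in the dates also starts a new range."""
    rows = sorted(daily_rows, key=lambda r: (r[0], r[1], r[2], r[3]))
    ranges = []
    for from_ccy, to_ccy, rate_type, conv_date, rate in rows:
        prev = ranges[-1] if ranges else None
        if (prev is not None
                and prev["key"] == (from_ccy, to_ccy, rate_type)
                and prev["rate"] == rate
                and prev["end"] + timedelta(days=1) == conv_date):
            prev["end"] = conv_date          # same rate, next day: extend range
        else:
            ranges.append({"key": (from_ccy, to_ccy, rate_type),
                           "rate": rate, "start": conv_date, "end": conv_date})
    return ranges
```

For example, daily USD-to-INR rows at 45 for 1-3 Jan followed by 46 for 4-5 Jan collapse into two range records instead of five daily ones.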

We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G, where we store data at a daily granularity. However, we need to pivot the data, because data present in multiple rows in the source tables must be stored in different columns of the same row in the DW. We use GROUP BY and CASE logic to achieve this.
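A Python analogue of that GROUP BY plus CASE pivot might look like the following. The row shape and the assumption that the configured Global Currencies are distinct are illustrative simplifications, not the real table definitions:

```python
def pivot_global_rates(rows, global_ccys):
    """Pivot per-currency daily rate rows into one record per
    (from_ccy, date), with one column per configured Global Currency.
    Each input row is (from_ccy, to_ccy, rate_date, rate); global_ccys
    is the ordered list of the five configured Global Currencies
    (assumed distinct here)."""
    out = {}
    for from_ccy, to_ccy, rate_date, rate in rows:
        if to_ccy not in global_ccys:
            continue  # only conversions into a Global Currency are pivoted
        rec = out.setdefault((from_ccy, rate_date), [None] * len(global_ccys))
        # SQL equivalent: MAX(CASE WHEN to_ccy = global_ccy_n THEN rate END)
        # ... GROUP BY from_ccy, rate_date
        rec[global_ccys.index(to_ccy)] = rate
    return out
```

Each output value is one pivoted record: the rates from one currency into each Global Currency on one day, side by side in columns.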

Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the cleanup logic mentioned in step 1 above does not use the $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring in all the Exchange Rates in incremental loads as well and do the cleanup. The SIL then takes care of inserts/updates accordingly.

PeopleSoft: PeopleSoft does not have From Date and To Date explicitly in the source tables. Let's look at an example. Please note that this is achieved from PS1 onwards only.

1 Jan 2010 – USD to INR – 45

31 Jan 2010 – USD to INR – 46

PSFT stores records in the above fashion. This means that the exchange rate of 45 for USD to INR is applicable from 1 Jan 2010 to 30 Jan 2010. We need to store the data in this fashion in the DW.

Also, PSFT stores the exchange rate as RATE_MULT and RATE_DIV. We need to compute RATE_MULT/RATE_DIV to get the correct exchange rate.

We generate From Date and To Date while extracting data from source and this has certain assumptions:

If a record gets updated/inserted in the source, it will be extracted in incremental. Also if this updated/inserted record is between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate a range record and we have 3 records whose ranges have changed. Taking the same example as above, if there is a new record which gets inserted on 15 Jan 2010; the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan and 31 Jan to Next available date. Even though 1 Jan record and 31 Jan have not changed, we will still extract them because the range is affected.
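The derivation of From Date and To Date from PSFT's point-in-time rows can be sketched as follows. The open-ended "high date" and the row shape are hypothetical choices for illustration:

```python
from datetime import date, timedelta

def psft_ranges(point_rows):
    """Derive (start, end, rate) ranges from PSFT point-in-time rows,
    where each rate is effective until the day before the next row's
    effective date. Each input row is (eff_date, rate_mult, rate_div);
    the effective rate is RATE_MULT / RATE_DIV."""
    rows = sorted(point_rows, key=lambda r: r[0])
    ranges = []
    for i, (eff_date, rate_mult, rate_div) in enumerate(rows):
        if i + 1 < len(rows):
            end = rows[i + 1][0] - timedelta(days=1)
        else:
            end = date(4712, 12, 31)   # placeholder open-ended "high date"
        ranges.append((eff_date, end, rate_mult / rate_div))
    return ranges
```

Applied to the USD-to-INR example above, the 1 Jan row yields the range 1 Jan to 30 Jan at rate 45, and the 31 Jan row yields an open-ended range at rate 46; inserting a new row on 15 Jan would reshuffle all three ranges, which is why the neighboring rows must be re-extracted.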

Similar logic is used for Global Exchange Rate Extraction. We create the Range records and get it into a Temporary table. Then we join to Day Dimension, create individual records and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type.

Siebel: Siebel facts depend heavily on Global Exchange Rates, and almost none of them use individual Exchange Rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from the PS1 release onwards.

As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges. However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked.

Due to this, there are multiple extraction flows in SEBL, i.e., EUR to EMU, EUR to non-EMU, EUR to DMC, and so on. We load W_EXCH_RATE_G with these data through multiple flows. This has been kept the same as in previous versions of OBIA.

W_GLOBAL_EXCH_RATE_G, being a new table, does not have such needs. However, like PSFT, SEBL does not have From Date and To Date columns in the source tables. We use extraction logic similar to that explained in the PSFT section for SEBL as well.

What if all 5 configured Global Currencies are the same?

As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in the W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns, using CASE and GROUP BY logic. This approach poses a unique problem when all 5 chosen Global Currencies are the same. For example, if the user configures all 5 Global Currencies as 'USD', the extract logic will not be able to generate a record for From Currency = USD, because not all source systems will have a USD->USD conversion record.

We have _Generated mappings to take care of this case. We generate a record with Conversion Rate=1 for such cases.
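The effect of those _Generated mappings can be sketched as a post-pivot fix-up. The function name and record shapes are hypothetical, following the pivot shape used earlier:

```python
def ensure_self_rates(pivoted, global_ccys, rate_date):
    """If a configured Global Currency has no pivoted record for itself
    (e.g. all five are 'USD' and the source has no USD->USD row), inject
    a record with conversion rate 1.0, mimicking the _Generated mappings.
    `pivoted` maps (from_ccy, rate_date) -> list of rates per Global Currency."""
    for ccy in set(global_ccys):
        key = (ccy, rate_date)
        if key not in pivoted:
            # A currency converts to itself at rate 1; other columns stay unknown.
            pivoted[key] = [1.0 if g == ccy else None for g in global_ccys]
    return pivoted
```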

Reusable Lookups

Before PS1, we had a Mapplet for Currency Conversions. In PS1, we only have reusable Lookups: LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various fact mappings. Any user who wants to do a lookup on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G must use these Lookups. A direct join or lookup on the tables might return wrong data.

Changing Currency preferences in the Dashboard:

In the 7.9.6.x series, all amount metrics in OBIA showed the Global1 amount. Customers needed to change the metric definitions to show them in another currency preference. Project Analytics has supported currency preferences since the 7.9.6 release, though, and published a Tech note for customers of other modules to add toggling between currency preferences to the solution.

List of Currency Preferences

Starting with the 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt takes its values from the XML file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under: <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml

This file contains the list of currency preferences, like "Local Currency", "Global Currency 1", etc., which customers can rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic. In Static mode, all users will see the full list as in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the RPD, so the list of currency preferences is based on the user's roles. BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.

Adding Currency to an Amount Field

When the user selects one of the items from the currency prompt, all the amounts on that page will show in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data will show in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields will show in the currency of the Business Unit selected in the BU filter of the same page. If no particular Business Unit is selected in that filter, and the data selected by the query contains amounts in more than one currency (for example, one BU has USD as its functional currency and another has EUR), then subtotals will not be available (USD and EUR amounts cannot be added in one field), and depending on the setup (see the next paragraph), the user may receive an error.

There are two ways to add the Currency field to an amount metric:

  1. In the form of a currency code, like USD, EUR, etc. For this, the user needs to add the field "Apps Common Currency Code" to the report. This field is in every subject area, usually under the table "Currency Tag" or "Currency Code".
  2. In the form of a currency symbol ($ for USD, € for EUR, etc.). For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in the Column Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area.
    1. Select the Column Properties option in the Edit list of a metric.
    2. In the Data Format tab, select Custom as Treat Number As.
    3. Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column] where Column is the "BI Common Currency Code" defined to take the currency code value based on the currency preference chosen by the user in the currency preference prompt.

Hot Off the Presses! Get Your Early Release of the December Procurement Newsletter!


Get all the recent news and featured topics for the Procurement modules including Purchasing, iProcurement, Sourcing and iSupplier. Find out what Procurement experts are recommending to prevent and resolve issues.  Webcast information and important links are also included.  The December newsletter features articles on:

  • Maximizing your search results to include the Procurement Community

  • Concurrent Processing Analyzer

  • Preventing FRM-40654 errors

And there is much, much more…..

Access the newsletter now:  DocID: 111111.1

Cutting edge technology, a lone Movember ranger and a 5-a-side football club ...meet the team at Oracle’s Belfast Offices.


By Olivia O’Connell

To see what’s in store at Oracle’s next Open Day, which comes to Belfast this week, I visited the offices with some colleagues to meet the team and get a feel for the event on November 29th.

After being warmly greeted by Frances and Francesca, who make sure Front of House and Facilities run smoothly, we embarked on a quick tour of the two floors Oracle occupies, led by VP Bo. Then it was time to seek out some willing volunteers to be interviewed and photographed. What a shy bunch! A bit of coaxing from the social media team was needed here!

In a male-dominated environment, the few women on the team caught my eye immediately. I got chatting to Susan, a business analyst, and Bronagh, a tech writer. It became clear during our chat that the male/female divide is not an issue. “Everyone here just gets on with the job,” says Susan. “We’re all around the same age and have similar priorities, and luckily everyone is really friendly, so there are no problems.” A graduate of Queen’s University Belfast majoring in maths & computer science, Susan works closely with product management and the development teams to ensure that the final project delivered to clients meets and exceeds their expectations. Bronagh, who joined us after working for a tech company in Montreal and gaining her postgraduate degree at the University of Ulster, agrees that the work is challenging but “the environment is so relaxed and friendly”.

Software developer David is taking the Movember challenge for the first time to raise vital funds and awareness for men’s health. Like other colleagues in the office, he is a University of Ulster graduate and works on Reference applications and Merchandising Tools which enable customers to establish e-shops using Oracle technologies.

The social activities are headed up by Gordon, a software engineer on the commerce team who joined the team 4 years ago after graduating from the University of Strathclyde at Glasgow with a degree in Computer Science.

Everyone is unanimous that the best things about working at Oracle’s Belfast offices are the casual friendly environment and the opportunity to be at the cutting edge of technology.

We’re looking forward to our next trip to Belfast to see some cool demos and meet candidates. And as for the camera-shyness? Look who came out to have their picture taken at the end of the day!

The Oracle offices in Belfast are located on the 6th floor, Victoria House, Gloucester Street, Belfast BT1 4LS, UK


View Larger Map

Open day takes place on Thursday, 29th November 4pm – 8pm.

Visit the 5 demo stations to find out more about each team's activities and projects to date. See live demos including the "Engaging the Customer", "Managing Your Store", "Helping the Customer", "Shopping On-line" and "The Commerce Experience" processes. The "Working @Oracle" stand will give you the chance to connect with our recruitment team and get information about the recruitment process and making your career path in Oracle.

Register here.

Partner Blog Series: PwC Perspectives - The Gotchas, The Do's and Don'ts for IDM Implementations


It is generally accepted among business communities that technology by itself is not a silver bullet to all problems, but when it is combined with leading practices, strategy, careful planning and execution, it can create a recipe for success. This post attempts to highlight some of the best practices, along with dos & don'ts, that our practice has accumulated over the years in the identity & access management space in general, and in the context of R2 in particular.

Best Practices

The following section illustrates the leading practices in “How” to plan, implement and sustain a successful OIM deployment, based on our collective experience.

Planning is critical, but often overlooked

A common approach to planning an IAM program that we identify with our clients is the three step process involving a current state assessment, a future state roadmap and an executable strategy to get there. It is extremely beneficial for clients to assess their current IAM state, perform gap analysis, document the recommended controls to address the gaps, align future state roadmap to business initiatives and get buy in from all stakeholders involved to improve the chances of success.

When designing an enterprise-wide solution, the scalability of the technology must accommodate the future growth of the enterprise and the projected identity transactions over several years. Aligning the implementation schedule of OIM to related information technology projects increases the chances of success.

As a baseline, it is recommended to match hardware specifications to the sizing guide for R2 published by Oracle. Adherence to this will help ensure that the hardware used to support OIM will not become a bottleneck as the adoption of new services increases. If your Organization has numerous connected applications that rely on reconciliation to synchronize the access data into OIM, consider hosting dedicated instances to handle reconciliation. Finally, ensure the use of clustered environment for development and have at least three total environments to help facilitate a controlled migration to production.

If your Organization is planning to implement role based access control, we recommend performing a role mining exercise and consolidate your enterprise roles to keep them manageable. In addition, many Organizations have multiple approval flows to control access to critical roles, applications and entitlements. If your Organization falls into this category, we highly recommend that you limit the number of approval workflows to a small set.

Most Organizations have operations managed across data centers with backend database synchronization, if your Organization falls into this category, ensure that the overall latency between the datacenters when replicating the databases is less than ten milliseconds to ensure that there are no front office performance impacts.

Ingredients for a successful implementation

During the development phase of your project, there are a number of guidelines that can be followed to help increase the chances for success.

Most implementations cannot be completed without the use of customizations. If your implementation requires this, it's a good practice to perform code reviews to help ensure quality and reduce code bottlenecks related to performance. We have observed at our clients that the development process works best when team members adhere to coding leading practices. Plan for time to correct coding defects and ensure developers are empowered to report their own bugs for maximum transparency.

Many organizations struggle with defining a consistent approach to managing logs. This is particularly important due to the amount of information that can be logged by OIM. We recommend Oracle Diagnostic Logging (ODL) as the alternative to be used for logging. ODL allows log files to be formatted in XML for easy parsing and does not require a server restart when log levels are changed during troubleshooting.

Testing is a vital part of any large project, and an OIM R2 implementation is no exception. We suggest that at least one lower environment use production-like data and connectors. Configurations should match as closely as possible. For example, use secure channels between OIM and target platforms in pre-production environments to test the configurations, the migration processes of certificates, and the additional overhead that encryption could impose.

Finally, we ask our clients to perform database backups regularly and before any major change event, such as a patch or migration between environments. In the lowest environments, we recommend to have at least a weekly backup in order to prevent significant loss of time and effort. Similarly, if your organization is using virtual machines for one or more of the environments, it is recommended to take frequent snapshots so that rollbacks can occur in the event of improper configuration.

Operate & sustain the solution to derive maximum benefits

When migrating OIM R2 to production, it is important to perform certain activities that will help achieve a smoother transition. At our clients, we have seen that splitting the OIM tables into their own tablespaces by categories (physical tables, indexes, etc.) can help manage database growth effectively. If we notice that a client hasn’t enabled the Oracle-recommended indexing in the applicable database, we strongly suggest doing so to improve performance. Additionally, we work with our clients to make sure that the audit level is set to fit the organization’s auditing needs and sometimes even allocate UPA tables and indexes into their own table-space for better maintenance. Finally, many of our clients have set up schedules for reconciliation tables to be archived at regular intervals in order to keep the size of the database(s) reasonable and result in optimal database performance.

For our clients that anticipate availability issues with target applications, we strongly encourage the use of the offline provisioning capabilities of OIM R2. This reduces the provisioning process's dependency on the availability of a given target application and helps avoid broken workflows. To account for this and other abnormalities, we also advocate that OIM's monitoring controls be configured to alert administrators to any abnormal situations.

Within OIM R2, we have begun advising our clients to utilize the ‘profile’ feature to encapsulate multiple commonly requested accounts, roles, and/or entitlements into a single item. By setting up a number of profiles that can be searched for and used, users will spend less time performing the same exact steps for common tasks.

We advise our clients to follow the Oracle-recommended guides for database and application server tuning, which provide a good baseline configuration. They offer guidance on database connection pools, connection timeouts, user interface threads, and proper handling of adapters/plug-ins. All of these are important configurations that will allow faster provisioning and web page response times.

Many of our clients have begun to recognize the value of data mining and a remediation process during the initial phases of an implementation (to help ensure high quality data gets loaded) and beyond (to support ongoing maintenance and business-as-usual processes). A successful program always begins with identifying the data elements and assigning a classification level based on criticality, risk, and availability. It should finish by following through with a remediation process.

Dos & Don’ts

Here are the most common dos and don'ts that we socialize with our clients, derived from our experience implementing the solution.

Dos:

  • Scope the project into phases with realistic goals. Look for quick wins to show success and value to the stakeholders.
  • Establish an enterprise ID (universal unique ID across the enterprise) early in the program.
  • Have a plan in place to patch during the project, which helps alleviate any major issues or roadblocks (product and database).
  • Assess your current state and prepare a roadmap to address your operational, tactical and strategic goals; align it with your business priorities.
  • Defer complex integrations to the later phases and take advantage of lessons learned from previous phases.
  • Have an identity and access data quality initiative built into your plan to identify and remediate data-related issues early on.
  • Identify an owner of the identity systems with fair IdM knowledge and empower them with the authority to make product-related decisions. This will help overcome any design hurdles.
  • Shadow your internal or external consulting resources during the implementation to build the product skills needed to operate and sustain the solution.

Don'ts:

  • Avoid "boiling the ocean" and trying to integrate all enterprise applications in the first phase.
  • Avoid major UI customizations that require code changes.
  • Avoid publishing all the target entitlements if you don't anticipate their usage during access requests.
  • Avoid integrating non-production environments with your production target systems.
  • Avoid creating multiple accounts for the same user on the same system, if there is an opportunity to do so.
  • Avoid creating complex approval workflows that would negatively impact productivity and SLAs.
  • Avoid creating complex designs that are not sustainable long term and would need a major overhaul during upgrades.
  • Avoid treating IAM as a point solution; have an appropriate level of communication and a training plan for IT and business users alike.


Conclusion

In our experience, identity programs will struggle with scope, proper resourcing, and more. We suggest that companies consider the suggestions discussed in this post and leverage them to help enable their identity and access program. This concludes PwC's blog series on R2 for the month, and we sincerely hope that the information we have shared thus far has been beneficial.

For more information or if you have questions, you can reach out to Rex Thexton, Senior Managing Director, PwC and or Dharma Padala, Director, PwC. We look forward to hearing from you.

Meet the Writers:


Dharma Padala is a Director in the Advisory Security practice within PwC.  He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors.   Dharma has 14 years of experience in delivering IT solutions out of which he has been implementing Identity Management solutions for the past 8 years.


Praveen Krishna is a Manager in the Advisory Security practice within PwC.  Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries.  His experience includes delivering security across diverse topics like network, infrastructure, application and data where he brings a holistic point of view to problem solving.


Scott MacDonald is a Director in the Advisory Security practice within PwC.  He has consulted for several clients across multiple industries including financial services, health care, automotive and retail.   Scott has 10 years of experience in delivering Identity Management solutions.


John Misczak is a member of the Advisory Security practice within PwC.  He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Engineering Language (BPEL).


Ancillary Objects: Separate Debug ELF Files For Solaris

We introduced a new ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes ancillary objects, using material originally written during their development, the PSARC case, and the Solaris Linker and Libraries Manual.

ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file.

There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out— customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits.

We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files.

The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where:

  • Hundreds of users may be executing the code on large shared systems. (32-bit code uses less memory and bus bandwidth, and on SPARC runs just as fast as 64-bit code.)

  • Complex finely tuned code, where the original authors may no longer be available.

  • Critical production code, that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue.
Users in these risk-averse and/or high-scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway.

Design

The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them.
  • The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files).

  • A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other.

  • Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, are copied into both objects.

  • As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status.

  • Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary.

  • If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature. This is important, because they exist to serve a small subset of our users, and must not complicate the common case.

  • If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost.

The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files).
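The shared-section-header invariant described above is easy to verify mechanically. The following is a minimal Python sketch of such a check; the (name, flags) tuple representation and the flag bit value are hypothetical placeholders, since real code would read actual section headers with an ELF library:

```python
def check_ancillary_pair(primary_shdrs, ancillary_shdrs):
    """Sanity-check the ancillary-object invariant: both files carry the
    same section headers, with the same names, in the same order, and no
    section's data is absent from both files. Headers are modeled here
    as (name, flags) tuples; the SHF_SUNW_ABSENT bit value below is a
    placeholder for illustration, not the real flag value."""
    SHF_SUNW_ABSENT = 0x00200000  # hypothetical bit, for illustration only
    if len(primary_shdrs) != len(ancillary_shdrs):
        return False              # same set of headers in both files
    for (p_name, p_flags), (a_name, a_flags) in zip(primary_shdrs, ancillary_shdrs):
        if p_name != a_name:
            return False          # same names, same order (same indices)
        p_absent = bool(p_flags & SHF_SUNW_ABSENT)
        a_absent = bool(a_flags & SHF_SUNW_ABSENT)
        if p_absent and a_absent:
            return False          # the section data must live in one file
    return True
```

Sections copied into both files, such as the symbol table, simply carry the absent flag in neither header, which the check above also accepts.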

Among the benefits of this approach are:

  • There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used.

  • The section contents are identical in either case— there is no need to alter data to accommodate multiple files.

  • It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification.

  • The 4GB limit on a 32-bit output object is effectively raised to 4GB of code plus 4GB of debug data. There is also the future possibility (not currently supported) of multiple ancillary objects, each of which could contain up to 4GB of additional debug data. Note, however, that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful.

Using Ancillary Objects (From the Solaris Linker and Libraries Guide)

By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so have no impact on memory use or other aspects of runtime performance, regardless of their size.

For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections.

  • To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks.

  • To support fine-grained debugging of highly optimized code, which requires considerable debug data. In modern systems, the debug data can easily be larger than the code it describes. The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object.

  • To limit the exposure of internal implementation details.

Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option.

$ ld ... -z ancillary[=outfile] ...

By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be specified by supplying an outfile value to the -z ancillary option.

When -z ancillary is specified, the link-editor performs the following actions.

  • All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object.

  • All remaining non-allocable sections are written to the ancillary object.

  • The following non-allocable sections are written to both the primary object and ancillary object.

    .shstrtab

    The section name string table.

    .symtab

    The full non-dynamic symbol table.

    .symtab_shndx

    The symbol table extended index section associated with .symtab.

    .strtab

    The non-dynamic string table associated with .symtab.

    .SUNW_ancillary

    Contains the information required to identify the primary and ancillary objects, and to identify the object being examined.

  • The primary object and all ancillary objects contain the same array of section headers. Each section has the same section index in every file.

  • Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0.

This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either of the primary or ancillary objects.

The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc.

$ cat hello.c
#include <stdio.h>

int
main(int argc, char **argv) 
{ 
        (void) printf("hello, world\n");
        return (0);
}
$ cc -g -zancillary hello.c
$ file a.out a.out.anc
a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically
       linked, not stripped, ancillary object a.out.anc
a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out
$ ./a.out
hello, world

The resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects, and then stripped of non-allocable content using the strip or mcs commands.

As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example.

Index  Section Name     Type            Primary Flags    Ancillary Flags              Primary Size  Ancillary Size

13     .text            PROGBITS        ALLOC EXECINSTR  ALLOC EXECINSTR SUNW_ABSENT  0x131         0
20     .data            PROGBITS        WRITE ALLOC      WRITE ALLOC SUNW_ABSENT      0x4c          0
21     .symtab          SYMTAB          0                0                            0x450         0x450
22     .strtab          STRTAB          STRINGS          STRINGS                      0x1ad         0x1ad
24     .debug_info      PROGBITS        SUNW_ABSENT      0                            0             0x1a7
28     .shstrtab        STRTAB          STRINGS          STRINGS                      0x118         0x118
29     .SUNW_ancillary  SUNW_ancillary  0                0                            0x30          0x30

The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime are found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime are placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab.

It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbols contained within it.

The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.

$ elfdump -T SUNW_ancillary a.out a.out.anc
a.out:
Ancillary Section:  .SUNW_ancillary
     index  tag                    value
       [0]  ANC_SUNW_CHECKSUM     0x8724              
       [1]  ANC_SUNW_MEMBER       0x1         a.out
       [2]  ANC_SUNW_CHECKSUM     0x8724         
       [3]  ANC_SUNW_MEMBER       0x1a3       a.out.anc
       [4]  ANC_SUNW_CHECKSUM     0xfbe2              
       [5]  ANC_SUNW_NULL         0                   

a.out.anc:
Ancillary Section:  .SUNW_ancillary
     index  tag                    value
       [0]  ANC_SUNW_CHECKSUM     0xfbe2              
       [1]  ANC_SUNW_MEMBER       0x1         a.out
       [2]  ANC_SUNW_CHECKSUM     0x8724              
       [3]  ANC_SUNW_MEMBER       0x1a3       a.out.anc
       [4]  ANC_SUNW_CHECKSUM     0xfbe2              
       [5]  ANC_SUNW_NULL         0          

The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined.

  • The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects.

  • The names of the primary and all associated ancillary objects can be obtained from the ancillary section from any one of the files.

  • It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows.

Debugger Access and Use of Ancillary Objects

Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table.

The following steps can be used by a debugger to assemble the information contained in these files.

  1. Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and its contents can be used to obtain a complete list of the files and to determine which of those files is the one currently being examined.

  2. Create a section header array in memory, using the section header array from the object being examined as an initial template.

  3. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.

The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program.


Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.


ELF Implementation Details (From the Solaris Linker and Libraries Guide)

To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and two new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual.

Part IV ELF Application Binary Interface

Chapter 13: Object File Format
Object File Format

Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type.
e_type
Identifies the object file type, as listed in the following table.
Name               Value   Meaning
ET_NONE            0       No file type
ET_REL             1       Relocatable file
ET_EXEC            2       Executable file
ET_DYN             3       Shared object file
ET_CORE            4       Core file
ET_LOSUNW          0xfefe  Start operating system specific range
ET_SUNW_ANCILLARY  0xfefe  Ancillary object file
ET_HISUNW          0xfefd  End operating system specific range
ET_LOPROC          0xff00  Start processor-specific range
ET_HIPROC          0xffff  End processor-specific range
Sections

Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section.

...

sh_type

Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5.
sh_flags
Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8.
...
Table 13-5 ELF Section Types, sh_type

Name                Value
.
.
.
SHT_LOSUNW          0x6fffffee
SHT_SUNW_ancillary  0x6fffffee
.
.
.

...

SHT_LOSUNW - SHT_HISUNW

Values in this inclusive range are reserved for Oracle Solaris OS semantics.
SHT_SUNW_ANCILLARY
Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section.

...

Table 13-8 ELF Section Attribute Flags

Name                Value
.
.
.
SHF_MASKOS          0x0ff00000
SHF_SUNW_NODISCARD  0x00100000
SHF_SUNW_ABSENT     0x00200000
SHF_SUNW_PRIMARY    0x00400000
SHF_MASKPROC        0xf0000000
.
.
.

...

SHF_SUNW_ABSENT

Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object, while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files.

SHF_SUNW_PRIMARY

The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status.

...

Two members of the section header, sh_link and sh_info, hold special information, depending on section type.

Table 13-9 ELF sh_link and sh_info Interpretation

sh_type             sh_link                                                    sh_info
.
.
.
SHT_SUNW_ANCILLARY  The section header index of the associated string table.  0
.
.
.

Special Sections

Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section.

Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.

Table 13-10 ELF Special Sections

Name             Type                Attribute
.
.
.
.SUNW_ancillary  SHT_SUNW_ancillary  None
.
.
.

...

.SUNW_ancillary

Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details.

...

Ancillary Section

Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF.
In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others.

This section contains an array of the following structures. See sys/elf.h.

typedef struct {
        Elf32_Word      a_tag;
        union {
                Elf32_Word      a_val;
                Elf32_Addr      a_ptr;
        } a_un;
} Elf32_Ancillary;

typedef struct {
        Elf64_Xword     a_tag;
        union {
                Elf64_Xword     a_val;
                Elf64_Addr      a_ptr;
        } a_un;
} Elf64_Ancillary;
For each object with this type, a_tag controls the interpretation of a_un.
a_val
These objects represent integer values with various interpretations.

a_ptr
These objects represent file offsets or addresses.
The following ancillary tags exist.
Table 13-NEW1 ELF Ancillary Array Tags

Name               Value  a_un

ANC_SUNW_NULL      0      Ignored
ANC_SUNW_CHECKSUM  1      a_val
ANC_SUNW_MEMBER    2      a_ptr

ANC_SUNW_NULL
Marks the end of the ancillary section.

ANC_SUNW_CHECKSUM
Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member.

ANC_SUNW_MEMBER
Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.
An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:

Tag                Meaning

ANC_SUNW_CHECKSUM  Checksum of this object
ANC_SUNW_MEMBER    Name of object #1
ANC_SUNW_CHECKSUM  Checksum for object #1
.
.
.
ANC_SUNW_MEMBER    Name of object N
ANC_SUNW_CHECKSUM  Checksum for object N
ANC_SUNW_NULL

An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match.

Related Other Work

The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html.

At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor.

It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU-style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections when using this approach.

The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them. The details of modifying the DWARF data to be usable in this form are involved; please see the above URL for details.

Virtual Developer Day: Oracle Fusion Development - Dec 12


You can't gift wrap it, and you can't stuff it in a stocking, but you can give yourself the gift of improved Oracle Fusion Development skills by participating in Virtual Developer Day: Oracle Fusion Development.

This free online event features six sessions in three tracks, plus live hands-on labs, all focused on Fusion Development with Oracle ADF. The event takes place on Tuesday December 11th, 2012, and is scheduled specifically for those in EMEA.

Visit the registration page for more information, plus complete agenda information and session abstracts.
