
How I Setup SQL Developer to Look and Run


I do a lot of live presentations and demonstrations. I’m frequently asked, ‘why does your SQL Developer look that way?’ or ‘How can I make SQL Developer work that way?’

It’s all about the preferences (or options.)

So here’s what you can do if you want to be like me. By the way, I can’t really help you if you want to be like me, that’s a matter for a psychiatry blog.

Here are the out-of-the-box options I tweak:

Visual Stuff

Fonts

I go big, easier for folks to see in demos.

Grids

The checkerboard background makes the data easier to read, and I like my result sets pinned by default.


Navigation Filters
New for version 4.0, disable the object types you don't use or care about.

Behavior

Automatic stuff OFF

I'll invoke with the keyboard (Ctrl+Space), thank you very much.


SQL History Limit
Tools – Preferences – Database – Worksheet – SQL History Limit: I bump this up to 500.

Custom Editor Code Template
See the previous post. I like to add one for my contact info so I can bring it up easily in an editor. You’ll want to add all of yours, of course.

Bonus Tip if you ever do a Presentation

On Windows, go download ZoomIt. It’s free, and it’s awesome.


ArchBeat Twitter Tuesday - Top 10 Tweets - Feb 11-17, 2014


Webinar: Improve the Employee Experience


By Chris Leone

I am returning from our HCM World conference where I had an opportunity to speak with over 100 Oracle HCM customers in three days. The customers shared their stories of deploying HCM in the cloud or their plans to do so.

Momentum of customers transitioning to the cloud is continuing to grow. One of the overall objectives of these HCM initiatives is providing better experiences for their employees.  HR is rapidly transitioning from a system for passively managing your personal profiles, updating time reports, and getting a paycheck, to an active channel to interact with employees from the time they apply for a position, to welcoming them on board, creating aspirational career plans, and enabling them to execute those plans. Employees are looking for new ways to communicate with the company and with each other. Providing compelling experiences is critical to the success of such programs.

I shared my observations with an industry analyst I respect, Mark Smith of Ventana Research, who noted that he had just completed some research on the very same subject. After a lively, hour-long discussion, we thought we'd share these thoughts and strategies with you.

We will try to recapture the vigorous conversation in a webinar titled 'Top Trends in 2014 to Improve the Employee Experience', and we invite you to join the conversation. Get more information via the link below; we look forward to hearing how you are improving experiences for your co-workers.




Chris Leone is Senior Vice President of Development for Oracle Fusion Human Capital Management, as well as Oracle Taleo Enterprise Cloud Service and Oracle Taleo Social Sourcing Cloud Service, all part of Oracle Application Cloud Services. In this role, Mr. Leone is responsible for driving the strategy, product management, product development, and product go-to-market functions.

For more than 20 years, Mr. Leone has been developing enterprise software applications for large and midsize companies. Mr. Leone came to Oracle via the acquisition of PeopleSoft, where he was responsible for the product management activities of the company's financial management and enterprise performance management product lines. Prior to joining PeopleSoft, Mr. Leone was Vice President of Marketing and Product Management at Hyperion, where he was responsible for product strategy for business performance management solutions.

Mr. Leone earned his bachelor's degree in accounting and finance and a master's in finance and management from Loyola Marymount University.

Recap of Oracle GoldenGate 12c Webcast with Q&A


Simply amazing! That’s how I would summarize last week’s webcast for Oracle GoldenGate 12c.  It was a very interactive event with hundreds of live attendees and hundreds of great questions. In the presentation part my colleagues, Doug Reid and Joe deBuzna, went over the new features of Oracle GoldenGate 12c. They explained Oracle GoldenGate 12c key new features including:

  • Integrated Delivery for Oracle Database,
  • Coordinated Delivery for non-Oracle databases,
  • Support for Oracle Database 12c multitenant architecture,
  • Enhanced high availability via integration with Oracle Data Guard Fast-Start Failover,
  • Expanded heterogeneity, i.e. support for new databases and operating systems,
  • Improved security,
  • Low-downtime database migration solutions for Oracle E-Business Suite,
  • Integration with Oracle Coherence.

We also had a nice, long, live Q&A section. In previous Oracle GoldenGate webcasts, we could not respond to all audience questions in the 10-15 minute timeframe at the end of the presentation. This time we kept the presentation part short and left more than 30 minutes for Q&A. To our surprise, we still could not answer even half of the questions we received.

If you missed this great webcast discussing the new features of Oracle GoldenGate 12c,  and more than 30 minutes of Q&A with GoldenGate Product Management, you can still watch it on demand via the link below.

On Demand Webcast: Introducing Oracle GoldenGate 12c: Extreme Performance Simplified

In this blog post I would like to provide brief answers from our PM team for some of the questions that we were not able to answer during the live webcast.

1) Does Oracle GoldenGate replicate DDL statements or DML for Oracle Database?

    Oracle GoldenGate replicates DML and DDL operations for Oracle Database and Teradata.

2) Where do we get more info on how to setup integration with Data Guard Fast-Start Failover (FSFO)?

     Please see the following blog posts or documents on My Oracle Support:

Best Practice - Oracle GoldenGate and Oracle Data Guard - Switchover/Fail-over Operations for GoldenGate    [My Oracle Support Article ID   1322547.1] 

Best Practice - Oracle GoldenGate 11gr2 integrated extract and Oracle Data Guard - Switchover/Fail-over Operations  [My Oracle Support Article ID 1436913.1] 

3) Does GoldenGate support SQL Server 2012 extraction? In the past only apply was supported.

Yes, starting with the new 12c release, GoldenGate supports capture from SQL Server 2012 in addition to its existing delivery capabilities.

4) Which RDBMS does GoldenGate 12c support?

GoldenGate supports all major RDBMSs. For a full list of supported platforms, please see the Oracle GoldenGate certification matrix.

5) Could you provide some more details please on Integrated Delivery for dynamic parallel threads at Target side?

Please check out our white papers on Oracle GoldenGate 12c resource kit for more details on the new features, and how Oracle GoldenGate 12c works with Oracle Database. 

6) What is the best way to sync partial data (based on some selection criterion) from a table between databases?

Please refer to the article: How To Resync A Single Table With Minimum Impact To Other Tables' Replication? [Article ID 966211.1]

7) How can GoldenGate be better than database trigger to push data into custom tables?

Triggers can cause high CPU overhead, in some cases almost double compared to reading from redo or transaction logs. In addition, they are intrusive to the application and cause management overhead as application changes. Oracle GoldenGate's log-based change data capture is not only low-impact in terms of CPU utilization, but also non-intrusive to the application with low maintenance requirements.

8) Are there any customers in the manufacturing industry using GoldenGate and for which application?

We have many references in manufacturing. In fact, SolarWorld USA was our guest speaker in the executive video webcast last November. You can watch the interview here. RIM Blackberry uses Oracle GoldenGate for multi-master replication between its global manufacturing systems. Here is another manufacturing customer story from AkzoNobel.

9) Does GoldenGate 12c support compressed objects for replication? Also, does it support BLOB/CLOB columns?

Yes, GoldenGate 12c and GoldenGate 11gR2 both support compressed objects. GoldenGate has been supporting BLOB/CLOB columns since version 10.

10) Is Oracle Database 11.2.0.4 mandatory to use GoldenGate 12c Integrated Delivery? Not earlier versions?

Yes. To use GoldenGate 12c's Integrated Delivery, Oracle Database 11.2.0.4 or above is required for the target environment.

11) We have had an Oracle Streams implementation for more than 5 years. We would like to migrate to GoldenGate; however, older versions of GoldenGate did not support filtering individual transactions. Is it supported in GoldenGate 12c?

      Yes, it is supported in GoldenGate 12c.


In future blog posts I will continue to provide answers to common questions we received in the webcast. In the meantime, I highly recommend watching the Introducing Oracle GoldenGate 12c: Extreme Performance Simplified webcast on demand.

Develop Java Applications Using a Raspberry Pi


Ready to dive into the Internet of Things? Take the new, free, online course "Develop Java Embedded Applications Using a Raspberry Pi." The Oracle Learning Library has created this course which provides code, examples, and experts to teach you and answer your questions.

Java experts Stephen Chin, Jim Weaver, Simon Ritter, Angela Caicedo, and Tom McGinn will lead you through basic exercises. Each week, you'll get a new set of course materials:

  • A series of short, pre-recorded videos provide the "lecture" portion of the course.
  • A homework project is linked to the video material, and applies what you have learned by working with Java ME Embedded, the Raspberry Pi, and some electronic components.
  • A graded quiz evaluates how well you have grasped the materials and the homework.

Order your equipment now so you can have it in time for the course start on March 31st!

Here are a few FAQs (You can send questions or comments to Java-MOOC-Support.)

Q: Is the course free or do we have to pay for it?

A: The course is free. There is hardware you will need in order to complete the labs (homework), but the course materials are free.

Q: It starts end of March and goes on for five weeks, but how often / how long will sessions occur? 5x 1hour? Or 25 full days? At what time will the sessions occur? 

A: This course is delivered entirely on-line.  There are no set times for sessions because the training is in pre-recorded video that you can watch anytime, anywhere.  Each week Oracle will release the materials for that week, and you should expect to spend between 4 and 6 hours each week on the lessons, the labs and the quizzes.
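(In other words, at 4 to 6 hours per week for the five weeks, plan for roughly 20 to 30 hours of total effort.)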

Q: Is there anything special about the kit (e.g. parts that are just for the MOOC that you wouldn't normally be able to buy)? 

A: The parts for the MOOC are only special in that they represent the use case presented by the course: an asset management system designed to provide data on container shipments of fresh produce, covering container door access (switch), temperature (I2C), global position (UART), and of course a Raspberry Pi and breadboard as the development platform. The devices are available through Adafruit and are relatively inexpensive.

Register now for "Develop Java Embedded Applications Using a Raspberry Pi."

Proving and Extending Your Oracle Linux Expertise


If you feel you have a good level of hands-on experience and Linux system administration knowledge, prove this by taking the Oracle Linux 5 & 6 System Administrator Oracle Certified Associate exam. In this exam, you will get 150 minutes to attempt 111 multiple choice questions. To reinforce your exam preparation you can take the Oracle Linux System Administration training course.

To go to the next level of system administration expertise, learn about advanced Oracle Linux features by taking the Oracle Linux Advanced System Administration course. In this three-day, instructor-led course you will discover how to take advantage of Btrfs to improve system performance and how to use Linux containers to increase your resource utilization by creating secure, isolated environments on a single host.

You can take this course as a:

  • Live-Virtual Event: Take this course from your own desk - no travel required. Choose from a selection of events on the schedule to suit different timezones.
  • In-Class Event: Travel to an education center to take this course. Below is a selection of the events already on the schedule.

Location | Date | Delivery Language
Melbourne, Australia | 13 March 2014 | English
Sydney, Australia | 13 August 2014 | English
Jakarta, Indonesia | 12 May 2014 | English
Kuala Lumpur, Malaysia | 9 June 2014 | English
Lagos, Nigeria | 26 May 2014 | English
Istanbul, Turkey | 17 March 2014 | Turkish
Edison, NJ, United States | 28 May 2014 | English
Irving, TX, United States | 16 April 2014 | English
Caracas, Venezuela | 24 February 2014 | Spanish

To register for this course, request an additional event or learn more about the Oracle Linux curriculum, go to http://oracle.com/education/linux.

Zyme Relies on MySQL Enterprise Edition to Deliver High Quality Global Channel Insights to Customers


Zyme, based in Redwood Shores, California, is the leading global provider of Channel Data Management (CDM) solutions to companies selling through indirect channels. For high-tech and consumer electronic products alone, over $1 trillion USD worth of goods flows through those indirect sales channels every year. However, when companies sell products through multi-tier channel partners and retailers around the world, acquiring global, standardized channel inventory and sales data cost-effectively has proven to be challenging. As a result, companies lacking such critical information often miss opportunities to make timely and accurate business decisions, whether to increase revenue, reduce costs or prevent losses.

Having a vision to solve such channel visibility problems for customers including Symantec, Logitech, Seagate and Xerox, Zyme built its channel data management solutions that not only get reliable, high-quality channel data from thousands of partners worldwide, but also have the capability to integrate with customers’ existing on-premise or cloud CRM, Data Warehousing or Business Intelligence systems to bring such channel visibility and information to the field sales and marketing teams and drive better business results.

The Business Challenge 

Zyme was founded with a mission to improve channel visibility after witnessing the following issues:

  • Lack of a cost-effective infrastructure to capture channel activities globally
  • Lack of a global standard for channel data reporting, such as point-of-sales (POS) data
  • Poor partner compliance and low quality of data reporting 

To build a system that is capable of handling critical channel data across continents cost-effectively, Zyme was looking for a database to support its solution that automatically captures, validates, cleanses and synchronizes the channel data, which then provides a high-quality view of data that correctly reflects Zyme’s customers’ sales and inventory activities on a daily and weekly basis. In addition, the database has to support millions of transactions every day given the huge volume of channel data flowing into Zyme’s channel solution from all over the world.

The MySQL Solution 

Zyme selected MySQL since the launch of its products because it met all the following requirements Zyme needed for its mission-critical channel data solution:

  • ACID compliant
  • Ease of use and administration
  • Open source
  • Cost-effective support services backed by a well-recognized company  

Currently MySQL stores 2.5 terabytes of data, comprising 1 billion records Zyme collects from retailers and distributors across the globe. Using a master-slave replication topology, Zyme makes the master MySQL database responsible for receiving incoming data and processing over 50 million transactions per month, with two layers of slave databases handling reporting and backups respectively.

To ensure the channel activities are captured consistently and correctly, one of the critical missions for Zyme’s DBA team is to minimize unplanned downtime and data corruption, and to restore the data to a previous time in the rare case that something goes wrong. The team had tried out various backup solutions, both commercial and open source ones; however those tools either provided merely file-level backup or required a lot of manual setup and configuration processes which made backup very difficult. Moreover, Zyme has a unique need of creating a lot of temporary tables, as many as 200 to 300 on top of its 600GB to 800GB database, and the other backup tools just couldn’t keep up with the volume of data Zyme needed to archive. MySQL Enterprise Backup, with its “point-in-time recovery” feature, allows Zyme to recover data to a previous time easily when an error happens, without taking the system down. Furthermore, MySQL Enterprise Backup provides many additional benefits to Zyme, including: 

  • One single utility for both backup and recovery
  • Easy-to-find, easy-to-configure backup options
  • Adjustable read, write and compression speeds for better flexibility
  • Easy automation for backup processes
  • Easy-to-access backup data which is stored right in the database – no need for a separate repository
  • Incremental backup for InnoDB tables to save disk space
  • Backward compatibility – using InnoDB Plug-in, the complete backup and recovery features can be used by databases still on MySQL 5.1

Zyme also takes advantage of the audit functionality in MySQL Enterprise Audit to audit users who log into the system. The DBA team is currently in the process of upgrading production servers from MySQL 5.1 to MySQL 5.5 so the audit plug-in, supported in MySQL 5.5.28 and above, can be used more broadly to improve overall database security. In the next phase, the production servers will be upgraded again to MySQL 5.6, the most current GA version of MySQL, to fully leverage the latest features and further enhance the performance, security and reliability of Zyme’s MySQL databases.

“As a DBA, it’s my job to always make sure we have a consistent backup, a good monitoring solution, plus an audit tool to maintain data integrity, performance and security, and that’s why I strongly advocate MySQL Enterprise Edition, where I can find all the features I need in one place, to support the MySQL environment at Zyme”. Prasad Gowda, Associate Director - DBA, Zyme

"MySQL Enterprise Backup is a powerful, yet very easy-to-use tool. It offers one utility for both backup and recovery, lays out options that are easy to find, understand and configure, and provides great flexibility with backup customizations. It's as easy as using a cookie-cutter: just setting the parameters, pointing to the instances, taking the snapshots, and we got the backup done. More importantly, we achieved much better results with MySQL Enterprise Backup, using less than 10 percent of the time we used to spend just researching other backup tools in the market. It has become an indispensable tool for the DBA team at Zyme." Prasad Gowda, Associate Director - DBA, Zyme

Learn More about Zyme: http://www.zymesolutions.com/ 
Read more MySQL customer stories: http://www.mysql.com/customers/ 

Interview with Authors of "NetBeans Platform for Beginners" (Part 2)


In part 1, Jason Wexbridge and Walter Nyland were interviewed about the book they're working on about the NetBeans Platform.

https://leanpub.com/nbp4beginners

I caught up with them again, in the final stages of working on their book, to ask them some questions.

Hi Jason and Walter, how are things going with the book?

Jason: Great. Really finishing touches time, at this point.

Walter: It's been an intense time, but we're coming to the end of it now.

It's not a pamphlet, then?

Walter: Indeed not. We're currently at page 350 and it looks like we'll end up with around 370 pages or so. It's a lot of code, a lot of samples, a lot of detailed instructions for making the most with the many solutions and components that the NetBeans Platform provides.

Jason: We've had a lot of help from several people in the NetBeans community, in terms of review comments and insights, such as from Benno Markiewicz and Michael Bishop, while Sean Phillips is also involved now. In addition, some newbies just starting out with the NetBeans Platform are also reviewing early versions of the book. In total, we have about 5 reviewers at this point, which has been immensely beneficial in helping us position the content of the book correctly.

What do you consider to be the highlights of the book?

Walter: Well, to me, the highlight is that it is exactly the book I would have wanted to have when I started working with the NetBeans Platform.

Jason: Right, that's the book we wanted to write. It genuinely aims to provide complete solutions and understandings of all the core topics, that is, module system, file system, window system, action system, nodes, explorer views, visual library, palette, project system, and a long list of miscellaneous topics, such as dialogs and wizards.

What about JavaFX? Source editor? Maven?

Jason: Out of scope for the current book, though definitely the focus of the books we'll be working on next.

Wow, awesome news. Great work. Any final thoughts?

Walter: Please simply click this link and sign up for the book; just tell us how much you think it would make sense for us to charge for it, based on the two interviews we've now given and the sample that can be downloaded there. And when you leave your e-mail address, you'll be informed as soon as we publish the book!

https://leanpub.com/nbp4beginners

Many thanks, Jason and Walter! And, NetBeans Platform developers out there, please support this great effort by going to the site above and reading the PDF, leaving comments for Jason and Walter, and by telling them how much you'd pay for the completed book.


SPARC T5-2 Produces SPECjbb2013-MultiJVM World Record for 2-Chip Systems


The SPECjbb2013 benchmark shows modern Java application performance. Oracle's SPARC T5-2 set a two-chip world record, which is 1.8x faster than the best two-chip x86-based server. Using Oracle Solaris and Oracle Java, Oracle delivered this two-chip world record result on the MultiJVM SPECjbb2013 metric.

  • The SPARC T5-2 server achieved 114,492 SPECjbb2013-MultiJVM max-jOPS and 43,963 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark. This result is a two-chip world record.

  • The SPARC T5-2 server running SPECjbb2013 is 1.8x faster than the Cisco UCS C240 M3 server (2.7 GHz Intel Xeon E5-2697 v2) based on both the SPECjbb2013-MultiJVM max-jOPS and SPECjbb2013-MultiJVM critical-jOPS metrics.

  • The SPARC T5-2 server running SPECjbb2013 is 2x faster than the HP ProLiant ML350p Gen8 server (2.7 GHz Intel Xeon E5-2697 v2) based on SPECjbb2013-MultiJVM max-jOPS and 1.3x faster based on SPECjbb2013-MultiJVM critical-jOPS.

  • The new Oracle results were obtained using Oracle Solaris 11 along with Oracle Java SE 8 on the SPARC T5-2 server.

  • The SPARC T5-2 server running SPECjbb2013 on a per chip basis is 1.3x faster than the NEC Express5800/A040b server (2.8 GHz Intel Xeon E7-4890 v2) based on both the SPECjbb2013-MultiJVM max-jOPS and SPECjbb2013-MultiJVM critical-jOPS metrics.

  • There are no IBM POWER7 or POWER7+ based server results on the SPECjbb2013 benchmark. IBM has published results for IBM POWER7+ based servers on SPECjbb2005, which was retired by SPEC in 2013.

Performance Landscape

Results of SPECjbb2013 from www.spec.org as of February 18, 2014 and this report. These are the leading 2-chip SPECjbb2013 MultiJVM results.

SPECjbb2013 - 2-Chip MultiJVM Results

System | Processor | SPECjbb2013-MultiJVM max-jOPS | SPECjbb2013-MultiJVM critical-jOPS | JDK
SPARC T5-2 | 2 x SPARC T5, 3.6 GHz | 114,492 | 43,963 | Oracle Java SE 8
Cisco UCS C240 M3 | 2 x Intel E5-2697 v2, 2.7 GHz | 63,079 | 23,797 | Oracle Java SE 7u45
HP ProLiant ML350p Gen8 | 2 x Intel E5-2697 v2, 2.7 GHz | 62,393 | 24,310 | Oracle Java SE 7u45
IBM System x3650 M4 BD | 2 x Intel E5-2695 v2, 2.4 GHz | 59,124 | 22,275 | IBM SDK V7 SR6 (*)
HP ProLiant ML350p Gen8 | 2 x Intel E5-2697 v2, 2.7 GHz | 57,594 | 32,103 | Oracle Java SE 7u40
HP ProLiant BL460c Gen8 | 2 x Intel E5-2697 v2, 2.7 GHz | 56,367 | 30,078 | Oracle Java SE 7u40
Sun Server X4-2, DDR3-1600 | 2 x Intel E5-2697 v2, 2.7 GHz | 52,664 | 20,553 | Oracle Java SE 7u40
HP ProLiant DL360e Gen8 | 2 x Intel E5-2470 v2, 2.4 GHz | 48,772 | 17,915 | Oracle Java SE 7u40

* IBM SDK V7 SR6 – IBM SDK, Java Technology Edition, Version 7, Service Refresh 6

The following table compares the SPARC T5 processor to the Intel E7 v2 processor.

SPECjbb2013 - Results Using JDK 8, Per Chip Comparison

System | SPECjbb2013-MultiJVM max-jOPS | critical-jOPS | max-jOPS/Chip | critical-jOPS/Chip | JDK
SPARC T5-2 (2 x SPARC T5, 3.6 GHz) | 114,492 | 43,963 | 57,246 | 21,981 | Oracle Java SE 8
NEC Express5800/A040b (4 x Intel E7-4890 v2, 2.8 GHz) | 177,753 | 65,529 | 44,438 | 16,382 | Oracle Java SE 8
SPARC per-chip advantage | | | 1.29x | 1.34x |
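(The per-chip columns simply divide each system's totals by its chip count: 114,492 / 2 = 57,246 and 43,963 / 2 ≈ 21,981 for the SPARC T5-2, versus 177,753 / 4 ≈ 44,438 and 65,529 / 4 ≈ 16,382 for the NEC system, which gives the 1.29x and 1.34x per-chip advantages shown.)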

Configuration Summary

System Under Test:

SPARC T5-2 server
2 x SPARC T5, 3.60 GHz
512 GB memory (32 x 16 GB dimms)
Oracle Solaris 11.1
Oracle Java SE 8

Benchmark Description

The SPECjbb2013 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."

SPECjbb2013 features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service-level agreements (SLAs) specifying response times ranging from 10ms to 500ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

See Also

Benchmark Tags

SPEC, Intel, SPECjbb2013, Solaris, Java, x86, Application, SPARC, T5

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of 2/18/2014, see http://www.spec.org for more information. SPARC T5-2 114,492 SPECjbb2013-MultiJVM max-jOPS, 43,963 SPECjbb2013-MultiJVM critical-jOPS, result from blogs.oracle.com/BestPerf/entry/20140218_t5_2_specjbb2013; NEC Express5800/A040b 177,753 SPECjbb2013-MultiJVM max-jOPS, 65,529 SPECjbb2013-MultiJVM critical-jOPS, result from www.necam.com/docs/?id=9ce64b18-e7af-4d37-af35-9e67a9bff9da. The following JDK 7 results are found at www.spec.org/jbb2013/results/jbb2013.html: Cisco UCS c240 M3 63,079 SPECjbb2013-MultiJVM max-jOPS, 23,797 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant ML350p Gen8 62,393 SPECjbb2013-MultiJVM max-jOPS, 24,310 SPECjbb2013-MultiJVM critical-jOPS; IBM System X3650 M4 BD 59,124 SPECjbb2013-MultiJVM max-jOPS, 22,275 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant ML350p Gen8 57,594 SPECjbb2013-MultiJVM max-jOPS, 32,103 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant BL460c Gen8 56,367 SPECjbb2013-MultiJVM max-jOPS, 30,078 SPECjbb2013-MultiJVM critical-jOPS; Sun Server X4-2 52,664 SPECjbb2013-MultiJVM max-jOPS, 20,553 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant DL360e Gen8 48,772 SPECjbb2013-MultiJVM max-jOPS, 17,915 SPECjbb2013-MultiJVM critical-jOPS.

New Recognition Opportunity for Oracle Community Members


Oracle ACE Program Announces: Oracle ACE Associate

The Oracle ACE program recognizes individuals in the community for sharing their insight and real-world experience. It has over 400 participants in 50+ countries worldwide. The program now comprises three tiers:

  • Oracle ACE Associate
  • Oracle ACE
  • Oracle ACE Director

The Oracle ACE Associate tier is the baseline entry for community members who are just getting started with their community activism and are working on building their network and community profile. ACE Associates aspire to contribute at higher levels. The addition of this new tier will provide opportunities for further growth and activity.

The Oracle ACE and ACE Director tiers are designed for community members who have proven their dedication to the Oracle community and have a long history of sharing their technical knowledge and experience through various channels: blogs, discussion forums, social networks, event presentations, book authorship.

Participation in the program is by nomination, and all levels have set criteria that must be met to qualify. To learn more about the Oracle ACE Program, visit: oracle.com/technetwork/oracleace

Low-Rank Matrix Factorization in Oracle R Advanced Analytics for Hadoop


This guest post from Arun Kumar, a graduate student in the Department of Computer Sciences at the University of Wisconsin-Madison, describes work done during his internship in the Oracle Advanced Analytics group.

Oracle R Advanced Analytics For Hadoop (ORAAH), a component of Oracle's Big Data Connectors software suite, is a collection of statistical and predictive techniques implemented on Hadoop infrastructure. In this post, we introduce and explain techniques for a popular machine learning task that has diverse applications ranging from predicting ratings in recommendation systems to feature extraction in text mining: matrix completion and factorization. Training, scoring, and prediction phases for matrix completion and factorization are available in ORAAH. The models generated can also be transparently loaded into R for ad-hoc inspection. In this blog post, we describe implementation specifics of these two techniques available in ORAAH.

Motivation

Consider an e-commerce company that displays products to potential customers on its webpage and collects data about views, purchases, ratings (e.g., 1 to 5 stars), etc. Increasingly, such online retailers are using machine learning techniques to predict in advance which products a customer is likely to rate highly and recommend such products to the customers in the hope that they might purchase them. Users build a statistical model based on the past history of ratings by all customers on all products. One popular model to generate predictions from such a hyper-sparse matrix is the latent factor model, also known as the low-rank matrix factorization model (LMF).

The setup is the following – we are given a large dataset of past ratings (potentially in the billions), say, with the schema (Customer ID, Product ID, Rating). Here, Customer ID refers to a distinct customer, Product ID refers to a distinct product, and Rating refers to a rating value, e.g., 1 to 5. Conceptually, this dataset represents a large matrix D with m rows (number of customers) and n columns (number of products), where the entries are the available ratings. Notice that this matrix is likely to be extremely sparse, i.e., many ratings could be missing since most customers typically rate only a few products. Thus, the task here is matrix completion – we need to predict the missing ratings so that it can be used for downstream processing such as displaying the top recommendations for each customer.

The LMF model assumes that the ratings matrix can be approximately generated as a product of two factor matrices, L and R, which are much smaller than D (lower rank). The idea is that the product L * R will approximately reconstruct the existing ratings and also automatically predict the missing ratings in D. More precisely, for each available rating (i,j,v) in D, we have (L x R) [i,j] ≈ v, while for each missing rating (i',j') in D, the predicted rating is (L x R) [i',j']. The model has a parameter r, which dictates the rank of the factor matrices, i.e., L is m x r, while R is r x n.
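As a small worked example with hypothetical numbers: if r = 2 and customer i has the factor row L[i,] = (0.8, 1.2) while product j has the factor column R[,j] = (2.0, 1.5), then the predicted rating is 0.8 x 2.0 + 1.2 x 1.5 = 3.4, even if customer i never rated product j in D.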

Matrix Completion in ORAAH

LMF can be invoked out-of-the-box using the routine orch.lmf. An execution based on the above example is shown below. The dataset of ratings is in a CSV file on HDFS with the schema above (named “retail_ratings” here), while the output (the factor matrices and metadata) are written to an HDFS folder (named “retail_model” here).


input <- hdfs.attach("retail_ratings")
fit <- orch.lmf(input)

# Export the model into R memory
lr <- orch.export.fit(fit)

# Compute the prediction for the point (100, 50)

# First column of lr$L contains the userid
userid <- lr$L[,1] == 100 # find row corresponding to user id 100
# 'rank' is the rank (r) of the factor matrices used when fitting the model,
# e.g. rank <- 50 if the model was trained with rank 50
L <- lr$L[, 2:(rank+1)]

#First column contains the itemid
itemid <- lr$R[,1] == 50 # find row corresponding to item id 50
R <- lr$R[, 2:(rank+1)]

# dot product as sum of terms obtained through component wise multiplication
pred <- sum(L[userid,] * R[itemid,])

The factor matrices can be transparently loaded into R for further inspection and for ad-hoc predictions of specific customer ratings using R. The algorithm we use for training the LMF model is called Incremental Gradient Descent (IGD), which has been shown to be one of the fastest algorithms for this task [1, 2].
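To make the training step concrete, below is a minimal sketch of the per-rating update that incremental (stochastic) gradient descent performs for LMF. It is written in Java purely for illustration; it is not the ORAAH implementation, and the step size and regularization constant are assumptions that would need tuning for a real dataset.

// Minimal sketch of one IGD/SGD update for LMF (illustration only, not ORAAH code).
// L[i] and R[j] are the rank-r factor vectors for customer i and product j
// (both stored row-wise here for simplicity).
class LmfIgdSketch {
    static void igdUpdate(double[][] L, double[][] R, int i, int j, double rating,
                          double stepSize, double lambda) {
        int r = L[i].length;
        // Current prediction: dot product of the two factor vectors.
        double pred = 0.0;
        for (int k = 0; k < r; k++) {
            pred += L[i][k] * R[j][k];
        }
        double err = rating - pred;
        // Take a gradient step on both factors, with simple L2 regularization.
        for (int k = 0; k < r; k++) {
            double lik = L[i][k];
            L[i][k] += stepSize * (err * R[j][k] - lambda * lik);
            R[j][k] += stepSize * (err * lik - lambda * R[j][k]);
        }
    }
}

One pass of IGD applies this update once for every available rating (i, j, v) in D, and multiple passes (iterations) are run until the training error converges.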

The entire set of arguments for the function orch.lmf along with a brief description of each and their default values is given in the table below. The latin parameter configures the degree of parallelism for executing IGD for LMF on Hadoop [2]. ORAAH sets this automatically based on the dimensions of the problem and the memory available to each Mapper. Each Mapper fits its partition of the model in memory, and the multiple partitions run in parallel to learn different parts of the model. The last five parameters configure IGD and need to be tuned by the user to a given dataset since they can impact the quality of the model obtained.

ORAAH also provides routines for predicting ratings as well as for evaluating the model (computing the error of the model on a given labeled dataset) on a large scale over HDFS-resident datasets. The routine for prediction of ratings is predict and for evaluating is orch.evaluate.

Other Matrix Factorization Tasks

While LMF is primarily used for matrix completion tasks, it can also be used for other matrix factorization tasks that arise in text mining, computer vision, and bio-informatics, e.g., dimension reduction and feature extraction. In these applications, the input data matrix need not necessarily be sparse. Although many zeros might be present, they are not treated as missing values. The goal here is simply to obtain a low-rank factorization D ≈ L x R as accurately as possible, i.e., the product L x R should recover all entries in D, including the zeros. Typically, such applications use a Non-Negative Matrix Factorization (NMF) approach due to non-negativity constraints on the factor matrix entries. However, many of these applications often do not need non-negativity in the factor matrices. Using NMF algorithms for such applications leads to poorer-quality solutions. Our implementation of matrix factorization for such NMF-style tasks can be invoked out-of-the-box in ORAAH using the routine orch.nmf, which has the same set of arguments as LMF.

Experimental Results & Comparison with Apache Mahout

We now present an empirical evaluation of the performance, quality, and scalability of the ORAAH LMF tool based on IGD and compare it to the most widely used off-the-shelf tool for LMF on Hadoop – an implementation of the ALS algorithm from Apache Mahout [3].

All our experiments are run on an Oracle Big Data Appliance Hadoop cluster with nine nodes, each with Intel Xeon X5675 12-core 3.07GHz processors, 48 GB RAM, and 20 TB disk. We use 256MB HDFS blocks and 10 reducers for MapReduce jobs.

We use two standard public datasets for recommendation tasks – MovieLens10M (referred to as MLens) and Netflix – for the performance and quality comparisons. To study scalability aspects, we use several synthetic datasets of different sizes by changing the number of rows, number of columns, and/or number of ratings. The table below presents the dataset statistics.


Results: Performance and Quality

We first present an end-to-end overview of the performance and quality achieved by our implementation and Mahout on MLens and Netflix. The rank parameter was set at 50 (a typical choice for such tasks) and the other parameters for both tools were chosen using a grid search. The quality of the factor matrices was determined using the standard measure of root mean square error (RMSE) [2]. We use a 70%-15%-15% Wold holdout of the datasets, i.e., 70% for training, 15% for testing, and 15% for validation of generalization error. The training was performed until 0.1% convergence, i.e., until the fractional decrease in the training RMSE after every iteration reached 0.1%. The table below presents the results.

1. ORAAH LMF has a faster performance than Mahout LMF on the overall training runtime on both datasets – 1.8x faster on MLens and 2.3x faster on Netflix.
2. The per-iteration runtime of ORAAH LMF is much lower than that of Mahout LMF – between 4.4x and 5.4x.
3. Although ORAAH LMF runs more iterations than Mahout LMF, the huge difference in the per-iteration runtimes makes the overall runtime smaller for ORAAH LMF.
4. The training quality (training RMSE) achieved is comparable across both tools on both datasets. Similarly, the generalization quality is also comparable. Thus, ORAAH LMF can offer state-of-the-art quality along with faster performance.

Results: Scalability

The ability to scale along all possible dimensions of the data is key to big data analytics. Both ORAAH LMF and Mahout LMF are able to scale to billions of ratings by parallelizing and distributing computations on Hadoop. But we now show that, unlike Mahout LMF, ORAAH LMF is also able to scale to hundreds of millions of customers (m) and products (n), and also scales well with the rank parameter (r), which affects the size of the factor matrices. The figure below presents the scalability results along these three dimensions – m, n, and r.

1. Figures (A) and (B) plot the results for the Syn-row and Syn-col datasets, respectively (r = 2). ORAAH LMF scales linearly with both number of rows (m) and number of columns (n), while Mahout LMF does not show up on either plot because it crashes at all these values of m. In fact, we verified that Mahout LMF does not scale beyond even m = 20 M! The situation is similar with n. This is because Mahout LMF assumes that the factor matrices L and R fit entirely in the memory of each Mapper. In contrast, ORAAH LMF uses a clever partitioning scheme on all matrices ([2]) and can thus scale seamlessly on all dataset dimensions.
2. Figure (C) shows the impact of the rank parameter r. ORAAH LMF scales linearly with r and the per-iteration runtime roughly doubles between r = 20 and r = 100. However, the per-iteration runtime of Mahout LMF varies quadratically with r, and in fact, increases by a factor of 40x between r = 20 and r = 100! Thus, ORAAH LMF is also able to scale better with r.
3. Finally, on the tera-scale dataset Syn-tera with 1 billion rows, 10 million columns, and 20 billion ratings, ORAAH LMF (for r = 2) finishes an iteration in just under 2 hours!

Acknowledgements

The matrix factorization features in ORAAH were implemented and benchmarked by Arun Kumar during his summer internship at Oracle under the guidance of Vaishnavi Sashikanth. He is pursuing his PhD in computer science from the University of Wisconsin-Madison. This work is the result of a collaboration between Oracle and the research group of Dr. Christopher Ré, who is now at Stanford University. Anand Srinivasan helped integrate these features into ORAAH.

References

[1] Towards a Unified Architecture for in-RDBMS Analytics. Xixuan Feng, Arun Kumar, Benjamin Recht, and Christopher Ré. ACM SIGMOD 2012.

[2] Parallel Stochastic Gradient Algorithms for Large-Scale Matrix Completion. Benjamin Recht and Christopher Ré. Mathematical Programming Computation 2013.

[3] Apache Mahout. http://mahout.apache.org/.

Thank You for flying with "Hello Kitty" ;-)


Yesterday, directly after the workshop in Seoul, Roy and I flew to Taipei in Taiwan. It's our first time in Taiwan - and the flight to Taipei with EVA Airlines was quite surprising ;-) I didn't take a picture in the bathroom ...

"Thank You for flying with Hello Kitty!".

-Mike 

Finite Number of Fat Locks in JRockit

Introduction

JRockit has a hard limit on the number of fat locks that can be "live" at once. While this limit is very large, the use of ever larger heap sizes makes hitting this limit more likely. In this post, I want to explain what exactly this limit is and how you can work around it if you need to.

Background

Java locks (AKA monitors) in JRockit basically come in one of two varieties, thin and fat. (We'll leave recursive and lazy locking out of the conversation for now.) For a detailed explanation of how we implement locking in JRockit, I highly recommend reading chapter 4 of JR:TDG. But for now, all that you need to understand is the basic difference between thin and fat locks. Thin locks are lightweight locks with very little overhead, but any thread trying to acquire a thin lock must spin until the lock is available. Fat locks are heavyweight and have more overhead, but threads waiting for them can queue up and sleep while waiting, saving CPU cycles. As long as there is only very low contention for a lock, thin locks are preferred. But if there is high contention, then a fat lock is ideal. So normally a lock will begin its life as a thin lock, and only be converted to a fat lock once the JVM decides that there is enough contention to justify using a fat lock. This conversion of locks between thin and fat is known as inflation and deflation.

Limitation

One of the reasons we call fat locks "heavyweight" is that we need to maintain much more data for each individual lock. For example, we need to keep track of any threads that have called wait() on it (the wait queue) and also any threads that are waiting to acquire the lock (the lock queue). For quick access to this lock information, we store this information in an array (giving us a constant lookup time). We'll call this the monitor array. Each object that corresponds to a fat lock holds an index into this array. We store this index value in a part of the object header known as the lock word. The lock word is a 32-bit value that contains several flags related to locking (and the garbage collection system) in addition to the monitor array index value (in the case of a fat lock). After the 10 flag bits, there are 22 bits left for our index value, limiting the maximum size of our monitor array to 2^22, or space to keep track of just over 4 million fat locks.
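As a back-of-the-envelope illustration of that bit budget (this just reproduces the arithmetic described above; it is not JRockit's actual lock word code, and the exact flag layout is a simplification):

===
// Illustration only: the 22-bit monitor index arithmetic described above.
public class LockWordMath {
    static final int FLAG_BITS = 10;
    static final int INDEX_BITS = 32 - FLAG_BITS;       // 22 bits left for the index
    static final int MAX_FAT_LOCKS = 1 << INDEX_BITS;   // 2^22 = 4,194,304 slots
    static final int INDEX_MASK = MAX_FAT_LOCKS - 1;    // low 22 bits hold the index

    public static void main(String[] args) {
        System.out.println(MAX_FAT_LOCKS);               // prints 4194304
        int lockWord = 0xFFC00000 | 1234;                // flag bits set, index 1234
        System.out.println(lockWord & INDEX_MASK);       // prints 1234
    }
}
===

Those numbers match the fatal error shown further below: 4,194,304 monitors, with a maximum index of 4,194,303.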

Now for a fat lock to be considered "live", meaning it requires an entry in the monitor array, its object must still be on the heap. If the object is garbage collected or the lock is deflated, its slot in the array will be cleared and made available to hold information about a different lock. Note that because we depend on GC to clean up the monitor array, even if the object itself is no longer part of the live set (meaning it is eligible for collection), the lock information will still be considered "live" and can not be recycled until the object gets collected.

So what happens when we use up all of the available slots in the monitor array? Unfortunately, we abort and the JVM exits with an error message like this:

===
[ERROR] JRockit Fatal Error: The number of active Object monitors has overflowed. (87)
[ERROR] The number of used monitors is 4194304, and the maximum possible monitor index 4194303
===

Want to see for yourself? Try the test case below. One way to guarantee that a lock gets inflated by JRockit is to call wait() on it. So we'll just keep calling wait() on new objects until we run out of slots.

=== LockLeak.java
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

public class LockLeak extends Thread {

      static List<Object> list  = new LinkedList<Object>();

      public static void main(String[] arg) {
            boolean threadStarted = false;
            for (int i = 0; i < 5000000; i++) {
                  Object obj = new Object();
                  synchronized(obj) {
                      list.add(0, obj);
                      if (!threadStarted) {
                          (new LockLeak()).start();
                          threadStarted = true;
                      }
                      try {
                          obj.wait();
                      } catch (InterruptedException ie) {} // eat Exception
                  }
            }
            System.out.println("done!"); // you must not be on JRockit!
            System.exit(0);
      }

      public void run() {
            while (true) {
                  Object obj = list.get(0);
                  synchronized(obj) {
                      obj.notify();
                  }
            }
      }

}
===

(Yes, this code is not even remotely thread safe. Please don't write code like this in real life and blame whatever horrible fate that befalls you on me. Think of this code as for entertainment purposes only. You have been warned.)

Resolution

While this may seem like a very serious limitation, in practice it is very unlikely to see even the most demanding application hit this limit. The good news is, even if you do have a system that runs up against this limit, you should be able to tune around the issue without too much difficulty. The key point is that GC is required to clean up the monitor array. The more frequently you collect your heap, the quicker "stale" monitor information (lock information for an object that is no longer part of the live set) will be removed.

As an example, one of our fellow product teams here at Oracle recently hit this limit while using a 50GB heap with a single space collector. By enabling the nursery (switching to a generational collector), they were able to completely avoid the issue. By proactively collecting short-lived objects, they avoided filling up the monitor array with entries for dead objects (that would otherwise have to wait for a full GC to be removed).

One other possible solution may be to set the -XX:FatLockDeflationThreshold option to a value below the default of 50 to more aggressively deflate fat locks. While this does work well for simple test cases like LockLeak.java above, I believe that more aggressive garbage collection is more likely to resolve any issues without a negative performance impact.

Either way, we have never seen anyone hit this problem that was not able to tune around the limitation very easily. It is hard to imagine that any real system will ever need more than 4 million fat locks all at once. But in all seriousness, given JRockit's current focus on stability and the lack of a use case that requires more, we are almost certainly not going to ever make the significant (read: risky) changes that removing or expanding this limit would require. The good news is that HotSpot does not seem to have a similar limitation.

Conclusion

You are very unlikely to ever see this issue unless you are running an application with a very large heap, a lot of lock contention, and very infrequent collections. By tuning to collect dead objects that correspond to fat locks faster, for example by enabling a young collector, you should be able to avoid this limit easily. In practice, no application today (or for the near future) will really need over 4 million fat locks at once. As long as you help the JVM prune the monitor array frequently enough, you should never even notice this limit.

Internet of Things – The Oracle platform by Torsten Winterberg


IoT will be the next game changer. We're entering a world where billions of devices will be able to play a role in higher-level processes. There will be many more machines using today's internet than humans. This is a summary of an OpenWorld session on Oracle's IoT strategy.

IoT Definition from Wikipedia:
Equipping all objects in the world with minuscule identifying devices could be transformative of daily life. For instance, business may no longer run out of stock or generate waste products, as involved parties would know which products are required and consumed. One's ability to interact with objects could be altered remotely based on immediate or present needs, in accordance with existing  end-user agreements.

Some major challenges have to be addressed:

  • Complex value chain
  • Lack of standardization to build, deploy & manage IoT applications
  • No consistency in managing security of data and identity of devices
  • Need to analyze Fast data in real time
  • No integration platform to convert data into business automation

Oracle launched its IoT platform at OpenWorld 2013 with this focus:

  • Standardize application development for devices, enterprise, web and mobile apps
  • Analyze IoT data to achieve real-time visibility
  • Integrate IoT data with enterprise applications & cloud infrastructure
  • Secure data & identity across devices and enterprise data center

Read the complete article here.

SOA & BPM Partner Community

For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.


RIB 13.2.6 installation issue


Right after a brand new, successful installation of RIB 13.2.6, the RIB instance does not start, with the error below in the console out log.

Issue:  

java.io.IOException: ERROR: Can't load properties from "rib-system.properteis" (the directory it is in should be in the CLASSPATH).java.lang.RuntimeException: Error: rib-system.properties no found in
class.path

The below solution worked for me. 

Copy rib-system.properties to <managed_server_path>/rib-system.properties. It can then be loaded at runtime if "$DOMAIN_HOME/bin/startWebLogic.sh" is updated with the lines below:

CLASSPATH=$DOMAIN_HOME/servers/$SERVER_NAME:$CLASSPATH
JAVA_OPTIONS="-Dweblogic.ejb.container.MDBMessageWaitTime=2 ${JAVA_OPTIONS}"
JAVA_VM="-server"

Then bounce the Admin server and managed servers.

You can check the managed_server.out log file to see which classpath is loaded at the start of the instance.


Increase thread limit for RPM


Issue:

How to set up a work manager to increase the thread limit for RPM.

I used the steps below to configure it for my local instance, and that worked.

We will need to create this work manager ahead of time in WebLogic, perform the steps below, and then redeploy so the modified XML file is included in the updated deployment.

For a Retail J2EE app 13.2.x on WebLogic:

  1. Run XDoclet to generate deployment descriptors if building locally
    Do NOT run XDoclet again or the changes to the deployment descriptors will be overwritten and will need to be redone
  2. Open weblogic-ejb-jar.xml for editing
  3. For each MDB reference, add the following entry inside the <weblogic-enterprise-bean> element:

<dispatch-policy>threading_workmanager</dispatch-policy>

The result should look something like this:

<weblogic-enterprise-bean>

   <ejb-name>app_nameTaskMDB</ejb-name>

    <message-driven-descriptor>

       <destination-jndi-name>jms/Queue_name</destination-jndi-name>

   </message-driven-descriptor>

    <reference-descriptor>

    </reference-descriptor>

    <dispatch-policy>threading_workmanager</dispatch-policy>

</weblogic-enterprise-bean>

  1. Now open the Weblogic Server Admin Console
  2. Navigate to Environment > Work Managers
  3. Select New, then select 'Maximum Threads Constraint', and click Next
  4. Enter a name for the constraint, set the count to the maximum number of desired threads, and then select Next
  5. Under Servers select the app server that you would like to deploy the constraint to, most likely AdminServer, and select Finish
  6. Now, select New, then select 'Minimum Threads Constraint', and click Next
  7. Enter a name for the constraint, set the count to the minimum number of desired threads, and then select Next
  8. Under Servers select the app server that you would like to deploy the constraint to, most likely AdminServer, and select Finish
  9. Now, select New, then select 'Work Manager', and click Next
  10. The name of the work manager should match the one given for the dispatch-policy element in weblogic-ejb-jar.xml ('threading_workmanager' in the example above)
  11. Under Servers select the app server that you would like to deploy the constraint to, most likely AdminServer, and select Finish
  12. From the list select the new Work Manager
  13. Open the drop down menu for 'Minimum Threads Constraint' and select the minimum threads constraint that was just created in the previous steps
  14. Open the drop down menu for 'Maximum Threads Constraint' and select the maximum threads constraint that was just created in the previous steps
  15. Select Save

Restart the server, then rebuild and redeploy the application.

RIB 13.2.X transaction timeout


For the error in console out log:

Caused by: weblogic.transaction.internal.TimedOutException: Transaction timed out after 600 seconds

BEA1-03726D4BF8BB80C2179C

It is suggested to modify RIB's EJB timeout. The default is 600 seconds ('<trans-timeout-seconds>600</trans-timeout-seconds>'), set within weblogic-ejb-jar.xml of rib-<app>EJB.jar (rib-rpmEJB.jar in this case, packaged inside rib-rpm.ear). You can use the steps below to update the transaction timeout. For an ideal configuration, please note to keep the value of this transaction timeout property less than the JTA and JDBC transaction timeouts and the DB timeouts.

1) In the $RIB_HOME/application-assembly-home/rib-<app> dir

a) Backup rib-<app>.ear

b) mkdir temp

c) cd temp

d) cp ../rib-<app>.ear .

e) jar -xvf rib-<app>.ear

f)rm rib-<app>.ear

g)mkdir temp2

h)cp rib-<app>EJB.jar temp2/

i)rm rib-<app>EJB.jar

j) cd temp2

k)jar -xvf rib-<app>EJB.jar

l)rm rib-<app>EJB.jar

m)open weblogic-ejb-jar.xml

a. find all entries <trans-timeout-seconds>600</trans-timeout-seconds> and change 600 to the desired timeout value

b.save and quit

n)cd ..

o)jar -cvf rib-<app>EJB.jar *

p)cd ..

q)cp temp2/rib-<app>EJB.jar .

r)rm -rf temp2

s)jar -cvf rib-<app>.ear *

t)cp rib-<app>.ear ../

u)cd ../

v)rm -rf temp

2) Run rib app deployer for rib-<app>

3) Bounce rib-<app>

To revert the changes, restore the backed-up copy of rib-<app>.ear, deploy, and bounce.

What’s New: PeopleSoft 9.2 Training Update from Oracle University


If your organization is planning to upgrade to PeopleSoft 9.2 in the near future, then the time is now to train your project team members. Oracle University's PeopleSoft curriculum now includes new courses covering the latest release and each course is delivered by expert instructors who specialize in PeopleSoft.

Here are some of the top courses now scheduled for March and April:

PeopleSoft Human Capital Management

PeopleSoft Financials/Supply Chain Management


Need to fast-track your training? Consider one of the following Accelerated training courses which combine multiple courses into a single week of training.

View Oracle University’s full catalog and schedule of courses here.  Don’t see the course you are looking for? Submit your request to Oracle University so they can help. 

    RIB 14.0 on AIX 7.1


    Issue: Error while running the RIB 14 check-version-and-unpack.sh script on an AIX machine.


     Error:

    java.lang.RuntimeException: Could not instantiate serializer com.sun.org.apache.xml.internal.serialize.XMLSerializer: java.lang.ClassNotFoundException: com.sun.org.apache.xml.internal.serialize.XMLSerializer

            at org.exolab.castor.xml.BaseXercesJDK5Serializer.<init>(BaseXercesJDK5Serializer.java:59)

    ---

    Workaround: Update one property value in castor.properties, which is available in castor-1.3.2-xml.jar.

    1. Go to the location Rib1400ForAll14xxApps/rib-home/integration-lib/third-party/exolab

                    - two jar files will be available castor-1.3.2-xml.jar, castor-core-1.3.2.jar

    2. Extract castor.properties from castor-1.3.2-xml.jar; the command below extracts only the property file from the jar:

                    - jar -xvf castor-1.3.2-xml.jar castor.properties

    3. Update the castor.properties file: comment out the org.exolab.castor.xml.serializer.factory entry that points to XercesJDK5XMLSerializerFactory and set the property to XercesXMLSerializerFactory instead, like below:

    #org.exolab.castor.xml.serializer.factory=org.exolab.castor.xml.XercesJDK5XMLSerializerFactory

    org.exolab.castor.xml.serializer.factory=org.exolab.castor.xml.XercesXMLSerializerFactory

    4. Update the jar with the updated property file:

                    - jar -uf castor-1.3.2-xml.jar castor.properties

    Once the above steps are done, go ahead with the RIB process.

    SIM 13.2.6 , Authentication errors while navigation after successful login


    Issue: After a successful login into the SIM 13.2.6 application with SSO, users report a security exception on most of the tabs, except DSD & Transfers.

    Error on log:

    Caused by: java.lang.SecurityException: User: "username", failed to be authenticated.
    at weblogic.common.internal.RMIBootServiceImpl.authenticate(RMIBootServiceImpl.java:116)

    Cause and Solution: 

    This is as per application design. The SIM JNLP has a configurable parameter to match the SSO global timeout configured for the organization's single sign-on servers; on expiry, the servlet invalidates the session.

    The default value of SSO_TIMEOUT is set to 60 seconds at installation, within security.cfg in sim-security-resources.jar; we need to match this to the global SSO timeout.


