
Oracle Appliance Manager Version 2.9 (OAK 2.9)


Oracle Appliance Manager patch bundle 2.9 (OAK 2.9) was released on February 18th, 2014 and is available for download as patch 17630388.

As always there are features, enhancements, and some bug fixes included with this release.


Some notable enhancements are as follows:

1. Import of HTTP-based templates directly on Oracle Database Appliance
2. Send Key support to user VMs via xenstore to facilitate configuration of user VMs from Oracle Appliance Manager
3. Shared storage (JBODs) monitoring on X3-2 and X4-2 systems using OAKCLI
4. Out-of-place update of Grid Infrastructure from 11.2.0.3.x to 11.2.0.4.0
5. Oracle Database Patch Set Update (PSU) 11.2.0.4.1 and 11.2.0.3.9 available
6. Improved VM stack - better module level logging (TINT ID), better exception handling in oakd 
7. Multithreading of the XML-RPC agent and oakd adapter to allow parallel VM commands

Refer to the Oracle Database Appliance Getting Started Guide for more information about these features and enhancements.


Oracle Sales Cloud update: release 8 and beyond


In this one-hour webcast, Scott Creighton, Oracle VP of Sales Cloud Product Management, discusses the Oracle Sales Cloud Release 8 functionality improvements, the solution's near-term strategy, and new market differentiators such as PaaS.

My new Active Directory Provider is not Working!

When you create LDAP providers, an easy way to verify they are working is to check in the WebLogic console that the users are listed.

After adding the Active Directory provider, users and groups are listed in the Admin Console under:

Security Realms -> My Realm -> Users and Groups.


However, when attempting to log in to an application that uses these users, the login is denied, and you might not see any clues in the WebLogic Server logs.

If you enable Atn debug, the following is observed in the server log:

<Debug> <SecurityAtn> <MyDomain> <AdminServer> <[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)' for workmanager: consoleapp@null@consoleWorkManager> <<WLS Kernel>> <> <593625378f0917fe:-23dcaa48:143ea3e7180:-8000-0000000000000400> <1391205135889> <BEA-000000> <weblogic.security.service.internal.WLSJAASLoginServiceImpl$ServiceImpl.authenticate authenticate failed for user MyUser>


This can occur because the default authenticator's control flag is set to REQUIRED by default, so the login is denied by the default authenticator, which is not aware of the users in Active Directory.

To fix the issue:

1. Go to Admin Console > Security Realms > <Your Realm> > Providers.
2. Make sure the Active Directory provider is at the top of the list and set its Control Flag to SUFFICIENT.
3. Make sure the default authenticator's Control Flag is set to OPTIONAL.
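
Independently of WebLogic, it can also help to sanity-check the directory connection details (host, port, bind DN, and password) that you configured in the provider. Below is a minimal standalone JNDI sketch for that check; the host name, DNs, and password are placeholders for your environment, not values from this post.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class AdBindTest {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");                   // placeholder host/port
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "CN=MyUser,CN=Users,DC=example,DC=com");  // placeholder bind DN
        env.put(Context.SECURITY_CREDENTIALS, "password");                            // placeholder password

        // Throws javax.naming.AuthenticationException if the bind DN or password is wrong.
        DirContext ctx = new InitialDirContext(env);
        System.out.println("Bind succeeded");
        ctx.close();
    }
}

If this bind fails, the problem is with the directory connection itself rather than with the WebLogic control flags.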

You can read more in this My Oracle Support document:

How to Configure Active Directory as the LDAP Provider for WebLogic Server (Doc ID 1299072.1)

Enjoy!

Customizing the Axis Labels in ADF Graphs


The various data visualization (DVT) graphs provided as part of the ADF Faces component set provide a very rich and comprehensive set of visualizations for representing your data.  One of the issues with them that some folks struggle with, however, is the fact that not all of the features are controlled in a completely declarative manner.

In this article I want to concentrate on labeling capabilities for a graph axis, looking first of all at the declarative approaches, and then following that up with the more advanced programmatic option.

Managing Labels Declaratively

Control over the labels on the axis tick points is a good example of making the simple things declarative and the advanced things possible.  For basic numeric formatting you can do everything with tags - for example formatting as currency, percentage or with a certain precision.    

This is a default (bar) graph plotting employee salary against name; notice how the Y1 axis has defaulted to a fairly sensible representation of the salary data using 0-14K:


I can change that default scaling by setting the scaling attribute in the <dvt:y1TickLabel> tag. This allows scaling at the level of none | thousand | million | billion | trillion | quadrillion (enough to show national debt then!):

<dvt:y1TickLabel id="y1TickLabel1" scaling="none"/>

Changes the graph to:


We can then further change the pattern of the numbers themselves by embedding <af:convertNumber> inside of the <dvt:y1TickLabel> tag.

e.g.

<dvt:y1TickLabel id="y1TickLabel1" scaling="none">
  <af:convertNumber type="currency" currencyCode="USD"/>
</dvt:y1TickLabel>

Adds currency formatting:

And using the <dvt:graphFont> we can change colors and style:

<dvt:y1TickLabel id="y1TickLabel1" scaling="none">
  <dvt:graphFont name="SansSerif" size="8" color="#FF0000" bold="true" italic="true"/>
  <af:convertNumber type="currency" currencyCode="USD"/>
</dvt:y1TickLabel>

Giving:


Need More Control?  Using the TickLabelCallback...

So we can achieve quite a lot by simply using the tags.  However, what about a more complex requirement, such as replacing a numerical value on an axis with a totally different string, e.g. converting to a Roman numeral (I, IV, XII, etc.) or maybe converting a millisecond value to a formatted date string?  To do this, ADF provides a simple callback that you can implement to map a value to whatever string you need.  Here's a simple case where I've plotted the salaries in department 100 of the standard HR EMPLOYEES demo table against the HireDate on a scatter plot.  For the sake of illustration I've actually converted the HireDate to its time value (e.g. a long value representing the number of milliseconds since 01/01/1970).  In a real graph I'd use the proper support that we have for representing time and date and just map a date object; then you get to use the timeSelector and can directly set the formatting.  However, bear with me because I'm just illustrating the point here.

Here's the default output with the millisecond version of the date; as you can see, the millisecond value gets automatically scaled to the billions level.

To override the default representation of the millisecond value we will need to create a java class that implements the oracle.dss.graph.TickLabelCallback interface.  Here's the simple example I'll use in this case:

import java.io.Serializable;
import java.text.SimpleDateFormat;

import oracle.dss.graph.DataTickLabelInfo;
import oracle.dss.graph.GraphConstants;
import oracle.dss.graph.TickLabelCallback;
import oracle.dss.graph.TickLabelInfo;

public class MSToDateFormatLabelCallback implements TickLabelCallback, Serializable {

  @Override
  public String getTickLabel(TickLabelInfo tickLabelInfo, int axisID) {
    String label = null;
    if (axisID == GraphConstants.X1TICKLABEL) {
      // The tick value is the millisecond representation of the hire date
      long timeInMillis = (long) ((DataTickLabelInfo) tickLabelInfo).getValue();
      SimpleDateFormat fmt = new SimpleDateFormat("MM/yy");
      label = fmt.format(timeInMillis);
    } else {
      label = "" + ((DataTickLabelInfo) tickLabelInfo).getValue();
    }
    return label;
  }
}

As you can see, the formatting is applied only to the specified axis, and we have to cast the tickLabelInfo argument to a DataTickLabelInfo in order to gain access to the value that is being applied. Note that the callback class must also be Serializable.

Once you have this class, you need to apply it to the graph instance by calling the relevant set*TickLabelCallback method. So, for example, I might set this in the backing bean for a page, in the setter used by the binding attribute on the dvt graph component:

public class GraphPageHandler {
    private UIGraph scatterPlot;

    public void setScatterPlot(UIGraph scatterPlot) {
        this.scatterPlot = scatterPlot;
        scatterPlot.setX1TickLabelCallback(new MSToDateFormatLabelCallback());
    }
}

Now with the callback in place and the addition of:

  1. Using  the axisMinValue attribute along with axisMinAutoScaled="false" on the <dvt:x1Axis>  to reset the start date for the plot to the year 2000 rather than the default 1970
  2. Some of the currency formatting that we've already looked at to format the Y axis

Here's the result:

Are 90% of Companies Still Failing to Execute on Strategy?


“90% of companies fail to execute on strategy effectively.” This statement was made over 30 years ago – but has nothing changed? Jennifer Toomey, Senior Product Marketing Director for Performance Management Applications at Oracle, interviewed Denis Desroches, Director of Research for the Institute of Management Accountants (IMA), about this subject and got an update on organizations’ current experience in executing on strategy.

Denis is part of a volunteer research team called the Business Research and Analysis Group (BRAG) that, over the past 15 years, has done world-wide studies on a number of current business practices, including the adoption and use of performance scorecards, and issues in costing and profitability. The team’s results have been published in various magazines and journals and in a book called “Scorecard Best Practices; Design, Implementation and Evaluation.” In addition to Denis, Dr. Raef Lawson, Vice President of Research and Professor-in-Residence for IMA, and yours truly – Toby Hatch -  a Senior Product Marketing Director for Oracle Business Analytics, are also part of the research team.

According to Denis, the team chose to research the topic of executing effectively on strategy because, during the 15 years of conducting research together, they continued to hear the same statement repeated again and again, in a number of settings, and therefore began to question its current legitimacy.  The quote, “90% of organizations fail to execute on strategy effectively,” originates from a 1982 article by Walter Kiechel III titled “Corporate Strategists Under Fire.” This number became a catalyst for businesses to seek improved methods for defining, articulating and, ultimately, executing strategy. This claim - that less than 10% of organizations can fully implement their strategies - has been repeated, relatively unchanged, over the last 30 years.



In our interview, Jennifer asked Denis what the BRAG team found out through their recent research activity. “Things do appear to be getting better,” said Denis. “Results of our online survey show that in 2012, a higher percentage of organizations were successful at executing their strategy.” In fact, about 40% of the survey respondents self-declared that they were successful or very successful. “We did not define what constituted success; we let our respondents self-declare their own success level,” said Denis. Demographic characteristics like industry and company size didn’t appear to be predictive of which organizations would declare success or non-success.

“So what distinguishes successful organizations?” inquired Jennifer.

There are some cultural or organizational characteristics that appear to contribute to successful execution of strategy, and some technical issues and processes that need to be considered, Denis explained. For example, organizations that feel they are very successful at executing strategy are more likely to have:

A supportive culture,
Effective leadership,
Clear communication to everyone about what the organization is trying to accomplish,
Clear links among strategy,
Focus on organizational strengths,
And alignment of the initiatives to get it all done.

Denis also offered several technical aspects of successfully executing on strategy that should be considered (hear more by listening to the complete podcast).

Although there are still a large number of companies failing to execute effectively on strategy, the number that are executing effectively is improving, and the checklist of items to consider for improving execution is fairly comprehensive. To read more about the results of this study, refer to the article called, “Are 90% of Companies Still Failing to Execute on Strategy?” in the March/April 2014 edition of the Journal of Corporate Accounting and Finance published by John Wiley and Sons.

To listen to the entire podcast, click here.

To learn more about the Business Research and Analysis group, click here.
To learn more about Oracle Scorecard and Strategy Management used to monitor the execution of strategy, click here.


Java EE 8 Community Survey: The Next Phase


The results are in for the Java EE 8 Community Survey.  We've had a terrific response to the survey, with over 2500 participants in Part 1 and over 1800 in Part 2!

You can find a summary of the results at https://java.net/projects/javaee-spec/downloads/download/JavaEE8_Community_Survey_Results.pdf

The next phase of this information gathering involves asking for feedback from the community in prioritizing among the most highly rated features from parts 1 and 2 of the survey.  You can read about it here:

https://blogs.oracle.com/theaquarium/entry/java_ee_community_survey_part

It would be great if you could give us your views on Java EE 8 AND also spread the word in your network. This is how the Java community builds releases!

Clustering Events


Setting up an Oracle Event Processing Cluster

Recently I was working with Oracle Event Processing (OEP) and needed to set it up as part  of a high availability cluster.  OEP uses Coherence for quorum membership in an OEP cluster.  Because the solution used caching it was also necessary to include access to external Coherence nodes.  Input messages need to be duplicated across multiple OEP streams and so a JMS Topic adapter needed to be configured.  Finally only one copy of each output event was desired, requiring the use of an HA adapter.  In this blog post I will go through the steps required to implement a true HA OEP cluster.

OEP High Availability Review

The diagram below shows a very simple non-HA OEP configuration:

Events are received from a source (JMS in this blog).  The events are processed by an event processing network which makes use of a cache (Coherence in this blog).  Finally any output events are emitted.  The output events could go to any destination but in this blog we will emit them to a JMS queue.

OEP provides high availability by having multiple event processing instances processing the same event stream in an OEP cluster.  One instance acts as the primary and the other instances act as secondary processors.  Usually only the primary will output events as shown in the diagram below (top stream is the primary):

The actual event processing is the same as in the previous non-HA example.  What is different is how input and output events are handled.  Because we want to minimize or avoid duplicate events we have added an HA output adapter to the event processing network.  This adapter acts as a filter, so that only the primary stream will emit events to our queue.  If the processing of events within the network depends on the time at which events are received, then it is necessary to synchronize the arrival timestamps of events across the cluster by using an HA input adapter.

OEP Cluster Creation

Let's begin by setting up the base OEP cluster.  To do this we create new OEP configurations on each machine in the cluster.  The steps are outlined below.  Note that the same steps are performed on each machine for each server which will run on that machine:

  • Run ${MW_HOME}/ocep_11.1/common/bin/config.sh.
    • MW_HOME is the installation directory, note that multiple Fusion Middleware products may be installed in this directory.
  • When prompted, choose “Create a new OEP domain”.
  • Provide administrator credentials.
    • Make sure you provide the same credentials on all machines in the cluster.
  • Specify a  “Server name” and “Server listen port”.
    • Each OEP server must have a unique name.
    • Different servers can share the same “Server listen port” unless they are running on the same host.
  • Provide keystore credentials.
    • Make sure you provide the same credentials on all machines in the cluster.
  • Configure any required JDBC data source.
  • Provide the “Domain Name” and “Domain location”.
    • All servers must have the same “Domain name”.
    • The “Domain location” may be different on each server, but I would keep it the same to simplify administration.
    • Multiple servers on the same machine can share the “Domain location” because their configuration will be placed in the directory corresponding to their server name.
  • Create domain!

Configuring an OEP Cluster

Now that we have created our servers we need to configure them so that they can find each other.  OEP uses Oracle Coherence to determine cluster membership.  Coherence clusters can use either multicast or unicast to discover already running members of a cluster.  Multicast has the advantage that it is easy to set up and scales better (see http://www.ateam-oracle.com/using-wka-in-large-coherence-clusters-disabling-multicast/) but has a number of challenges, including failure to propagate by default through routers and accidentally joining the wrong cluster because someone else chose the same multicast settings.  We will show how to use both unicast and multicast to discover the cluster.

Multicast Discovery

Coherence multicast uses a class D multicast address that is shared by all servers in the cluster.  On startup a Coherence node broadcasts a message to the multicast address looking for an existing cluster.  If no-one responds then the node will start the cluster.

Unicast Discovery

Coherence unicast uses Well Known Addresses (WKAs). Each server in the cluster needs a dedicated listen address/port combination. A subset of these addresses are configured as WKAs and shared between all members of the cluster. As long as at least one of the WKAs is up and running then servers can join the cluster. If a server does not find any cluster members then it checks to see if its listen address and port are in the WKA list. If it is then that server will start the cluster, otherwise it will wait for a WKA server to become available.

To configure a cluster, the same steps need to be followed for each server in the cluster:
  • Set an event server address in the config.xml file.
    • Add the following to the <cluster> element:
      <cluster>
          <server-name>server1</server-name>
          <server-host-name>oep1.oracle.com</server-host-name>
      </cluster>
    • The “server-name” is displayed in the visualizer and should be unique to the server.

    • The “server-host-name” is used by the visualizer to access remote servers.

    • The “server-host-name” must be an IP address or it must resolve to an IP address that is accessible from all other servers in the cluster.

    • The listening port is configured in the <netio> section of the config.xml.

    • The server-host-name/listening port combination should be unique to each server.

 
  • Set a common cluster multicast listen address shared by all servers in the config.xml file.
    • Add the following to the <cluster> element:
      <cluster>
          …
          <!-- For use in Coherence multicast only! -->
          <multicast-address>239.255.200.200</multicast-address>
          <multicast-port>9200</multicast-port>
      </cluster>
    • The “multicast-address” must be able to be routed through any routers between servers in the cluster.

  • Optionally you can specify the bind address of the server; this allows you to control port usage and determine which network is used by Coherence.

    • Create a “tangosol-coherence-override.xml” file in the ${DOMAIN}/{SERVERNAME}/config directory for each server in the cluster.
      <?xml version='1.0'?>
      <coherence>
          <cluster-config>
              <unicast-listener>
                  <!-- This server's Coherence address and port number -->
                  <address>192.168.56.91</address>
                  <port>9200</port>
              </unicast-listener>
          </cluster-config>
      </coherence>
  • Configure the Coherence WKA cluster discovery.

    • Create a “tangosol-coherence-override.xml” file in the ${DOMAIN}/{SERVERNAME}/config directory for each server in the cluster.
      <?xml version='1.0'?>
      <coherence>
          <cluster-config>
              <unicast-listener>
                  <!-- WKA Configuration -->
                  <well-known-addresses>
                      <socket-address id="1">
                          <address>192.168.56.91</address>
                          <port>9200</port>
                      </socket-address>
                      <socket-address id="2">
                          <address>192.168.56.92</address>
                          <port>9200</port>
                      </socket-address>
                  </well-known-addresses>
                  <!-- This server's Coherence address and port number -->
                  <address>192.168.56.91</address>
                  <port>9200</port>
              </unicast-listener>
          </cluster-config>
      </coherence>

    • List at least two servers in the <socket-address> elements.

    • For each <socket-address> element there should be a server that has corresponding <address> and <port> elements directly under <well-known-addresses>.

    • One of the servers listed in the <well-known-addresses> element must be the first server started.

    • Not all servers need to be listed in <well-known-addresses>, but see previous point.

 
  • Enable clustering using a Coherence cluster.
    • Add the following to the <cluster> element in config.xml.
      <cluster>
          …
          <enabled>true</enabled>
      </cluster>
    • The “enabled” element tells OEP that it will be using Coherence to establish cluster membership; this can also be achieved by setting the value to “coherence”.

 
  • For multicast discovery, the following shows the <cluster> config for another server in the cluster; the differences from the first server are the server-name and server-host-name values:
    <cluster>
        <server-name>server2</server-name>
        <server-host-name>oep2.oracle.com</server-host-name>
        <!-- For use in Coherence multicast only! -->
        <multicast-address>239.255.200.200</multicast-address>
        <multicast-port>9200</multicast-port>
        <enabled>true</enabled>
    </cluster>

  • For unicast discovery, the following shows the <cluster> config for another server in the cluster; note that the multicast elements are omitted:
    <cluster>
        <server-name>server2</server-name>
        <server-host-name>oep2.oracle.com</server-host-name>
        <enabled>true</enabled>
    </cluster>

 
  • The following shows the “tangosol-coherence-override.xml” file for another server in the cluster; the difference is this server's own listen address:
    <?xml version='1.0'?>
    <coherence>
        <cluster-config>
            <unicast-listener>
                <!-- WKA Configuration -->
                <well-known-addresses>
                    <socket-address id="1">
                        <address>192.168.56.91</address>
                        <port>9200</port>
                    </socket-address>
                    <socket-address id="2">
                        <address>192.168.56.92</address>
                        <port>9200</port>
                    </socket-address>
                </well-known-addresses>
                <!-- This server's Coherence address and port number -->
                <address>192.168.56.92</address>
                <port>9200</port>
            </unicast-listener>
        </cluster-config>
    </coherence>

You should now have a working OEP cluster.  Check the cluster by starting all the servers.

Look for a message like the following on the first server to start to indicate that another server has joined the cluster:

<Coherence> <BEA-2049108> <The domain membership has changed to [server2, server1], the new domain primary is "server1">

Log on to the Event Processing Visualizer of one of the servers at http://<hostname>:<port>/wlevs.  Select the cluster name on the left and then select the group “AllDomainMembers”.  You should see a list of all the running servers in the “Servers of Group – AllDomainMembers” section.

Sample Application

Now that we have a working OEP cluster let us look at a simple application that can be used as an example of how to cluster enable an application.  This application models service request tracking for hardware products.  The application we will use performs the following checks:

  1. If a new service request (identified by SRID) arrives (indicated by status=RAISE) then we expect some sort of follow up in the next 10 seconds (seconds because I want to test this quickly).  If no follow up is seen then an alert should be raised.
    • For example if I receive an event (SRID=1, status=RAISE) and after 10 seconds I have not received a follow up message (SRID=1, status<>RAISE) then I need to raise an alert.
  2. If a service request (identified by SRID) arrives and there has been another service request (identified by a different SRID) for the same physical hardware (identified by TAG) then an alert should be raised.
    • For example if I receive an event (SRID=2, TAG=M1) and later I receive another event for the same hardware (SRID=3, TAG=M1) then an alert should be raised.

Note use case 1 is nicely time bounded – in this case the time window is 10 seconds.  Hence this is an ideal candidate to be implemented entirely in CQL.

Use case 2 has no time constraints, hence over time there could be a very large number of CQL queries running looking for a matching TAG but a different SRID.  In this case it is better to put the TAGs into a cache and search the cache for duplicate tags.  This reduces the amount of state information held in the OEP engine.
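
As a rough illustration of the cache-based check for use case 2, the sketch below looks up a TAG in a Coherence cache and flags a duplicate if a different SRID is already recorded against it. The cache name and the check-then-put logic are assumptions for illustration only; the sample application performs this with a Coherence event processor and an entry processor so that the check is done atomically inside the cache.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class DuplicateTagCheck {
    // "ServiceTagCache" is a placeholder cache name for this sketch.
    private final NamedCache tagCache = CacheFactory.getCache("ServiceTagCache");

    // Returns the SRID already recorded for this hardware TAG, or null if this is the first SR seen for it.
    public Integer checkAndRecord(String tag, int srid) {
        Integer existing = (Integer) tagCache.get(tag);
        if (existing != null && existing.intValue() != srid) {
            return existing;                          // another SR is already open for this hardware: raise an alert
        }
        tagCache.put(tag, Integer.valueOf(srid));     // remember this SR against the hardware tag
        return null;                                  // no duplicate found
    }
}

Because a plain get/put is not atomic across the cluster, the real application pushes this logic into the cache via an entry processor, as noted later in this post.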

The sample application to implement this is shown below:

Messages are received from a JMS Topic (InboundTopicAdapter).  Test messages can be injected via a CSV adapter (RequestEventCSVAdapter).  Alerts are sent to a JMS Queue (OutboundQueueAdapter), and also printed to the server standard output (PrintBean).  Use case 1 is implemented by the MissingEventProcessor.  Use case 2 is implemented by inserting the TAG into a cache (InsertServiceTagCacheBean) using a Coherence event processor and then querying the cache for each new service request (DuplicateTagProcessor), if the same tag is already associated with an SR in the cache then an alert is raised.  The RaiseEventFilter is used to filter out existing service requests from the use case 2 stream.

The non-HA version of the application is available to download here.

We will use this application to demonstrate how to HA enable an application for deployment on our cluster.

A CSV file (TestData.csv) and a load generator properties file (HADemoTest.prop) are provided to test the application by injecting events using the CSV Adapter.

Note that the application reads a configuration file (System.properties) which should be placed in the domain directory of each event server.

Deploying an Application

Before deploying an application to a cluster it is a good idea to create a group in the cluster.  Multiple servers can be members of this group.  To add a group to an event server just add an entry to the <cluster> element in config.xml as shown below:

<cluster>
    …
    <groups>HAGroup</groups>
</cluster>

Multiple servers can be members of a group and a server can be a member of multiple groups.  This allows you to have different levels of high availability in the same event processing cluster.

Deploy the application using the Visualizer.  Target the application at the group you created, or the AllDomainMembers group.

Test the application, typically using a CSV Adapter.  Note that using a CSV adapter sends all the events to a single event server.  To fix this we need to add a JMS output adapter (OutboundTopicAdapter) to our application and then send events from the CSV adapter to the outbound JMS adapter as shown below:

So now we are able to send events via CSV to an event processor that in turn sends the events to a JMS topic.  But we still have a few challenges.

Managing Input

The first challenge is managing input.  Because OEP relies on the same event stream being processed by multiple servers, we need to make sure that all our servers get the same messages from the JMS Topic.  To do this we configure the JMS connection factory to have an Unrestricted Client ID.  This allows multiple clients (OEP servers in our case) to use the same connection factory.  Client IDs are mandatory when using durable topic subscriptions.  We also need each event server to have its own subscriber ID for the JMS Topic; this ensures that each server will get a copy of all the messages posted to the topic.  If we use the same subscriber ID for all the servers then the messages will be distributed across the servers, with each server seeing a completely disjoint set of messages from the other servers in the cluster.  This is not what we want because each server should see the same event stream.  We can use the server name as the subscriber ID, as shown in the excerpt from our application below:

<wlevs:adapter id="InboundTopicAdapter" provider="jms-inbound">
    …
    <wlevs:instance-property name="durableSubscriptionName"
            value="${com_bea_wlevs_configuration_server_ClusterType.serverName}" />
</wlevs:adapter>

This works because I have placed a ConfigurationPropertyPlaceholderConfigurer bean in my application, as shown below; the same bean is also used to access properties from a configuration file:

<bean id="ConfigBean"
        class="com.bea.wlevs.spring.support.ConfigurationPropertyPlaceholderConfigurer">
        <property name="location" value="file:../Server.properties"/>
    </bean>

With this configuration each server will now get a copy of all the events.
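
To make the durable-subscription behaviour concrete, here is a minimal plain-JMS sketch (outside OEP) of a durable topic subscriber; the JNDI names and URL are placeholders, and in our OEP application this work is done by the jms-inbound adapter shown above rather than by hand-written code.

import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.Context;
import javax.naming.InitialContext;

public class DurableTopicSubscriber {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://jmshost:7001");                                   // placeholder WebLogic URL

        Context ctx = new InitialContext(env);
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/HADemoConnectionFactory"); // placeholder JNDI name
        Topic topic = (Topic) ctx.lookup("jms/HADemoTopic");                                  // placeholder JNDI name

        // With the Unrestricted Client ID setting described above, the servers can share this
        // connection factory; what must differ per server is the durable subscription name below.
        Connection con = cf.createConnection();
        Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createDurableSubscriber(topic, "server1");         // unique per server
        con.start();

        Message msg = consumer.receive(10000);                                                // wait up to 10 seconds
        if (msg instanceof TextMessage) {
            System.out.println("Received: " + ((TextMessage) msg).getText());
        }
        con.close();
    }
}

Because each subscriber uses its own durable subscription name, each one receives every message posted to the topic, which is exactly the behaviour the serverName substitution above gives us.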

As our application relies on elapsed time we should make sure that the timestamps of the received messages are the same on all servers.  We do this by adding an HA Input adapter to our application.

<wlevs:adapter id="HAInputAdapter" provider="ha-inbound">
    <wlevs:listener ref="RequestChannel" />
    <wlevs:instance-property name="keyProperties"
            value="EVID" />
    <wlevs:instance-property name="timeProperty" value="arrivalTime"/>
</wlevs:adapter>

The HA Adapter sets the given “timeProperty” in the input message to be the current system time.  This time is then communicated to other HAInputAdapters deployed to the same group.  This allows all servers in the group to have the same timestamp in their event.  The event is identified by the “keyProperties” key field.

To allow the downstream processing to treat the timestamp as an arrival time, the downstream channel is configured with an “application-timestamped” element to set the arrival time of the event, as shown below:

<wlevs:channel id="RequestChannel" event-type="ServiceRequestEvent">
    <wlevs:listener ref="MissingEventProcessor" />
    <wlevs:listener ref="RaiseEventFilterProcessor" />
    <wlevs:application-timestamped>
        <wlevs:expression>arrivalTime</wlevs:expression>
    </wlevs:application-timestamped>
</wlevs:channel>

Note the property set in the HAInputAdapter is used to set the arrival time of the event.
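
For reference, here is a minimal sketch of what the event bean behind RequestChannel might look like, assuming the property names used in the fragments above (EVID, arrivalTime) plus the SRID, TAG and status fields from the use cases; the class shipped with the sample application may differ.

import java.io.Serializable;

// Plain JavaBean used as the OEP event type; property names follow the EPN fragments above.
public class ServiceRequestEvent implements Serializable {
    private String evid;        // unique event id, used as the HA input adapter key ("EVID")
    private long   srid;        // service request id
    private String tag;         // physical hardware tag
    private String status;      // e.g. "RAISE" for a new service request
    private long   arrivalTime; // stamped by the HA input adapter ("timeProperty")

    public String getEvid() { return evid; }
    public void setEvid(String evid) { this.evid = evid; }
    public long getSrid() { return srid; }
    public void setSrid(long srid) { this.srid = srid; }
    public String getTag() { return tag; }
    public void setTag(String tag) { this.tag = tag; }
    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }
    public long getArrivalTime() { return arrivalTime; }
    public void setArrivalTime(long arrivalTime) { this.arrivalTime = arrivalTime; }
}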

So now all servers in our cluster have the same events arriving from a topic, and each event arrival time is synchronized across the servers in the cluster.

Managing Output

Note that an OEP cluster has multiple servers processing the same input stream.  Obviously, if we have the same inputs, synchronized to appear to arrive at the same time, then we will get the same outputs, which is central to OEP's promise of high availability.  So when an alert is raised by our application it will be raised by every server in the cluster.  If we have 3 servers in the cluster then we will get 3 copies of the same alert appearing on our alert queue.  This is probably not what we want.  To fix this we take advantage of an HA Output Adapter.  Unlike input, where there is a single HA Input Adapter, there are multiple HA Output Adapters, each with distinct performance and behavioral characteristics.  The table below is taken from the Oracle® Fusion Middleware Developer's Guide for Oracle Event Processing and shows the different levels of service and performance impact:

Table 24-1 Oracle Event Processing High Availability Quality of Service

High Availability Option                              Missed Events?   Duplicate Events?   Performance Overhead
Section 24.1.2.1, "Simple Failover"                   Yes (many)       Yes (few)           Negligible
Section 24.1.2.2, "Simple Failover with Buffering"    Yes (few)        Yes (many)          Low
Section 24.1.2.3, "Light-Weight Queue Trimming"       No               Yes (few)           Low-Medium
Section 24.1.2.4, "Precise Recovery with JMS"         No               No                  High

I decided to go for the lightweight queue trimming option.  This means I won't lose any events, but I may emit a few duplicate events in the event of primary failure.  This setting causes all output events to be buffered by the secondaries until they are told by the primary that a particular event has been emitted.  To configure this option I add the following adapter to my EPN:

    <wlevs:adapter id="HAOutputAdapter" provider="ha-broadcast">
        <wlevs:listener ref="OutboundQueueAdapter" />
        <wlevs:listener ref="PrintBean" />
        <wlevs:instance-property name="keyProperties" value="timestamp"/>
        <wlevs:instance-property name="monotonic" value="true"/>
        <wlevs:instance-property name="totalOrder" value="false"/>
    </wlevs:adapter>

This uses the time of the alert (timestamp property) as the key used to identify events which have been trimmed.  This works in this application because the alert time is the time of the source event, and the times of the source events are synchronized using the HA Input Adapter.  Because this is a time value it will increase, and so I set monotonic="true".  However I may get two alerts raised at the same timestamp, and in that case I set totalOrder="false".

I also added the additional configuration to config.xml for the application:

<ha:ha-broadcast-adapter>
    <name>HAOutputAdapter</name>
    <warm-up-window-length units="seconds">15</warm-up-window-length>
    <trimming-interval units="millis">1000</trimming-interval>
</ha:ha-broadcast-adapter>

This causes the primary to tell the secondaries which is its latest emitted alert every 1 second.  This causes the secondaries to trim from their buffers all alerts prior to and including the latest emitted alert.  So in the worst case I will get one second of duplicated alerts.  It is also possible to set a number of events rather than a time period.  The trade-off here is that I can reduce synchronization overhead by having longer time intervals or more events, causing more memory to be used by the secondaries, or I can cause more frequent synchronization, using less memory in the secondaries and generating fewer duplicate alerts, but with more communication between the primary and the secondaries to trim the buffers.

The warm-up window is used to stop a secondary joining the cluster before it has been running for that time period.  The window is based on the time that the EPN needs to be running to have the same state as the other servers.  In our example application we have a CQL query that runs over a period of 10 seconds, so I set the warm-up window to be 15 seconds to ensure that a newly started server has the same state as all the other servers in the cluster.  The warm-up window should be greater than the longest query window.

Adding an External Coherence Cluster

When we are running OEP as a cluster we have additional overhead in the servers.  The HA Input Adapter is synchronizing event time across the servers, and the HA Output Adapter is synchronizing output events across the servers.  The HA Output Adapter is also buffering output events in the secondaries.  We can't do anything about this, but we can move the Coherence cache we are using outside of the OEP servers, reducing the memory pressure on those servers and also moving some of the processing outside of the server.  Making our Coherence caches external to our OEP cluster is a good idea for the following reasons:

  • Allows moving storage of cache entries outside of the OEP server JVMs hence freeing more memory for storing CQL state.
  • Allows storage of more entries in the cache by scaling cache independently of the OEP cluster.
  • Moves cache processing outside OEP servers.

To create the external Coherence cache do the following:

  • Create a new directory for our standalone Coherence servers, perhaps at the same level as the OEP domain directory.
  • Copy the tangosol-coherence-override.xml file previously created for the OEP cluster into a config directory under the Coherence directory created in the previous step.
  • Copy the coherence-cache-config.xml file from the application into a config directory under the Coherence directory created in the previous step.
  • Add the following to the tangosol-coherence-override.xml file in the Coherence config directory:
    • <coherence>
          <cluster-config>
              <member-identity>
                  <cluster-name>oep_cluster</cluster-name>
                  <member-name>Grid1</member-name>
              </member-identity>
              …
          </cluster-config>
      </coherence>
    • Important Note: The <cluster-name> must match the name of the OEP cluster as defined in the <domain><name> element in the event server's config.xml.
    • The member name is used to help identify the server.
  • Disable storage for our caches in the event servers by editing the coherence-cache-config.xml file in the application and adding the following element to the caches:
    • <distributed-scheme>
          <scheme-name>DistributedCacheType</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
              <local-scheme/>
          </backing-map-scheme>
          <local-storage>false</local-storage>
      </distributed-scheme>
    • The local-storage flag stops the OEP server from storing entries for caches using this cache schema.
    • Do not disable storage at the global level (-Dtangosol.coherence.distributed.localstorage=false) because this will disable storage on some OEP specific cache schemes as well as our application cache.  We don’t want to put those schemes into our cache servers because they are used by OEP to maintain cluster integrity and have only one entry per application per server, so are very small.  If we put those into our Coherence Cache servers we would have to add OEP specific libraries to our cache servers and enable them in our coherence-cache-config.xml, all of which is too much trouble for little or no benefit.
  • If using Unicast Discovery (this section is not required if using Multicast) then we want to make the Coherence Grid be the Well Known Address servers because we want to disable storage of entries on our OEP servers, and Coherence nodes with storage disabled cannot initialize a cluster.  To enable the Coherence servers to be primaries in the Coherence grid do the following:
    • Change the unicast-listener addresses in the Coherence servers tangosol-coherence-override.xml file to be suitable values for the machine they are running on – typically change the listen address.
    • Modify the WKA addresses in the OEP servers and the Coherence servers tangosol-coherence-override.xml file to match at least two of the Coherence servers listen addresses.
    • The following table shows how this might be configured for 2 OEP servers and 2 Cache servers
      OEP Server 1:

      <?xml version='1.0'?>
      <coherence>
        <cluster-config>
          <unicast-listener>
            <well-known-addresses>
              <socket-address id="1">
                <address>192.168.56.91</address>
                <port>9300</port>
              </socket-address>
              <socket-address id="2">
                <address>192.168.56.92</address>
                <port>9300</port>
              </socket-address>
            </well-known-addresses>
            <address>192.168.56.91</address>
            <port>9200</port>
          </unicast-listener>
        </cluster-config>
      </coherence>

      OEP Server 2:

      <?xml version='1.0'?>
      <coherence>
        <cluster-config>
          <unicast-listener>
            <well-known-addresses>
              <socket-address id="1">
                <address>192.168.56.91</address>
                <port>9300</port>
              </socket-address>
              <socket-address id="2">
                <address>192.168.56.92</address>
                <port>9300</port>
              </socket-address>
            </well-known-addresses>
            <address>192.168.56.92</address>
            <port>9200</port>
          </unicast-listener>
        </cluster-config>
      </coherence>

      Cache Server 1:

      <?xml version='1.0'?>
      <coherence>
        <cluster-config>
          <member-identity>
            <cluster-name>oep_cluster</cluster-name>
            <member-name>Grid1</member-name>
          </member-identity>
          <unicast-listener>
            <well-known-addresses>
              <socket-address id="1">
                <address>192.168.56.91</address>
                <port>9300</port>
              </socket-address>
              <socket-address id="2">
                <address>192.168.56.92</address>
                <port>9300</port>
              </socket-address>
            </well-known-addresses>
            <address>192.168.56.91</address>
            <port>9300</port>
          </unicast-listener>
        </cluster-config>
      </coherence>

      Cache Server 2:

      <?xml version='1.0'?>
      <coherence>
        <cluster-config>
          <member-identity>
            <cluster-name>oep_cluster</cluster-name>
            <member-name>Grid2</member-name>
          </member-identity>
          <unicast-listener>
            <well-known-addresses>
              <socket-address id="1">
                <address>192.168.56.91</address>
                <port>9300</port>
              </socket-address>
              <socket-address id="2">
                <address>192.168.56.92</address>
                <port>9300</port>
              </socket-address>
            </well-known-addresses>
            <address>192.168.56.92</address>
            <port>9300</port>
          </unicast-listener>
        </cluster-config>
      </coherence>

    • Note that the OEP servers do not listen on the WKA addresses; they use different port numbers even though they run on the same machines as the cache servers.
    • Also note that the Coherence servers are the ones that listen on the WKA addresses.
  • Now that the configuration is complete we can create a start script for the Coherence grid servers as follows:
    • #!/bin/sh
      MW_HOME=/home/oracle/fmw
      OEP_HOME=${MW_HOME}/ocep_11.1
      JAVA_HOME=${MW_HOME}/jrockit_160_33
      CACHE_SERVER_HOME=${MW_HOME}/user_projects/domains/oep_coherence
      CACHE_SERVER_CLASSPATH=${CACHE_SERVER_HOME}/HADemoCoherence.jar:${CACHE_SERVER_HOME}/config
      COHERENCE_JAR=${OEP_HOME}/modules/com.tangosol.coherence_3.7.1.6.jar
      JAVAEXEC=$JAVA_HOME/bin/java
      # specify the JVM heap size
      MEMORY=512m
      if [[ $1 == '-jmx' ]]; then
          JMXPROPERTIES="-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true"
          shift
      fi
      JAVA_OPTS="-Xms$MEMORY -Xmx$MEMORY $JMXPROPERTIES"
      $JAVAEXEC -server -showversion $JAVA_OPTS -cp "${CACHE_SERVER_CLASSPATH}:${COHERENCE_JAR}" com.tangosol.net.DefaultCacheServer $1
    • Note that I put the tangosol-coherence-override and the coherence-cache-config.xml files in a config directory and added that directory to my path (CACHE_SERVER_CLASSPATH=${CACHE_SERVER_HOME}/HADemoCoherence.jar:${CACHE_SERVER_HOME}/config) so that Coherence would find the override file.
    • Because my application uses in-cache processing (entry processors) I had to add a jar file containing the required classes for the entry processor to the classpath (CACHE_SERVER_CLASSPATH=${CACHE_SERVER_HOME}/HADemoCoherence.jar:${CACHE_SERVER_HOME}/config).
    • The classpath references the Coherence Jar shipped with OEP to avoid version mismatches (COHERENCE_JAR=${OEP_HOME}/modules/com.tangosol.coherence_3.7.1.6.jar).
    • This script is based on the standard cache-server.sh script that ships with standalone Coherence.
    • The –jmx flag can be passed to the script to enable Coherence JMX management beans.

We have now configured the OEP servers to use an external Coherence data grid for their application caches.  When starting we should always start at least one of the grid servers before starting the OEP servers.  This will allow the OEP servers to find the grid.  If we do start things in the wrong order then the OEP servers will block waiting for a storage enabled node to start (one of the WKA servers if using Unicast).

Summary

We have now created an OEP cluster that makes use of an external Coherence grid for application caches.  The application has been modified to ensure that the timestamps of arriving events are synchronized and that the output events are only emitted by one of the servers in the cluster.  In the event of failure we may get some duplicate events with our configuration (there are configurations that avoid duplicate events), but we will not lose any events.  The final version of the application with full HA capability is shown below:

Files

The following files are available for download:

  • Oracle Event Processing
    • Includes Coherence
  • Non-HA version of application
    • Includes test file TestData.csv and Load Test property file HADemoTest.prop
    • Includes Server.properties.Antony file to customize to point to your WLS installation
  • HA version of application
    • Includes test file TestData.csv and Load Test property file HADemoTest.prop
    • Includes Server.properties.Antony file to customize to point to your WLS installation
  • OEP Cluster Files
    • Includes config.xml
    • Includes tangosol-coherence-override.xml
    • Includes Server.properties that will need customizing for your WLS environment
  • Coherence Cluster Files
    • Includes tangosol-coherence-override.xml and coherence-cache-configuration.xml
    • includes cache-server.sh start script
    • Includes HADemoCoherence.jar with required classes for entry processor

References

The following references may be helpful:

Read database performance troubleshooting note!


Lab: Oracle Solaris 11 Administration for the AIX Sysadmin


Start buttons belong on Tiger Woods's golf cart. Give me car keys that jangle when I insert them into a 1968 Dodge Charger. The music that engine makes ... it enters your body through your soul before your ear drums even register the vibration. And give me Save buttons on browser-based interfaces, too. This amorphous invisible background save that I'm supposed to trust is happening is the brainchild of developers who put posters of Joseph Stalin on their walls.

In spite of my Luddite tendencies, I do like new technologies. I also like a variety of them. If you ask my personal opinion, the more operating systems, the better. More jobs for sysadmins. More jobs for developers. More arm-wrestling matches in the server room. And more interesting problems. That's my idea of fun.

Unfortunately, it's not The Man's idea of fun. Forces I can't possibly understand and would never take for a joy ride in a stolen Dodge Charger push for consolidation and cost-cutting with the frenzy of a four barrel carburetor sucking air at wide open throttle (WOT). Even if, like me, you prefer a more genteel IT environment, you have to adapt. And so, we sometimes wave good-bye to our friends.

If you're facing a migration away from AIX, consider Oracle Solaris. Yeah, it's designed to handle the competitive pressures of today's IT environments...

  • Cloud-ready provisioning, security, and virtualization
  • Quick to reallocate compute, storage, and network resources
  • Zones, ZFS, Dynamic Tracing, Predictive Self Healing and Trusted Extensions reduce downtime and simplify the delivery of application deployment environments
  • Optimized to run best on Oracle hardware, and run Oracle applications best
  • Automated migration, assistance, and education for DBAs and Power/AIX administrators migrating to Oracle Solaris.

... and yeah, because the Oracle stack is optimized to run best on Oracle Solaris (and Oracle Linux), it gives you some crazy good numbers compared to AIX ...

  • Up to 2.4x greater database performance
  • Up to 3.4x faster Java application server performance
  • Increased Oracle application performance: 1.9x faster for Siebel CRM and 3x faster for JD Edwards

... but it's also got soul. And it doesn't have a dumb Start button.

Below is a link to a hands-on lab and some other resources to help you understand what's involved in migrating from AIX to Oracle Solaris.

Hands-On Lab: Oracle Solaris Administration for AIX Sysadmins

by Glynn Foster

Walks an AIX sysadmin through the basic administration of Oracle Solaris 11 and how it compares to IBM AIX Enterprise in areas including installation, software packaging, file systems, user management, services, networking, and virtualization. It even helps you navigate your way through documentation, man pages, and online how-to articles.

More Resources

About the Photograph

Photograph of '68 Dodge Charger courtesy of Kobac via Wikipedia Commons Creative Commons License 2.0

- Rick

Follow me on:
Blog | Facebook | Twitter | YouTube | The Great Peruvian Novel

CSX Corporation Upgrades Databases 2x Faster With Oracle Real Application Testing


Oracle Real Application Testing Helps Premier Transportation Company Streamline and Accelerate Upgrade of 400 Oracle Databases While Maintaining Business Continuity

  • To maintain business continuity, CSX Corporation, a premier transportation company, used Oracle Real Application Testing to upgrade its Oracle Database infrastructure.
  • With more than 400 databases supporting critical commercial, packaged and proprietary business applications, including payroll, dispatching, and a customer-facing order entry system, CSX wanted to take advantage of the enhanced functionality in Oracle Database while minimizing the business impact and downtime during the migration.
  • CSX turned to Oracle Real Application Testing to streamline the upgrade process and help ensure flawless execution.
  • With Oracle Real Application Testing, CSX completed the database upgrade in less than half the time required for the company's previous database upgrade that involved a database footprint that was 30 percent smaller.
  • Oracle Real Application Testing enabled CSX to fully assess the impact of infrastructure changes and fine tune queries in a test environment before deploying the change in production, reducing risk, avoiding disruption and rework, and accelerating the overall upgrade process.
  • CSX used Oracle Enterprise Manager to analyze performance data from Oracle Real Application Testing's SQL Performance Analyzer to evaluate the impact of both prepackaged and custom SQL workloads during the Oracle Database upgrades in its Oracle E-Business Suite environment. By capturing SQL workloads for different peak periods into SQL Tuning Sets, CSX was able to create a comprehensive library of SQL queries that can be used for validation of changes. CSX plans to use Oracle Real Application Testing on an ongoing basis to test changes in the new upgraded environment.
  • In addition, CSX has implemented Oracle Enterprise Manager 12c to monitor and manage a combination of more than 500 Oracle Database and Oracle Real Application Clusters instances. Oracle Enterprise Manager provides centralized, standardized and reliable monitoring, which has allowed CSX to efficiently manage the growth in the number of servers and databases.
  • CSX is also using the Oracle Advanced Compression option of Oracle Database to enable 7x data compression rates, which has improved performance, reduced storage requirements by 21 percent, and stemmed storage growth by 19 percent. 

 Read the full press release


Announcing Oracle Mobile Security Suite: Secure Deployment of Applications and Access for Mobile


Today, Oracle has announced a new offering, Oracle Mobile Security Suite, which will provide access to sensitive applications and data on personal or corporate owned devices.  This new offering will give enterprises unparalleled capabilities in how they contain, control and enhance the mobile experience.


A great deal of effort has been placed into analyzing how corporations are leveraging the mobile platform today, as well as how they will use this platform in the future. Corporate IT has spoken loud and clear about the challenges they face around lengthy provisioning times for access to applications and services, as well as the need for managing the increased usage of applications.  Recent industry reports show how significant the risks can be. [1] A detailed assessment of one of the most popular application marketplaces shows that 100% of the top 100 paid apps have some form of rogue variant posted within the same marketplace. As credential theft is on the rise, one of the places it is being carried out is on the mobile device, through rogue apps or malware with embedded keystroke recorders or collection tools that send back other critical data from the device.

One of the great new features of the Oracle Mobile Security Suite (OMSS) is its use of containers.  Containers allow OMSS to create a secure workspace within the device, where corporate applications, email, data and more can reside.  This workspace utilizes its own secure communications back to the back-end cloud or corporate systems, independent of VPN.  This means that corporate information is maintained and managed separately from the personal content on the device, giving end users the added flexibility of using personal devices without impacting the corporate workspace.  A remote wipe of data now doesn't impact the entire device, but rather only the contents of the corporate workspace.  New policies and changes in access and applications can be applied whenever a user authenticates into their workspace, without having to rebuild or re-wrap any applications in the process, unlike other offerings.  This is a unique approach for Oracle.

More details on this new release at  http://www.oracle.com/us/corporate/press/2157116

Rounding out this offering are capabilities that enable complete end-to-end provisioning of access, single sign-on within the container, an enterprise app store, and much more.

Technical Whitepaper: Extending Enterprise Access and Governance with Oracle Mobile Security

For the latest information on Oracle's Mobile Strategy, please visit the Oracle Mobile Security Suite product page, or check back for upcoming Mobile Security postings on the Oracle IDM blog page this March. 

[1] 2013 X-Force Internet Threat Report


Hello Oracle Applications Cloud Enthusiasts


A Guest Post by Vice President Jeff Caldwell, Oracle Applications Development

We want to help you prepare for Release 8 of Oracle Applications Cloud with a Release 8 Readiness page.

This upcoming release includes more than 400 new, modern business-empowering features, which you can learn about in the following preview content:



Spotlights: These webcasts, delivered by Oracle Development, spotlight top-level messages and product themes. They are reinforced with product demos.

Release Content Documents (RCDs): These summary descriptions provide details on each new feature and product.

What's New: These are expanded discussions of each new feature and product; you'll find capability overviews, business benefits, setup considerations, usage tips, and more.

Check the Release 8 Readiness page often, as new training material and spotlights will be added over the coming weeks. You can also access the content at https://cloud.oracle.com/ under the Resources menu.

Integration for Airlines and Cargo Hubs


Thank you to Krishnaprem Bhatia, Product Manager for Oracle B2B Integration for this insightful blog post on the latest B2B integration trends for airlines and cargo hubs:

Market Trend

Many airlines today are using antiquated mainframe and proprietary systems to run their Passenger Service Systems (PSS). These systems are typically old and complex, with a high cost of maintenance. Airlines want to modernize these systems and reduce their costs by consolidating numerous point solutions and legacy applications.

The need to reduce complexity, bring down IT costs and increase flexibility is driving airlines to outsource their PSS systems to vendors such as Amadeus. Although major airlines can develop these PSS systems in-house, doing so can be more expensive, less flexible, and less feature-rich than the outsourcing option.
As airlines outsource more of their PSS systems, they need to exchange business documents such as reservations and ticketing with the outsourced provider. They also need visibility and manageability into the data flowing from outsourced systems into their enterprises. This incoming passenger data also needs to be integrated back into their internal systems. For example, different documents received from the outsourced PSS systems need to be processed and stored so that they are available to other internal systems. This has to be done using standards-based technologies for compliance and interoperability, ensuring that performance and operations SLAs are met at the same time.

How does Oracle Service Integration fit in?

Oracle B2B allows airlines to connect with their outsourced PSS systems such as Amadeus using industry standards-based technologies. Airlines can exchange different document types (typically EDI variants, non XML formats) such as passenger reservations, updates to reservations, inventory management, departure control systems and ticketing. Oracle B2B provides the ability to exchange these documents, process them, validate them, and translate them into XML for further processing by downstream components.


Airlines typically exchange information with Amadeus using two modes. In the real-time (online) mode the messages are sent 'live' by the PSS systems on an ongoing basis as they occur. In the batch mode many messages are batched together and sent at a particular time. Oracle B2B provides support for both real-time and batch modes, providing critical functionality such as document translation, validation, de-batching for these documents. It also provides the communication mechanisms such as File, FTP and MQ for exchanging these messages with outsourced systems. All this is done using standards based technologies such as standard document and exchange protocols. Once B2B is done processing the messages, these are typically sent to adjacent components within Oracle SOA Suite for message enrichment and transformations. Messages can then be stored in an enterprise warehouse where this data can be used by other internal applications. The end to end scenarios typically have high performance SLAs in terms of throughput and end to end processing time.

The products typically deployed in such scenarios include Oracle B2B, Oracle SOA Suite BPEL Process Manager for data transformation and enrichment and Oracle Data Integrator for migrating processed data into enterprise data warehouses. Customers may also choose to deploy this over Exalogic and Exadata systems for performance reasons.

Some customer examples

There are many customers using Oracle SOA B2B as described above today, including an Asian airline that went live with Oracle B2B in November 2013. Their goal was to replace their mainframe-based passenger service system with a state-of-the-art process that interfaces with Amadeus. The business scenarios included real-time integration, batch processing, no-fly list checks, and integration with Amadeus via MQ. The new solution was based on Oracle SOA Suite middleware on the Exalogic and Exadata platforms. The benefits the airline gained by deploying the new platform include reduced cost, increased flexibility and increased performance (2x for batch processing, 32x for no-fly list checks).

Other similar customers include Sri Lankan Airlines, which went live in December 2013, and All Nippon Airways (ANA), which plans to go live in 2014, along with others in the pipeline.

We also see Oracle B2B used to provide B2B SaaS services. This is becoming more common as more enterprises move toward cloud adoption in general. We already have customers in the retail sector, such as SPS Commerce, who have built their SaaS solutions using Oracle B2B, but today we also have customers in the travel segment who provide SaaS-based brokerage services. For example, Cargo Champs provides a cloud solution for cargo management to more than 89 airlines worldwide. They are the biggest cargo broker cloud platform, connecting airlines, freight carriers, cargo hubs and more. With hundreds of different endpoints integrated using multiple data formats, they estimate they will deploy 15,000 agreements and exchange 50 million messages across 7 data centers. Cargo Champs uses Oracle B2B for Custom, EDI and IATA documents, exchanging messages over File, FTP and numerous other transport protocols. They also use SOA Suite for message enrichment, business rules, transformations and routing.

There is a huge opportunity for the airline and cargo industry to improve efficiency and agility as more and more airlines optimize their systems and move towards cloud adoption in general. Industry experts predict plenty of growth in this market for many years to come. For more information on Oracle B2B, see the following link.

How to generate training and test dataset using SQL Query node in Data Miner


Overview

In Data Miner, the Classification and Regression Build nodes include a process that internally splits the input dataset into training and test datasets, which are then used by the model build and test processes within the nodes. This internal data split feature frees the user from performing an external data split and then tying the split datasets into separate build and test processes, as is required in other competitive products. However, there are times when a user may want to perform an external data split. For example, a user may want to generate a single training and test dataset and reuse them in multiple workflows. The generation of training and test datasets can be done easily via the SQL Query node.

Stratified Split

The stratified split is used internally by the Classification Build node because this technique preserves the categorical target distribution in the resulting training and test datasets, which is important for the classification model build. The following shows the SQL statements that are essentially used by the Classification Build node to produce the training and test datasets internally:

SQL statement for Training dataset

SELECT v1.*
FROM
(
    -- randomly divide members of the population into subgroups based on target classes
    SELECT a.*,
           row_number() OVER (partition by {target column} ORDER BY ORA_HASH({case id column})) "_partition_caseid"
    FROM {input data} a
) v1,
(
    -- get the count of subgroups based on target classes
    SELECT {target column},
           COUNT(*) "_partition_target_cnt"
    FROM {input data}
    GROUP BY {target column}
) v2
WHERE v1.{target column} = v2.{target column}
-- random sample subgroups based on target classes in respect to the sample size
AND ORA_HASH(v1."_partition_caseid", v2."_partition_target_cnt"-1, 0) <= (v2."_partition_target_cnt" * {percent of training dataset} / 100)


SQL statement for Test dataset

SELECT v1.*
FROM
(
    -- randomly divide members of the population into subgroups based on target classes
    SELECT a.*,
           row_number() OVER (partition by {target column} ORDER BY ORA_HASH({case id column})) "_partition_caseid"
    FROM {input data} a
) v1,
(
    -- get the count of subgroups based on target classes
    SELECT {target column},
           COUNT(*) "_partition_target_cnt"
    FROM {input data}
    GROUP BY {target column}
) v2
WHERE v1.{target column} = v2.{target column}
-- random sample subgroups based on target classes in respect to the sample size
AND ORA_HASH(v1."_partition_caseid", v2."_partition_target_cnt"-1, 0) > (v2."_partition_target_cnt" * {percent of training dataset} / 100)

The following describes the placeholders used in the SQL statements:

{target column} - the target column. It must be a categorical type.

{case id column} - the case id column. It must contain unique numbers that identify the rows.

{input data} - the input dataset.

{percent of training dataset} - the percentage of the input dataset to use for training. For example, if you want to split 60% of the input dataset into the training dataset, use the value 60. The test dataset will contain 100% - 60% = 40% of the input dataset. The training and test datasets are mutually exclusive.
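
If the input data does not already have a suitable case id, a simple way to add one is to materialize the data with a generated unique number before running the split. The statement below is only an illustrative sketch; MY_INPUT_DATA and MY_INPUT_WITH_ID are hypothetical names, not objects referenced elsewhere in this post.

-- Hypothetical helper: add a unique numeric case id column to an input table
CREATE TABLE my_input_with_id AS
SELECT ROWNUM AS case_id, t.*
FROM my_input_data t;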

Random Split

The random split is used internally by the Regression Build node because the target is usually a numerical type. The following shows the SQL statements that are essentially used by the Regression Build node to produce the training and test datasets:

SQL statement for Training dataset

SELECT v1.*
FROM {input data} v1
WHERE ORA_HASH({case id column}, 99, 0) <= {percent of training dataset}

SQL statement for Test dataset

SELECT v1.*
FROM {input data} v1
WHERE ORA_HASH({case id column}, 99, 0) > {percent of training dataset}

The following describes the placeholders used in the SQL statements:

{case id column} - the case id column. It must contain unique numbers that identify the rows.

{input data} - the input dataset.

{percent of training dataset} - the percentage of the input dataset to use for training. For example, if you want to split 60% of the input dataset into the training dataset, use the value 60. The test dataset will contain 100% - 60% = 40% of the input dataset. The training and test datasets are mutually exclusive.
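
As a concrete illustration, using the same demo table, case id column, and 60/40 split that appear in the stratified example later in this post, the random-split training query would be the sketch below; the corresponding test query simply replaces <= with >.

SELECT v1.*
FROM "INSUR_CUST_LTV_SAMPLE_N$10009" v1
WHERE ORA_HASH("CUSTOMER_ID", 99, 0) <= 60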

Use SQL Query node to create training and test dataset

Assume you want to create the training and test datasets out of the demo INSUR_CUST_LTV_SAMPLE dataset using the stratified split technique. You can create the following workflow, which utilizes SQL Query nodes to execute the above split SQL statements to generate the datasets, and then uses Create Table nodes to persist the resulting datasets.

Assume the case id is CUSTOMER_ID, the target is BUY_INSURANCE, and the training dataset is 60% of the input dataset. You can enter the following SQL statement to create the training dataset in the “SQL Query Stratified Training” SQL Query node:

SELECT v1.*
FROM
(
    -- randomly divide members of the population into subgroups based on target classes
    SELECT a.*,
           row_number() OVER (partition by "BUY_INSURANCE" ORDER BY ORA_HASH("CUSTOMER_ID")) "_partition_caseid"
    FROM "INSUR_CUST_LTV_SAMPLE_N$10009" a
) v1,
(
    -- get the count of subgroups based on target classes
    SELECT "BUY_INSURANCE",
           COUNT(*) "_partition_target_cnt"
    FROM "INSUR_CUST_LTV_SAMPLE_N$10009"
    GROUP BY "BUY_INSURANCE"
) v2
WHERE v1."BUY_INSURANCE" = v2."BUY_INSURANCE"
-- random sample subgroups based on target classes in respect to the sample size
AND ORA_HASH(v1."_partition_caseid", v2."_partition_target_cnt"-1, 0) <= (v2."_partition_target_cnt" * 60 / 100)



Likewise, you can enter the following SQL statement to create the test dataset in the “SQL Query Stratified Test” SQL Query node:

SELECT v1.*
FROM
(
    -- randomly divide members of the population into subgroups based on target classes
    SELECT a.*,
           row_number() OVER (partition by "BUY_INSURANCE" ORDER BY ORA_HASH("CUSTOMER_ID")) "_partition_caseid"
    FROM "INSUR_CUST_LTV_SAMPLE_N$10009" a
) v1,
(
    -- get the count of subgroups based on target classes
    SELECT "BUY_INSURANCE",
           COUNT(*) "_partition_target_cnt"
    FROM "INSUR_CUST_LTV_SAMPLE_N$10009"
    GROUP BY "BUY_INSURANCE"
) v2
WHERE v1."BUY_INSURANCE" = v2."BUY_INSURANCE"
-- random sample subgroups based on target classes in respect to the sample size
AND ORA_HASH(v1."_partition_caseid", v2."_partition_target_cnt"-1, 0) > (v2."_partition_target_cnt" * 60 / 100)

Now run the workflow to create the training and test datasets. You can find the table names of the persisted datasets in the associated Create Table nodes.
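
As a quick sanity check of the stratified split, you can compare the target distribution in the two persisted tables. The query below is a sketch only; TRAIN_DATA and TEST_DATA are assumed names, so substitute the table names shown in your Create Table nodes.

-- Compare the BUY_INSURANCE distribution in the persisted training and test tables
SELECT 'TRAIN' AS dataset, "BUY_INSURANCE", COUNT(*) AS cnt
FROM train_data
GROUP BY "BUY_INSURANCE"
UNION ALL
SELECT 'TEST' AS dataset, "BUY_INSURANCE", COUNT(*) AS cnt
FROM test_data
GROUP BY "BUY_INSURANCE"
ORDER BY 1, 2;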


Conclusion

This blog shows how easy it is to create the training and test datasets using the stratified split SQL statements via the SQL Query nodes. Similarly, you can generate the training and test datasets using the random split technique by replacing the SQL statements in the SQL Query nodes in the above workflow with the random split SQL statements. If a large dataset (tens of millions of rows) is used in multiple model build nodes, it may be a good idea to split the data ahead of time to optimize the overall processing time (avoiding multiple internal data splits inside the model build nodes).

Oracle repeats as BI and Analytics Leader in Gartner MQ 2014


For the 8th consecutive year, Oracle is a Leader in Gartner’s Magic Quadrant for Business Intelligence and Analytics Platform. Gartner declares that “the BI and analytics platform market is in the middle of an accelerated transformation from Business Intelligence (BI) systems used primarily for measurement and reporting to those that also support analysis, prediction, forecasting and optimization.” Oracle offers all of these wide-ranging capabilities across Business Intelligence Foundation Suite, Advanced Analytics and Real-Time Decisions.

Gartner specifically recognizes Oracle as a Leader for several key reasons. Oracle customers reported among the largest BI deployments in terms of users and data sizes. In fact, 69% of Oracle customers stated that Oracle BI is their enterprise BI standard. The broad product suite works with many heterogeneous data sources for large-scale, multi-business-unit and multi-geography deployments. The BI integration with Oracle Applications and technology, and with Oracle Hyperion EPM, simplifies deployment and administration. Not cited in the Gartner report is that Oracle BI can access and query Hadoop via a Hive Oracle Database Connector, eliminating the need to write MapReduce programs for more efficient big data analysis.

“The race is on to fill the gap in governed data discovery,” professes Gartner. In this year’s MQ, all the Leaders have been moved “westward,” to the left, to open up white space in the future for vendors who address “governed data discovery” platforms that satisfy both business users’ requirements for ease of use and enterprises’ IT-driven requirements, like security, data quality, and scalability. Although in Gartner’s view no single vendor provides governed data discovery today, Oracle Endeca Information Discovery 3.1, which became available in November 2013 after Gartner conducted the MQ report, is a complete enterprise data discovery platform that combines information of any type, from any source, empowering business user independence in balance with IT governance. Users can mash up personal data along with IT-provisioned data into easy-to-use visualizations to explore what matters most to them. IT can manage the platform to meet data quality, scalability and security requirements. Users can benefit from additional subject areas and metadata provided by integration with Oracle BI.

Gartner additionally cites other Oracle strengths, such as more than 80 packaged BI Analytic Applications that include pre-built data models, ETL scripts, reports, and dashboards, along with best-practice, cross-functional analytics that span dozens of business roles and industries. Lastly, Oracle’s large, global network of BI application partners, implementation consultants, and customer install base provides a collaborative environment to grow and innovate with BI and analytics. Gartner also cites the large uptake in Oracle BI Mobile, enabling business users to develop and deliver content on the go.


ACS - Invitation for Partners Thursday, 10th April



Oracle Advanced Customer Support Services (ACS) plans to run a 1-hour webcast on Thursday, 10th April 2014, inviting all of Oracle's EMEA Partners to learn about the ACS services available for resale on the Oracle Partner Store that help deliver successful Oracle projects, particularly for products where partners may not yet have the relevant skills and resources.

Who should attend?

  • All Partners in Customer facing roles (Sales, PreSales, Sales Consultants)

Why should you attend?

  • Learn how you can increase value as part of a product sale and increase customer satisfaction
  • Learn how you can work with ACS to fill a skills gap in existing or future Oracle projects
  • Learn what services are available for resale at point-of-sale on the Oracle Partner Store
  • Learn the value of the services and when to target the services

ACS services are complementary to your capabilities as you may:

  • choose to focus on reselling and leave services to others. ACS can fill this need.
  • choose to specialize in one area of the Oracle stack, e.g. Hardware. ACS covers the full Oracle product stack.
  • have a temporary gap (“empty bench”). ACS can fill this gap.
  • ramp up on a new technology as you continue selling. ACS can temporarily fill this gap until ramp up is complete.

ACS is complementary to services provided by partners.


I look forward to welcoming you to this webcast

Edmundo Baires-Herrera
Director, Alliances & Channels
ACS EMEA




ACS - Invitation


April 10th, 2014
10am GMT/11am CET






“For Capgemini, the collaborative project with Oracle Advanced Customer Support Services was a win-win situation.”
Capgemini

“Oracle Advanced Customer Support Services worked seamlessly with us and our partners. The excellent cross-team collaboration ensured a timely implementation, smooth go-live and problem-free switchover.”
The Trainline.







Oracle VM 3.3 Beta


Oracle VM 3.3 beta is now available. The beta software and documentation are available here.

Please read the Welcome Letter to understand the requirements of the beta testing. We rely on our Beta Program participants to provide feedback on the usability, stability, and overall quality of our product release. This feedback will focus on your experience with the new features, product documentation, support services, and training materials. By providing in-depth feedback, you can help influence Oracle’s product direction.

  • Provide us with information about your set-up here.
  • Periodically tell us what you have tried and how it is going here.

To learn more about Oracle's virtualization solutions, visit http://oracle.com/virtualization

Quadratic data in Oracle R Enterprise and Oracle Data Mining


I was working with some data which was stored in an Oracle database on a SPARC T4 server. I thought that the data had a quadratic component and I wanted to analyze the data using SQL Developer and Oracle Data Mining, a component of the Oracle Advanced Analytics Option. When I reviewed the initial analysis, I wasn't getting results that I had expected, and the fit of the model wasn't very good. I decided to feed some simple, synthetic quad data into Oracle Data Miner to ensure that I was using the tool properly.

Oracle R Enterprise was used as the tool to create and view the synthetic data.

From an R session that has the Oracle R Enterprise package installed, it is easy to access an Oracle Database:

require(ORE)
## Loading required package: ORE
## Loading required package: OREbase
## 
## Attaching package: 'OREbase'
## 
## The following object(s) are masked from 'package:base':
## 
##     cbind, data.frame, eval, interaction, order, paste, pmax,
##     pmin, rbind, table
## 
## Loading required package: OREstats
## Loading required package: MASS
## Loading required package: OREgraphics
## Loading required package: OREeda
## Loading required package: OREdm
## Loading required package: lattice
## Loading required package: OREpredict
## Loading required package: ORExml
ore.connect("SCOTT", "orcl", "sparc-T4", "TIGER", 1521)
## Loading required package: ROracle
## Loading required package: DBI

The following R function, quad1(), is used to calculate "y=ax^2 + bx + c",

where:
 - the data frame that is passed in has a column of x values.
 - a is in coefficients[feature, 1]
 - b is in coefficients[feature, 2]
 - c is in coefficients[feature, 3]

The function will simply calculate points along a parabolic line and is more complicated than it needs to be. I will leave it in this complicated format so that I can extend it to work with more interesting functions, such as a parabolic surface, later.  

quad1 <- function(df, coefficients) {
    feature <- 1

    coefficients[feature, 1] * df[, feature] * df[, feature] +
      coefficients[feature, 2] * df[, feature] +
      coefficients[feature, 3]
}

The following function, genData(), creates random "x" data points and uses func() to calculate the y values that correspond to the random x values.

genData <- function(nObservations, func, coefficients, nFeatures, scale) {
    dframe <- data.frame(x1 = rep(1, nObservations))
    for (feature in seq(nFeatures)) {
        name <- paste("x", feature, sep = "")
        dframe[name] <- runif(nObservations, -scale[feature], scale[feature])
    }
    dframe["y"] <- func(dframe, coefficients)
    return(dframe)
}

The following function, quadGraph(), is used for graphing. The points in dframe are displayed in a scatter plot. The coefficients for the known synthetic data are passed in and the corresponding line is sketched in blue. (Obviously, if you aren't working with synthetic data, it is unlikely that you will know the "true" coefficients.) The R model that is the best estimate of the data based on regression is passed in and sketched in red.

quadGraph <- function(dframe, coefficients = NULL, model = NULL, ...) {
    with(dframe, plot(x1, y))
    title(main = "Quadratic Fit")
    legend("topright", inset = 0.05, c("True", "Model"), lwd = c(2.5, 2.5),
        col = c("blue", "red"))
    xRange <- range(dframe[, "x1"])
    smoothX <- seq(xRange[1], xRange[2], length.out = 50)
    trueY <- quad1(data.frame(smoothX), coefficients)
    lines(smoothX, trueY, col = "blue")
    new <- data.frame(x1 = smoothX)
    y_estimated <- predict(model, new)
    lines(smoothX, y_estimated, col = "red")
}

Here are the settings that will be used.

nFeatures <- 1  # one feature can sketch a line, 2 a surface, ...
nObservations <- 20  # How many rows of data to create for modeling
degree <- 2  # 2 is quadratic, 3 is cubic, etc.
set.seed(2)  # I'll get the same coefficients every time I run
coefficients <- matrix(rnorm(nFeatures * (degree + 1)), nFeatures, degree + 1)
scale <- (10^rpois(nFeatures, 2)) * rnorm(nFeatures, 3)

Here, synthetic data is created that matches the quadratic function and the random coefficients.

modelData <- genData(nObservations, quad1, coefficients, nFeatures, scale)

We can make this exercise at least slightly more realistic by adding some irreducible error for the regression algorithm to deal with. Add noise.

yRange <- range(modelData[, "y"])
yJitter <- (yRange[2] - yRange[1])/10
modelData["y"] <- modelData["y"] + rnorm(nObservations, 0, yJitter)

Great. At this point I have good quadratic synthetic data which can be analyzed. Feed the synthetic data to the Oracle Database.

oreDF <- ore.push(modelData)
tableName <- paste("QuadraticSample_", nObservations, "_", nFeatures, sep = "")
ore.drop(table = tableName)
ore.create(oreDF, table = tableName)

The Oracle R Enterprise function to fit the linear model works as expected.

m <- ore.lm(y ~ x1 + I(x1 * x1), data = oreDF)
summary(m)
## 
## Call:
## ore.lm(formula = y ~ x1 + I(x1 * x1), data = oreDF)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -2.149 -0.911 -0.156  0.888  1.894 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   1.3264     0.4308    3.08   0.0068 ** 
## x1           -0.0640     0.1354   -0.47   0.6428    
## I(x1 * x1)   -0.8392     0.0662  -12.68  4.3e-10 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 '' 1 
## 
## Residual standard error: 1.28 on 17 degrees of freedom
## Multiple R-squared: 0.912,	Adjusted R-squared: 0.901 
## F-statistic: 87.7 on 2 and 17 DF,  p-value: 1.1e-09
coefficients
##         [,1]   [,2]  [,3]
## [1,] -0.8969 0.1848 1.588

Notice that the "true" coefficients, that were used to create the synthetic data are close to the values from the regression. For example, the true "a" is stored in coefficients[1,1] = -0.8969 and is close to the model's I(x1 * x1) = -0.8392. Not bad given that the model was created from only 20 data points.

quadGraph(modelData, coefficients, m)

The 20 data points, which were calculated from the "true" equation, but with noisy irreducible error added, are shown in the graph. The model, estimated by ore.lm() from the 20 noisy data points, is close to true.

[Plot: the 20 noisy synthetic data points with the true quadratic curve (blue) and the fitted model (red)]

At this point, my job is either complete, or ready to start, depending on your perspective. I'm happy that ore.lm() does a nice job of fitting, so maybe I'm done. But if you remember that my initial goal was to validate that SQL Developer and Oracle Data Miner work with quadratic data, my job has just begun. Now that I have known good quadratic synthetic data in the database, I'll use SQL Developer and Oracle Data Mining to verify that everything is fine.

One more step in R. Create a second Oracle Database table that will be used to test the regression model. 

testData <- genData(nObservations, quad1, coefficients, nFeatures, scale)
oreTestData <- ore.push(testData)
tableName <- paste("QuadraticTest_", nObservations, "_", nFeatures, sep = "")
ore.drop(table = tableName)
ore.create(oreTestData, table = tableName)

Here is the SQL Developer workflow that will be used. The synthetic data is in the Oracle Database table "QuadraticSample_20_1". The "Regress Build" node will run linear regression on the synthetic data. The test data, which was generated using R in the previous paragraph, is stored in an Oracle Database table named "QuadraticTest_20_1". The Apply node will take the regression model that has been created and apply it to the "x1" values from the test data, storing the predicted y values in an Oracle Database table named "QUADTESTRESULTS".

SQL Developer Data Mining Work Flow

So how did it work? A PhD in statistics would quickly tell you, "not well", and might look at you like you're an idiot if you don't know that a Model F Value Statistic of 3.25 isn't good. My more pedestrian approach is to plot the results of applying the model to the test data. 

Pull the test result data into R for viewing:

ore.sync()
ore.attach()
testResults <- ore.pull(QUADTESTRESULTS)
## Warning: ORE object has no unique key - using random order
colnames(testResults)[1] <- "y"
with(testResults, plot(x1, y))
title(main = "Results of Applying Model to Test Data")

 Hmm, that doesn't look parabolic to me:

Linear fit, not quadratic

Now that I'm quite sure that SQL Developer and Oracle Data Mining aren't giving the expected fit, check through the advanced settings:

Advanced Setting

There it is!!

Feature Selection

 Set the feature generation to use quadratic candidates, re-run the model, and bring the new results back into R:

Data viewed from R looks parabolic

And your statistician friends will be happy because the new model has a Model F Value Statistic of 124. Exciting, right? 

Now, off to work on parabolic surfaces...

Chiyoda Corporation Case Study: "Oracle Primavera Gives Visibility into Projects Around the World, Enabling Rapid Action and Decision-Making" (PROFIT article)


Today we have some excellent news to share with you.

Oracle has put together an article on Chiyoda Corporation's (千代田化工建設) GBM (Global Business Management) system, which went into service in April 2013, covering the background, objectives, and results of Chiyoda Corporation's adoption of Oracle Primavera.

  • 大木英介氏, Executive Officer and Acting General Manager, Global Project Management Division, Chiyoda Corporation (千代田化工建設株式会社)
  • 増川順一氏, Senior Director and GM of the IT Management Unit, Global Project Management Division, Chiyoda Corporation
  • 櫻井泰氏, Acting GM of the N-IT Business Unit and SL of the EPM Solution Section, IT Business Division, Chiyoda System Technologies (千代田システムテクノロジーズ株式会社)

From these three individuals, we were able to draw out valuable insights on how data management should support project-based businesses going forward, as well as on the IT requirements of global business and how to meet them.

We encourage you to read the article and use it to help gain a competitive advantage in the global market.

The article can be downloaded and read here.


This article is excerpted from "PROFIT JAPAN Volume 22, February 2014," published by Oracle. Please also take a look at the PROFIT magazine. For more information about PROFIT, please contact your Oracle sales representative or visit our website.

KPI Risk Assessment at bpmNext2013 – Manoj Das, Oracle


In many scenarios, such as in a call center, business users want to be alerted if a KPI threshold has not yet been violated but is at risk because the KPI has been trending up. Oracle Business Process Management’s BAM Composer allows non-technical business users to create temporal BAM queries, including trending measures. Through simple point-and-click selection in a browser window, BAM Composer automatically generates the complex CQL statement implementing the business query. It also supports the creation of mashups combining strategic BI data with operational BAM data and external sources - read more here.

SOA & BPM Partner Community

For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

