Oracle Appliance Manager patch bundle 2.9 (OAK 2.9) was released on February 18th, 2014 and is available for download as patch 17630388.
As always there are features, enhancements, and some bug fixes included with this release.
In this one-hour webcast, Scott Creighton, Oracle VP of Sales Cloud Product Management, discusses the Oracle Sales Cloud Release 8 functionality improvements and the solution's near-future strategy, as well as new market differentiators such as PaaS.
When WebLogic Server is configured to authenticate users against Active Directory, you may see an authentication failure like the following in the server debug log:
<Debug> <SecurityAtn> <MyDomain> <AdminServer> <[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)' for workmanager: consoleapp@null@consoleWorkManager> <<WLS Kernel>> <> <593625378f0917fe:-23dcaa48:143ea3e7180:-8000-0000000000000400> <1391205135889> <BEA-000000> <weblogic.security.service.internal.WLSJAASLoginServiceImpl$ServiceImpl.authenticate authenticate failed for user MyUser>
This can occur when the default authenticator's control flag is left at its default value of REQUIRED. The login is then rejected by the default authenticator because it is not aware of the users defined in Active Directory.
To fix the issue:
1. Go to Admin Console > Security Realms > <Your Realm> > Providers.
2. Move the Active Directory provider to the top of the list and set its Control Flag to SUFFICIENT.
3. Set the default authenticator's Control Flag to OPTIONAL.
You can read more in this My Oracle Support document:
How to Configure Active Directory as the LDAP Provider for WebLogic Server (Doc ID 1299072.1)
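If you prefer to script the control-flag changes rather than use the console, a minimal WLST sketch along the following lines should work. The domain name, realm name, provider name, credentials, and admin URL below are placeholders, reordering the providers is still easiest in the console, and a server restart is typically required afterwards:

connect('weblogic', '<password>', 't3://adminhost:7001')   # placeholder credentials and admin URL
edit()
startEdit()
# Active Directory provider: SUFFICIENT, so a successful AD login is enough on its own
cd('/SecurityConfiguration/MyDomain/Realms/myrealm/AuthenticationProviders/MyADProvider')
cmo.setControlFlag('SUFFICIENT')
# Default authenticator: OPTIONAL, so it no longer vetoes users it does not know about
cd('/SecurityConfiguration/MyDomain/Realms/myrealm/AuthenticationProviders/DefaultAuthenticator')
cmo.setControlFlag('OPTIONAL')
save()
activate()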
Enjoy!
The various data visualization (DVT) graphs provided as part of the ADF Faces component set offer a very rich and comprehensive set of visualizations for representing your data. One of the issues that some folks struggle with, however, is that not all of the features can be controlled in a completely declarative manner.
In this article I want to concentrate on the labeling capabilities for a graph axis, looking first at the declarative approaches and then following up with the more advanced programmatic option.
Control over the labels on the axis tick points is a good example of making the simple things declarative and the advanced things possible. For basic numeric formatting you can do everything with tags - for example formatting as currency, percentage or with a certain precision.
This is a default bar graph plotting employee salary against name. Notice how the Y1 axis has defaulted to a fairly sensible representation of the salary data, using a 0-14K scale:
I can change that default scaling by setting the scaling attribute in the <dvt:y1TickLabel> tag. This allows scaling at the level of none | thousand | million | billion | trillion | quadrillion (enough to show national debt then!):
<dvt:y1TickLabel id="y1TickLabel1" scaling="none"/>
Changes the graph to:
We can then further change the pattern of the numbers themselves by embedding <af:convertNumber> inside of the <dvt:y1TickLabel> tag.
e.g.
<dvt:y1TickLabel id="y1TickLabel1" scaling="none"><af:convertNumber type="currency" currencyCode="USD"/></dvt:y1TickLabel>
Adds currency formatting:
And using the <dvt:graphFont> we can change colors and style:
<dvt:y1TickLabel id="y1TickLabel1" scaling="none">
  <dvt:graphFont name="SansSerif" size="8" color="#FF0000" bold="true" italic="true"/>
  <af:convertNumber type="currency" currencyCode="USD"/>
</dvt:y1TickLabel>
Giving:
So we can achieve quite a lot simply by using the tags. However, what about a more complex requirement, such as replacing a numerical value on an axis with a totally different string, e.g. converting to a Roman numeral (I, IV, XII, etc.) or converting a millisecond value to a formatted date string? To do this, ADF provides a simple callback that you can implement to map a value to whatever string you need.
Here's a simple case where I've plotted the salaries in department 100 of the standard HR EMPLOYEES demo table against the HireDate on a scatter plot. For the sake of illustration I've converted the HireDate to its time value (i.e. a long value representing the number of milliseconds since 01/01/1970). In a real graph I'd use the proper support that we have for representing time and date and just map a date object; then you get to use the timeSelector and can set the formatting directly. Bear with me, though, because I'm just illustrating the point here.
Here's the default output with the millisecond version of the date; as you can see, the millisecond value gets automatically scaled to the billions level.
To override the default representation of the millisecond value we will need to create a java class that implements the oracle.dss.graph.TickLabelCallback interface. Here's the simple example I'll use in this case:
import java.io.Serializable;
import java.text.SimpleDateFormat;

import oracle.dss.graph.DataTickLabelInfo;
import oracle.dss.graph.GraphConstants;
import oracle.dss.graph.TickLabelCallback;
import oracle.dss.graph.TickLabelInfo;

public class MSToDateFormatLabelCallback implements TickLabelCallback, Serializable {

    @Override
    public String getTickLabel(TickLabelInfo tickLabelInfo, int axisID) {
        String label = null;
        if (axisID == GraphConstants.X1TICKLABEL) {
            // For the X1 axis, recover the millisecond value and format it as MM/yy
            long timeInMillis = (long) ((DataTickLabelInfo) tickLabelInfo).getValue();
            SimpleDateFormat fmt = new SimpleDateFormat("MM/yy");
            label = fmt.format(timeInMillis);
        } else {
            // All other axes just show the raw numeric value
            label = "" + ((DataTickLabelInfo) tickLabelInfo).getValue();
        }
        return label;
    }
}
As you can see, the formatting is applied only to the specified axis, and we have to cast the tickLabelInfo argument down to a DataTickLabelInfo in order to gain access to the value that is being applied. Note that the callback class must also be Serializable.
Once you have this class, you need to apply it to the graph instance by calling the relevant set*TickLabelCallback method. For example, I might set this in the backing bean for the page, inside the setter referenced by the binding attribute of the DVT graph component:
public class GraphPageHandler {
    private UIGraph scatterPlot;

    public void setScatterPlot(UIGraph scatterPlot) {
        this.scatterPlot = scatterPlot;
        // Register the custom callback as soon as the component binding is injected
        scatterPlot.setX1TickLabelCallback(new MSToDateFormatLabelCallback());
    }

    public UIGraph getScatterPlot() {
        return scatterPlot;
    }
}
Now, with the callback in place, here's the result:
The results are in for the Java EE 8 Community Survey. We've had a terrific response to the survey, with over 2,500 participants in Part 1 and over 1,800 in Part 2!
You can find a summary of the results at https://java.net/projects/javaee-spec/downloads/download/JavaEE8_Community_Survey_Results.pdf
The next phase of this information gathering involves asking the community for feedback on prioritizing the most highly rated features from Parts 1 and 2 of the survey. You can read about it here:
https://blogs.oracle.com/theaquarium/entry/java_ee_community_survey_part
It would be great if you could give us your views on Java EE 8 AND also spread the word in your network. This is how the Java community builds releases!
Recently I was working with Oracle Event Processing (OEP) and needed to set it up as part of a high availability cluster. OEP uses Coherence for quorum membership in an OEP cluster. Because the solution used caching, it was also necessary to include access to external Coherence nodes. Input messages needed to be duplicated across multiple OEP streams, so a JMS Topic adapter had to be configured. Finally, only one copy of each output event was desired, requiring the use of an HA output adapter. In this blog post I will go through the steps required to implement a true HA OEP cluster.
The diagram below shows a very simple non-HA OEP configuration:
Events are received from a source (JMS in this blog). The events are processed by an event processing network which makes use of a cache (Coherence in this blog). Finally any output events are emitted. The output events could go to any destination but in this blog we will emit them to a JMS queue.
OEP provides high availability by having multiple event processing instances processing the same event stream in an OEP cluster. One instance acts as the primary and the other instances act as secondary processors. Usually only the primary will output events as shown in the diagram below (top stream is the primary):
The actual event processing is the same as in the previous non-HA example. What is different is how input and output events are handled. Because we want to minimize or avoid duplicate events, we have added an HA output adapter to the event processing network. This adapter acts as a filter, so that only the primary stream will emit events to our output queue. If the processing of events within the network depends on the time at which events are received, then it is necessary to synchronize the arrival timestamps of events across the cluster by using an HA input adapter.
Let's begin by setting up the base OEP cluster. To do this we create new OEP configurations on each machine in the cluster. The steps are outlined below. Note that the same steps are performed on each machine for each server that will run on that machine:
Now that we have created our servers, we need to configure them so that they can find each other. OEP uses Oracle Coherence to determine cluster membership. Coherence clusters can use either multicast or unicast to discover already running members of a cluster. Multicast has the advantage that it is easy to set up and scales better (see http://www.ateam-oracle.com/using-wka-in-large-coherence-clusters-disabling-multicast/), but it has a number of challenges, including failure to propagate through routers by default and accidentally joining the wrong cluster because someone else chose the same multicast settings. We will show how to use both unicast and multicast to discover the cluster.
Multicast Discovery | Unicast Discovery
---|---
Coherence multicast uses a class D multicast address that is shared by all servers in the cluster. On startup a Coherence node broadcasts a message to the multicast address looking for an existing cluster. If no one responds, the node will start the cluster. | Coherence unicast uses Well Known Addresses (WKAs). Each server in the cluster needs a dedicated listen address/port combination. A subset of these addresses is configured as WKAs and shared between all members of the cluster. As long as at least one of the WKAs is up and running, servers can join the cluster. If a server does not find any cluster members, it checks whether its own listen address and port are in the WKA list. If they are, that server will start the cluster; otherwise it will wait for a WKA server to become available.
You should now have a working OEP cluster. Check the cluster by starting all the servers.
Look for a message like the following on the first server to start to indicate that another server has joined the cluster:
<Coherence> <BEA-2049108> <The domain membership has changed to [server2, server1], the new domain primary is "server1">
Log on to the Event Processing Visualizer of one of the servers – http://<hostname>:<port>/wlevs. Select the cluster name on the left and then select the group “AllDomainMembers”. You should see a list of all the running servers in the “Servers of Group – AllDomainMembers” section.
Now that we have a working OEP cluster, let us look at a simple application that can be used as an example of how to cluster-enable an application. This application models service request tracking for hardware products. The application we will use performs the following checks:
- Use case 1: raise an alert if an expected event for a service request is not received within a given time window (10 seconds in this example).
- Use case 2: raise an alert if the same hardware TAG appears on more than one service request (SR).
Note that use case 1 is nicely time-bounded – in this case the time window is 10 seconds – so it is an ideal candidate to be implemented entirely in CQL.
Use case 2 has no time constraint, so over time there could be a very large number of CQL queries running, each looking for a matching TAG on a different SRID. In this case it is better to put the TAGs into a cache and search the cache for duplicate tags, which reduces the amount of state information held in the OEP engine.
The sample application to implement this is shown below:
Messages are received from a JMS Topic (InboundTopicAdapter). Test messages can be injected via a CSV adapter (RequestEventCSVAdapter). Alerts are sent to a JMS Queue (OutboundQueueAdapter) and also printed to the server standard output (PrintBean). Use case 1 is implemented by the MissingEventProcessor. Use case 2 is implemented by inserting the TAG into a cache (InsertServiceTagCacheBean) using a Coherence event processor and then querying the cache for each new service request (DuplicateTagProcessor); if the same tag is already associated with an SR in the cache, an alert is raised. The RaiseEventFilter is used to filter out existing service requests from the use case 2 stream.
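To make the use case 2 logic concrete, here is a rough sketch of the duplicate-tag check in Python pseudocode. This is illustrative only: the real application implements it with the Coherence cache, InsertServiceTagCacheBean, and DuplicateTagProcessor, and the field names and the in-memory dictionary standing in for the cache are assumptions.

# The dictionary stands in for the Coherence cache keyed by hardware TAG.
tag_to_sr = {}

def check_service_request(sr_id, tag):
    existing_sr = tag_to_sr.get(tag)
    if existing_sr is not None and existing_sr != sr_id:
        # The same TAG is already associated with a different service request: raise an alert.
        print("ALERT: tag %s already linked to SR %s (new SR %s)" % (tag, existing_sr, sr_id))
    else:
        # First time we see this TAG (or a repeat of the same SR): remember it in the cache.
        tag_to_sr[tag] = sr_id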
The non-HA version of the application is available to download here.
We will use this application to demonstrate how to HA enable an application for deployment on our cluster.
A CSV file (TestData.csv) and a load generator properties file (HADemoTest.prop) are provided to test the application by injecting events using the CSV adapter.
Note that the application reads a configuration file (System.properties) which should be placed in the domain directory of each event server.
Before deploying an application to a cluster it is a good idea to create a group in the cluster. Multiple servers can be members of this group. To add a group to an event server just add an entry to the <cluster> element in config.xml as shown below:
<cluster>
…
<groups>HAGroup</groups>
</cluster>
Multiple servers can be members of a group and a server can be a member of multiple groups. This allows you to have different levels of high availability in the same event processing cluster.
Deploy the application using the Visualizer. Target the application at the group you created, or the AllDomainMembers group.
Test the application, typically using the CSV adapter. Note that using a CSV adapter sends all the events to a single event server. To get the test events in front of every server in the cluster, we add a JMS output adapter (OutboundTopicAdapter) to our application and send the events from the CSV adapter to this outbound JMS adapter, as shown below:
So now we are able to send events via CSV to an event processor that in turn sends the events to a JMS topic. But we still have a few challenges.
The first challenge is managing input. Because OEP relies on the same event stream being processed by multiple servers, we need to make sure that all our servers get the same messages from the JMS Topic. To do this we configure the JMS connection factory to have an Unrestricted Client ID. This allows multiple clients (OEP servers in our case) to use the same connection factory. Client IDs are mandatory when using durable topic subscriptions. We also need each event server to have its own subscriber ID for the JMS Topic; this ensures that each server will get a copy of all the messages posted to the topic. If we used the same subscriber ID for all the servers, the messages would be distributed across the servers, with each server seeing a completely disjoint set of messages from the other servers in the cluster. This is not what we want, because each server should see the same event stream. We can use the server name as the subscriber ID, as shown in the excerpt from our application below:
<wlevs:adapter id="InboundTopicAdapter" provider="jms-inbound">
…
<wlevs:instance-property name="durableSubscriptionName"
value="${com_bea_wlevs_configuration_server_ClusterType.serverName}" />
</wlevs:adapter>
This works because I have placed a ConfigurationPropertyPlaceholderConfigurer bean in my application, as shown below; this same bean is also used to access properties from a configuration file:
<bean id="ConfigBean"
class="com.bea.wlevs.spring.support.ConfigurationPropertyPlaceholderConfigurer">
<property name="location" value="file:../Server.properties"/>
</bean>
With this configuration each server will now get a copy of all the events.
As our application relies on elapsed time we should make sure that the timestamps of the received messages are the same on all servers. We do this by adding an HA Input adapter to our application.
<wlevs:adapter id="HAInputAdapter" provider="ha-inbound">
<wlevs:listener ref="RequestChannel" />
<wlevs:instance-property name="keyProperties"
value="EVID" />
<wlevs:instance-property name="timeProperty" value="arrivalTime"/>
</wlevs:adapter>
The HA Adapter sets the given “timeProperty” in the input message to be the current system time. This time is then communicated to other HAInputAdapters deployed to the same group. This allows all servers in the group to have the same timestamp in their event. The event is identified by the “keyProperties” key field.
To allow the downstream processing to treat the timestamp as an arrival time, the downstream channel is configured with an “application-timestamped” element that sets the arrival time of the event. This is shown below:
<wlevs:channel id="RequestChannel" event-type="ServiceRequestEvent">
<wlevs:listener ref="MissingEventProcessor" />
<wlevs:listener ref="RaiseEventFilterProcessor" />
<wlevs:application-timestamped>
<wlevs:expression>arrivalTime</wlevs:expression>
</wlevs:application-timestamped>
</wlevs:channel>
Note the property set in the HAInputAdapter is used to set the arrival time of the event.
So now all servers in our cluster have the same events arriving from a topic, and each event arrival time is synchronized across the servers in the cluster.
Note that an OEP cluster has multiple servers processing the same input stream. Obviously, if we have the same inputs, synchronized to appear to arrive at the same time, then we will get the same outputs, which is central to OEP's promise of high availability. So when an alert is raised by our application, it will be raised by every server in the cluster. If we have 3 servers in the cluster, we will get 3 copies of the same alert appearing on our alert queue. This is probably not what we want. To fix this we take advantage of an HA Output Adapter. Unlike input, where there is a single HA Input Adapter, there are multiple HA Output Adapters, each with distinct performance and behavioral characteristics. The table below is taken from the Oracle® Fusion Middleware Developer's Guide for Oracle Event Processing and shows the different levels of service and performance impact:
High Availability Option | Missed Events? | Duplicate Events? | Performance Overhead
---|---|---|---
Section 24.1.2.1, "Simple Failover" | Yes (many) | Yes (few) | Negligible
Section 24.1.2.2, "Simple Failover with Buffering" | Yes (few) | Yes (many) | Low
Section 24.1.2.3, "Light-Weight Queue Trimming" | No | Yes (few) | Low-Medium
Section 24.1.2.4, "Precise Recovery with JMS" | No | No | High
I decided to go for the lightweight queue trimming option. This means I won't lose any events, but I may emit a few duplicate events in the event of a primary failure. This setting causes all output events to be buffered by the secondaries until they are told by the primary that a particular event has been emitted. To configure this option I add the following adapter to my EPN:
<wlevs:adapter id="HAOutputAdapter" provider="ha-broadcast">
<wlevs:listener ref="OutboundQueueAdapter" />
<wlevs:listener ref="PrintBean" />
<wlevs:instance-property name="keyProperties" value="timestamp"/>
<wlevs:instance-property name="monotonic" value="true"/>
<wlevs:instance-property name="totalOrder" value="false"/>
</wlevs:adapter>
This uses the time of the alert (the timestamp property) as the key used to identify events which have been trimmed. This works in this application because the alert time is the time of the source event, and the times of the source events are synchronized using the HA Input Adapter. Because this is a time value it will always increase, so I set monotonic="true". However, I may get two alerts raised at the same timestamp, and in that case I set totalOrder="false".
I also added the additional configuration to config.xml for the application:
<ha:ha-broadcast-adapter>
<name>HAOutputAdapter</name>
<warm-up-window-length units="seconds">15</warm-up-window-length>
<trimming-interval units="millis">1000</trimming-interval>
</ha:ha-broadcast-adapter>
This causes the primary to tell the secondaries which is its latest emitted alert every 1 second. The secondaries then trim from their buffers all alerts prior to and including the latest emitted alert, so in the worst case I will get one second of duplicated alerts. It is also possible to set a number of events rather than a time period. The trade-off is this: longer intervals (or larger event counts) reduce synchronization overhead but cause more memory to be used by the secondaries, while more frequent synchronization uses less memory in the secondaries and generates fewer duplicate alerts, at the cost of more communication between the primary and the secondaries to trim the buffers.
The warm-up window is used to stop a secondary joining the cluster before it has been running for that time period. The window is based on the time that the EPN needs to be running to have the same state as the other servers. In our example application we have a CQL query with a 10-second window, so I set the warm-up window to 15 seconds to ensure that a newly started server has the same state as all the other servers in the cluster. The warm-up window should be greater than the longest query window.
When we run OEP as a cluster, we incur additional overhead in the servers. The HA Input Adapter is synchronizing event time across the servers, and the HA Output Adapter is synchronizing output events across the servers and buffering output events in the secondaries. We can't do anything about this, but we can move the Coherence cache we are using outside of the OEP servers, reducing the memory pressure on those servers and also moving some of the processing outside of the server. Making our Coherence caches external to our OEP cluster is a good idea for the following reasons:
To create the external Coherence cache do the following:
Each server gets its own Coherence configuration file – one each for OEP Server 1, OEP Server 2, Cache Server 1, and Cache Server 2 (the full XML listings are not reproduced here).
We have now configured Coherence to use an external data grid for its application caches. When starting up, we should always start at least one of the grid servers before starting the OEP servers; this allows the OEP servers to find the grid. If we start things in the wrong order, the OEP servers will block, waiting for a storage-enabled node to start (one of the WKA servers if using unicast).
We have now created an OEP cluster that makes use of an external Coherence grid for application caches. The application has been modified to ensure that the timestamps of arriving events are synchronized and that output events are only emitted by one of the servers in the cluster. In the event of failure we may get some duplicate events with our configuration (there are configurations that avoid duplicate events), but we will not lose any events. The final version of the application, with full HA capability, is shown below:
The following files are available for download:
The following references may be helpful:
Don't forget to read the database performance troubleshooting note before logging an SR:
Start buttons belong on Tiger Woods' golf cart. Give me car keys that jangle when I insert them into a 1968 Dodge Charger. The music that engine makes ... it enters your body through your soul before your ear drums even register the vibration. And give me Save buttons on browser-based interfaces, too. This amorphous invisible background save that I'm supposed to trust is happening is the brainchild of developers who put posters of Joseph Stalin on their walls.
In spite of my Luddite tendencies, I do like new technologies. I also like a variety of them. If you ask my personal opinion, the more operating systems, the better. More jobs for sysadmins. More jobs for developers. More arm-wrestling matches in the server room. And more interesting problems. That's my idea of fun.
Unfortunately, it's not The Man's idea of fun. Forces I can't possibly understand and would never take for a joy ride in a stolen Dodge Charger push for consolidation and cost-cutting with the frenzy of a four barrel carburetor sucking air at wide open throttle (WOT). Even if, like me, you prefer a more genteel IT environment, you have to adapt. And so, we sometimes wave good-bye to our friends.
If you're facing a migration away from AIX, consider Oracle Solaris. Yeah, it's designed to handle the competitive pressures of today's IT environments...
- Cloud-ready provisioning, security, and virtualization
- Quick to reallocate compute, storage, and network resources
- Zones, ZFS, Dynamic Tracing, Predictive Self Healing and Trusted Extensions reduce downtime and simplify the delivery of application deployment environments
- Optimized to run best on Oracle hardware, and run Oracle applications best
- Automated migration, assistance, and education for DBAs and Power/AIX administrators migrating to Oracle Solaris.
... and yeah, because the Oracle stack is optimized to run best on Oracle Solaris (and Oracle Linux), it gives you some crazy good numbers compared to AIX ...
- Up to 2.4x greater database performance
- Up to 3.4x faster Java application server performance
- Increased Oracle application performance: 1.9x faster for Siebel CRM and 3x faster for JD Edwards
... but it's also got soul. And it doesn't have a dumb Start button.
Below is a link to a hands-on lab and some other resources to help you understand what's involved in migrating from AIX to Oracle Solaris.
by Glynn Foster
Walks an AIX sysadmin through the basic administration of Oracle Solaris 11 and how it compares to IBM AIX Enterprise in areas including installation, software packaging, file systems, user management, services, networking, and virtualization. It even helps you navigate your way through documentation, man pages, and online how-to articles.
Photograph of '68 Dodge Charger courtesy of Kobac via Wikimedia Commons, Creative Commons License 2.0
- Rick
Follow me on:
Blog | Facebook | Twitter | YouTube | The Great Peruvian Novel
Oracle Real Application Testing Helps Premier Transportation Company Streamline and Accelerate Upgrade of 400 Oracle Databases While Maintaining Business Continuity
Read the full press release
Today, Oracle has announced a new offering, Oracle Mobile Security Suite, which will provide access to sensitive applications and data on personal or corporate owned devices. This new offering will give enterprises unparalleled capabilities in how they contain, control and enhance the mobile experience.
A great deal of effort has been put into analyzing how corporations are leveraging the mobile platform today, as well as how they will use this platform in the future. Corporate IT has spoken loud and clear about the challenges they face around lengthy provisioning times for access to applications and services, as well as the need to manage the increased usage of applications. Recent industry reports show how significant the risks can be.1 A detailed assessment of one of the most popular application marketplaces shows that 100% of the top 100 paid apps have some form of rogue variant posted within the same marketplace. And as credential theft rises, one of the channels through which it is carried out is the mobile device itself, via rogue apps or malware with embedded keystroke recorders or collection tools that send other critical data back from the device.
One of the great new features of the Oracle Mobile Security Suite (OMSS) is its use of containers. Containers allow OMSS to create a secure workspace within the device, where corporate applications, email, data, and more can reside. This workspace uses its own secure communications back to the back-end cloud or corporate systems, independent of VPN. This means that corporate information is maintained and managed separately from the personal content on the device, giving end users the added flexibility of using personal devices without impacting the corporate workspace. A remote wipe no longer affects the entire device, only the contents of the corporate workspace. New policies and changes in access and applications can be applied whenever a user authenticates into their workspace, without having to rebuild or re-wrap any applications in the process, unlike other offerings. This is a unique approach from Oracle.
More details on this new release at http://www.oracle.com/us/corporate/press/2157116
Rounding out this offering are capabilities that enable complete end-to-end provisioning of access, single sign-on within the container, an enterprise app store, and much more.
Technical Whitepaper: Extending Enterprise Access and Governance with Oracle Mobile Security
For the latest information on Oracle's Mobile Strategy, please visit the Oracle Mobile Security Suite product page, or check back for upcoming Mobile Security postings on the Oracle IDM blog page this March.
1 2013 X-Force Internet Threat Report
A Guest Post by Vice President Jeff Caldwell, Oracle Applications Development
We want to help you prepare for Release 8 of Oracle Applications Cloud with a Release 8 Readiness page.
This upcoming release includes more than 400 new, modern business-empowering features, which you can learn about in the following preview content:
Spotlights: These webcasts, delivered by Oracle Development, spotlight top-level messages and product themes. They are reinforced with product demos.
Release Content Documents (RCDs): These summary descriptions provide details on each new feature and product.
What's New: These are expanded discussions of each new feature and product; you'll find capability overviews, business benefits, setup considerations, usage tips, and more.
Check the Release 8 Readiness page often, as new training material and spotlights will be added over the coming weeks. You can also access the content at https://cloud.oracle.com/ under the Resources menu.
Thank you to Krishnaprem Bhatia, Product Manager for Oracle B2B Integration, for this insightful blog post on the latest B2B integration trends for airlines and cargo hubs:
Market Trend
Many airlines today maintain their Passenger Service Systems (PSS) on antiquated mainframe and proprietary systems, which are old and complex and carry a high cost of maintenance. Airlines want to modernize these systems and reduce their costs by consolidating numerous point solutions and legacy applications.
The need to reduce complexity, bring down IT costs, and increase flexibility is driving airlines to outsource their PSS systems to vendors such as Amadeus. Although major airlines can develop these PSS systems in-house, doing so can be more expensive, less flexible, and less feature-rich than the outsourcing option.
As airlines outsource more of their PSS systems, they need to exchange business documents such as reservations and ticketing with the outsourced provider. They also need visibility into, and manageability of, the data flowing from outsourced systems into their enterprises. This incoming passenger data also needs to be integrated back into their internal systems. For example, different documents received from the outsourced PSS systems need to be processed and stored so that they are available to other internal systems. This has to be done using standards-based technologies for compliance and interoperability, while ensuring that performance and operational SLAs are met.
How does Oracle Service Integration fit in?
Oracle B2B allows airlines to connect with their outsourced PSS systems such as Amadeus using industry standards-based technologies. Airlines can exchange different document types (typically EDI variants, non XML formats) such as passenger reservations, updates to reservations, inventory management, departure control systems and ticketing. Oracle B2B provides the ability to exchange these documents, process them, validate them, and translate them into XML for further processing by downstream components.
Airlines typically exchange information with Amadeus using two modes. In the real-time (online) mode the messages are sent 'live' by the PSS systems on an ongoing basis as they occur. In the batch mode many messages are batched together and sent at a particular time. Oracle B2B provides support for both real-time and batch modes, providing critical functionality such as document translation, validation, de-batching for these documents. It also provides the communication mechanisms such as File, FTP and MQ for exchanging these messages with outsourced systems. All this is done using standards based technologies such as standard document and exchange protocols. Once B2B is done processing the messages, these are typically sent to adjacent components within Oracle SOA Suite for message enrichment and transformations. Messages can then be stored in an enterprise warehouse where this data can be used by other internal applications. The end to end scenarios typically have high performance SLAs in terms of throughput and end to end processing time.
The products typically deployed in such scenarios include Oracle B2B, Oracle SOA Suite BPEL Process Manager for data transformation and enrichment and Oracle Data Integrator for migrating processed data into enterprise data warehouses. Customers may also choose to deploy this over Exalogic and Exadata systems for performance reasons.
Some customer examples
There are many customers using Oracle SOA B2B as described above today, including an Asian airline that went live with Oracle B2B in November 2013. Their goal was to replace their mainframe-based passenger service system with a state-of-the-art process that interfaces with Amadeus. The business scenarios included real-time integration, batch processing, no-fly list checks, and integration with Amadeus via MQ. The new solution was based on Oracle SOA Suite middleware on the Exalogic and Exadata platforms. The benefits for the airline of deploying the new platform include reduced cost, increased flexibility, and increased performance (2x for batch processing, 32x for no-fly list checks).
Other similar customers include Sri Lankan Airlines, who went live in December 2013, and All Nippon Airways (ANA), planning to go live in 2014, along with others in the pipeline.
We also see Oracle B2B being used to provide B2B SaaS services. This is becoming more common as more enterprises move towards cloud adoption in general. We already have customers in the retail sector, such as SPS Commerce, who have built their SaaS solutions using Oracle B2B, but today we also have customers in the travel segment who are providing SaaS-based brokerage services. For example, Cargo Champs is providing a cloud solution for cargo management to more than 89 airlines worldwide. They are the biggest cargo broker cloud platform, connecting airlines, freight carriers, cargo hubs, and more. With hundreds of different endpoints integrated using multiple data formats, they expect to deploy 15,000 agreements and exchange 50 million messages over 7 data centers. Cargo Champs is using Oracle B2B for Custom, EDI, and IATA documents, exchanging messages over File, FTP, and numerous other transport protocols. They are also using SOA Suite for message enrichment, business rules, transformations, and routing.
There is a huge opportunity for the airline and cargo industry to improve efficiency and agility as more and more airlines optimize their systems and move towards cloud adoption. Industry experts predict plenty of growth in this market for many years to come. For more information on Oracle B2B, see the following link.
Overview
In Data Miner, the Classification and Regression Build nodes include a process that splits the input dataset into training and test datasets internally, which are then used by the model build and test processes within the nodes. This internal data split feature relieves the user from performing an external data split and then tying the split datasets into separate build and test processes, as is required in other competing products. However, there are times when a user may want to perform an external data split. For example, a user may want to generate a single pair of training and test datasets and reuse them in multiple workflows. The generation of training and test datasets can be done easily via the SQL Query node.
The stratified split is used internally by the Classification Build node because this technique preserves the categorical target distribution in the resulting training and test datasets, which is important for the classification model build. The following shows the SQL statements that are essentially used by the Classification Build node to produce the training and test datasets internally:
SQL statement for Training dataset
SELECT
v1.*
FROM
(
-- randomly divide members of the population into subgroups based on target classes
SELECT a.*,
row_number() OVER (partition by {target column} ORDER BY ORA_HASH({case id column})) "_partition_caseid"
FROM {input data} a
) v1,
(
-- get the count of subgroups based on target classes
SELECT {target column},
COUNT(*) "_partition_target_cnt"
FROM {input data} GROUP BY {target column}
) v2
WHERE v1.{target column} = v2.{target column}
-- random sample subgroups based on target classes in respect to the sample size
AND ORA_HASH(v1."_partition_caseid", v2."_partition_target_cnt"-1, 0) <= (v2."_partition_target_cnt" * {percent of training dataset} / 100)
SQL statement for Test dataset
SELECT
v1.*
FROM
(
-- randomly divide members of the population into subgroups based on target classes
SELECT a.*,
row_number() OVER (partition by {target column} ORDER BY ORA_HASH({case id column})) "_partition_caseid"
FROM {input data} a
) v1,
(
-- get the count of subgroups based on target classes
SELECT {target column},
COUNT(*) "_partition_target_cnt"
FROM {input data} GROUP BY {target column}
) v2
WHERE v1.{target column} = v2.{target column}
-- random sample subgroups based on target classes in respect to the sample size
AND ORA_HASH(v1."_partition_caseid", v2."_partition_target_cnt"-1, 0) > (v2."_partition_target_cnt" * {percent of training dataset} / 100)
The following describes the placeholders used in the SQL statements:
{target column} - the target column. It must be of a categorical type.
{case id column} - the case id column. It must contain unique numbers that identify the rows.
{input data} - the input data set.
{percent of training dataset} - the percentage of the input dataset to place in the training dataset. For example, if you want to split 60% of the input dataset into the training dataset, use the value 60. The test dataset will then contain 100% - 60% = 40% of the input dataset. The training and test datasets are mutually exclusive.
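As an aside, here is a minimal sketch of the same stratified-split idea outside the database, using Python and pandas. This is purely illustrative: the library choice, function name, and example column name are assumptions, and the Data Miner workflow itself relies on the SQL templates above.

import pandas as pd

def stratified_split(df, target_col, train_pct, seed=0):
    # Sample train_pct percent of the rows within each target class so the
    # class distribution is preserved in the training dataset.
    train = (df.groupby(target_col, group_keys=False)
               .apply(lambda g: g.sample(frac=train_pct / 100.0, random_state=seed)))
    # Rows not sampled into the training dataset form the test dataset;
    # the two are mutually exclusive, mirroring the SQL above.
    test = df.drop(train.index)
    return train, test

# Example: a 60/40 split on a customer table with categorical target BUY_INSURANCE
# train_df, test_df = stratified_split(customers, "BUY_INSURANCE", 60)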
The random split is used internally by the Regression Build node because the target is usually of numerical type. The following shows the SQL statements that are essentially used by the Regression Build node to produce the training and test datasets:
SQL statement for Training dataset
SELECT
v1.*
FROM
{input data} v1
WHERE ORA_HASH({case id column}, 99, 0) <= {percent of training dataset}
SQL statement for Test dataset
SELECT
v1.*
FROM
{input data} v1
WHERE ORA_HASH({case id column}, 99, 0) > {percent of training dataset}
The following describes the placeholders used in the SQL statements:
{case id column} - the case id column. It must contain unique numbers that identify the rows.
{input data} - the input data set.
{percent of training dataset} - the percentage of the input dataset to place in the training dataset. For example, if you want to split 60% of the input dataset into the training dataset, use the value 60. The test dataset will then contain 100% - 60% = 40% of the input dataset. The training and test datasets are mutually exclusive.
Assume you want to create the training and test datasets out of the demo INSUR_CUST_LTV_SAMPLE dataset using the stratified split technique. You can create the following workflow, which uses SQL Query nodes to execute the above split SQL statements and Create Table nodes to persist the resulting datasets.
Assume the case id is CUSTOMER_ID, the target is BUY_INSURANCE, and the training dataset is 60% of the input dataset. You can enter the following SQL statement to create the training dataset in the “SQL Query Stratified Training” SQL Query node:
SELECT
v1.*
FROM
(
-- randomly divide members of the population into subgroups based on target classes
SELECT a.*,
row_number() OVER (partition by "BUY_INSURANCE" ORDER BY ORA_HASH("CUSTOMER_ID")) "_partition_caseid"
FROM "INSUR_CUST_LTV_SAMPLE_N$10009" a
) v1,
(
-- get the count of subgroups based on target classes
SELECT"BUY_INSURANCE",
COUNT(*)"_partition_target_cnt"
FROM"INSUR_CUST_LTV_SAMPLE_N$10009" GROUP BY "BUY_INSURANCE"
) v2
WHERE v1."BUY_INSURANCE" = v2."BUY_INSURANCE"
-- random sample subgroups based on target classes in respect to the sample size
AND ORA_HASH(v1."_partition_caseid", v2."_partition_target_cnt"-1, 0) <= (v2."_partition_target_cnt" * 60 / 100)
Likewise, you can enter the following SQL statement to create the test dataset in the “SQL Query Stratified Test” SQL Query node:
SELECT
v1.*
FROM
(
-- randomly divide members of the population into subgroups based on target classes
SELECT a.*,
row_number() OVER (partition by "BUY_INSURANCE" ORDER BY ORA_HASH("CUSTOMER_ID")) "_partition_caseid"
FROM "INSUR_CUST_LTV_SAMPLE_N$10009" a
) v1,
(
-- get the count of subgroups based on target classes
SELECT"BUY_INSURANCE",
COUNT(*)"_partition_target_cnt"
FROM"INSUR_CUST_LTV_SAMPLE_N$10009" GROUP BY "BUY_INSURANCE"
) v2
WHERE v1."BUY_INSURANCE" = v2."BUY_INSURANCE"
-- random sample subgroups based on target classes in respect to the sample size
AND ORA_HASH(v1."_partition_caseid", v2."_partition_target_cnt"-1, 0) > (v2."_partition_target_cnt" * 60 / 100)
Now run the workflow to create the training and test datasets. You can find the table names of the persisted datasets in the associated Create Table nodes.
This blog shows how easy it is to create training and test datasets using the stratified split SQL statements via the SQL Query nodes. Similarly, you can generate training and test datasets with the random split technique by replacing the SQL statements in the SQL Query nodes in the above workflow with the random split SQL statements. If a large dataset (tens of millions of rows) is used in multiple model build nodes, it may be a good idea to split the data ahead of time to optimize the overall processing time (avoiding multiple internal data splits inside the model build nodes).
For the 8th consecutive year, Oracle is a Leader in Gartner’s Magic Quadrant for Business Intelligence and Analytics Platforms. Gartner declares that “the BI and analytics platform market is in the middle of an accelerated transformation from Business Intelligence (BI) systems used primarily for measurement and reporting to those that also support analysis, prediction, forecasting and optimization.” Oracle offers all these wide-ranging capabilities across Business Intelligence Foundation Suite, Advanced Analytics, and Real-Time Decisions.
Gartner specifically recognizes Oracle as a Leader for several key reasons. Oracle customers reported among the largest BI deployments in terms of users and data sizes. In fact, 69% of Oracle customers stated that Oracle BI is their enterprise BI standard. The broad product suite works with many heterogeneous data sources for large-scale, multi-business-unit and multi-geography deployments. The BI integration with Oracle Applications and technology, and with Oracle Hyperion EPM, simplifies deployment and administration. Not cited in the Gartner report is that Oracle BI can access and query Hadoop via a Hive Oracle Database Connector, eliminating the need to write MapReduce programs and enabling more efficient big data analysis.
“The race is on to fill the gap in governed data discovery,” professes Gartner. In this year’s MQ, all the Leaders have been moved “westward,” to the left, to open up white space in the future for vendors who address “governed data discovery” platforms that meet both business users’ requirements for ease of use and enterprises’ IT-driven requirements, like security, data quality, and scalability. Although in Gartner’s view no single vendor provides governed data discovery today, Oracle Endeca Information Discovery 3.1, which became available in November 2013 after Gartner conducted the MQ report, is a complete enterprise data discovery platform that combines information of any type, from any source, empowering business user independence in balance with IT governance. Users can mash up personal data along with IT-provisioned data into easy-to-use visualizations to explore what matters most to them. IT can manage the platform to meet data quality, scalability, and security requirements. Users can benefit from additional subject areas and metadata provided by integration with Oracle BI.
Gartner additionally cites other Oracle strengths, such as more than 80 packaged BI Analytic Applications that include pre-built data models, ETL scripts, reports, and dashboards, along with best-practice, cross-functional analytics that span dozens of business roles and industries. Lastly, Oracle’s large, global network of BI application partners, implementation consultants, and customer install base provides a collaborative environment to grow and innovate with BI and analytics. Gartner also cites the large uptake in Oracle BI Mobile, enabling business users to develop and deliver content on the go.
Why should you attend?
ACS services are complementary to your capabilities as you may:
ACS is complementary to services provided by partners.
Oracle VM 3.3 beta is now available. The beta software and documentation are available here.
Please read the Welcome Letter to understand the requirements of the beta testing. We rely on our Beta Program participants to provide feedback on the usability, stability, and overall quality of our product release. This feedback will focus on your experience with the new features, product documentation, support services, and training materials. By providing in-depth feedback, you can help influence Oracle’s product direction.
To learn more about Oracle's virtualization solutions, visit http://oracle.com/virtualization
I was working with some data which was stored in an Oracle database on a SPARC T4 server. I thought that the data had a quadratic component and I wanted to analyze the data using SQL Developer and Oracle Data Mining, a component of the Oracle Advanced Analytics Option. When I reviewed the initial analysis, I wasn't getting results that I had expected, and the fit of the model wasn't very good. I decided to feed some simple, synthetic quad data into Oracle Data Miner to ensure that I was using the tool properly.
Oracle R Enterprise was used as the tool to create and view the synthetic data.
From an R session that has the Oracle R Enterprise package installed, it is easy to access an Oracle Database:
require(ORE)
## Loading required package: ORE
## Loading required package: OREbase
##
## Attaching package: 'OREbase'
##
## The following object(s) are masked from 'package:base':
##
##     cbind, data.frame, eval, interaction, order, paste, pmax,
##     pmin, rbind, table
##
## Loading required package: OREstats
## Loading required package: MASS
## Loading required package: OREgraphics
## Loading required package: OREeda
## Loading required package: OREdm
## Loading required package: lattice
## Loading required package: OREpredict
## Loading required package: ORExml
ore.connect("SCOTT", "orcl", "sparc-T4", "TIGER", 1521)
## Loading required package: ROracle
## Loading required package: DBI
The following R function, quad1(), is used to calculate "y=ax^2 + bx + c",
where:
- the data frame that is passed in has a column of x values.
- a is in coefficients[feature, 1]
- b is in coefficients[feature, 2]
- c is in coefficients[feature, 3]
The function will simply calculate points along a parabolic line and is more complicated than it needs to be. I will leave it in this complicated format so that I can extend it to work with more interesting functions, such as a parabolic surface, later.
quad1 <- function(df, coefficients) {
feature <- 1
coefficients[feature, 1] * df[, feature] * df[, feature] +
coefficients[feature, 2] * df[, feature] +
coefficients[feature, 3]
}
The following function, genData(), creates random "x" data points and uses func() to calculate the y values that correspond to the random x values.
genData <- function(nObservations, func, coefficients, nFeatures, scale) {
    dframe <- data.frame(x1 = rep(1, nObservations))
    for (feature in seq(nFeatures)) {
        name <- paste("x", feature, sep = "")
        dframe[name] <- runif(nObservations, -scale[feature], scale[feature])
    }
    dframe["y"] <- func(dframe, coefficients)
    return(dframe)
}
The following function, quadGraph(), is used for graphing. The points in dframe are displayed in a scatter plot. The coefficients for the known synthetic data are passed in and the corresponding line is sketched in blue. (Obviously, if you aren't working with synthetic data, it is unlikely that you will know the "true" coefficients.) The R model that is the best estimate of the data based on regression is also passed in, and its fit is sketched in red.
quadGraph <- function(dframe, coefficients = NULL, model = NULL, ...) {
    with(dframe, plot(x1, y))
    title(main = "Quadratic Fit")
    legend("topright", inset = 0.05, c("True", "Model"), lwd = c(2.5, 2.5),
           col = c("blue", "red"))
    xRange <- range(dframe[, "x1"])
    smoothX <- seq(xRange[1], xRange[2], length.out = 50)
    trueY <- quad1(data.frame(smoothX), coefficients)
    lines(smoothX, trueY, col = "blue")
    new <- data.frame(x1 = smoothX)
    y_estimated <- predict(model, new)
    lines(smoothX, y_estimated, col = "red")
}
Here are the settings that will be used.
nFeatures <- 1       # one feature can sketch a line, 2 a surface, ...
nObservations <- 20  # how many rows of data to create for modeling
degree <- 2          # 2 is quadratic, 3 is cubic, etc.
set.seed(2)          # I'll get the same coefficients every time I run
coefficients <- matrix(rnorm(nFeatures * (degree + 1)), nFeatures, degree + 1)
scale <- (10^rpois(nFeatures, 2)) * rnorm(nFeatures, 3)
Here, synthetic data is created that matches the quadratic function and the random coefficients.
modelData <- genData(nObservations, quad1, coefficients, nFeatures, scale)
We can make this exercise at least slightly more realistic by adding some irreducible error for the regression algorithm to deal with. Add noise.
yRange <- range(modelData[, "y"])
yJitter <- (yRange[2] - yRange[1])/10
modelData["y"] <- modelData["y"] + rnorm(nObservations, 0, yJitter)
Great. At this point I have good quadratic synthetic data which can be analyzed. Feed the synthetic data to the Oracle Database.
oreDF <- ore.push(modelData)
tableName <- paste("QuadraticSample_", nObservations, "_", nFeatures, sep = "")
ore.drop(table = tableName)
ore.create(oreDF, table = tableName)
The Oracle R Enterprise function to fit the linear model works as expected.
m = ore.lm(y ~ x1 + I(x1 * x1), data = oreDF)
summary(m)
##
## Call:
## ore.lm(formula = y ~ x1 + I(x1 * x1), data = oreDF)
##
## Residuals:
##    Min     1Q Median     3Q    Max
## -2.149 -0.911 -0.156  0.888  1.894
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)   1.3264     0.4308    3.08   0.0068 **
## x1           -0.0640     0.1354   -0.47   0.6428
## I(x1 * x1)   -0.8392     0.0662  -12.68  4.3e-10 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.28 on 17 degrees of freedom
## Multiple R-squared: 0.912, Adjusted R-squared: 0.901
## F-statistic: 87.7 on 2 and 17 DF, p-value: 1.1e-09
coefficients
##         [,1]   [,2]  [,3]
## [1,] -0.8969 0.1848 1.588
Notice that the "true" coefficients, that were used to create the synthetic data are close to the values from the regression. For example, the true "a" is stored in coefficients[1,1] = -0.8969 and is close to the model's I(x1 * x1) = -0.8392. Not bad given that the model was created from only 20 data points.
quadGraph(modelData, coefficients, m)
The 20 data points, which were calculated from the "true" equation but with noisy irreducible error added, are shown in the graph. The model, estimated by ore.lm() from the 20 noisy data points, is close to the true line.
At this point, my job is either complete or just getting started, depending on your perspective. I'm happy that ore.lm() does a nice job of fitting, so maybe I'm done. But if you remember that my initial goal was to validate that SQL Developer and Oracle Data Miner work with quadratic data, my job has just begun. Now that I have known-good quadratic synthetic data in the database, I'll use SQL Developer and Oracle Data Mining to verify that everything is fine.
One more step in R. Create a second Oracle Database table that will be used to test the regression model.
testData <- genData(nObservations, quad1, coefficients, nFeatures, scale)
oreTestData <- ore.push(testData)
tableName <- paste("QuadraticTest_", nObservations, "_", nFeatures, sep = "")
ore.drop(table = tableName)
ore.create(oreTestData, table = tableName)
Here is the SQL Developer workflow that will be used. The synthetic data is in the Oracle Database table "QuadraticSample_20_1". The "Regress Build" node will run linear regression on the synthetic data. The test data, which was generated using R in the previous paragraph, is stored in an Oracle Database table named "QuadraticTest_20_1". The Apply node will take the regression model that has been created, apply it to the "x1" values from the test data, and store the resulting y values in an Oracle Database table named "QUADTESTRESULTS".
So how did it work? A PhD in statistics would quickly tell you, "not well", and might look at you like you're an idiot if you don't know that a Model F Value Statistic of 3.25 isn't good. My more pedestrian approach is to plot the results of applying the model to the test data.
Pull the test result data into R for viewing:
ore.sync()
ore.attach()
testResults <- ore.pull(QUADTESTRESULTS)
## Warning: ORE object has no unique key - using random order
colnames(testResults)[1] <- "y"
with(testResults, plot(x1, y))
title(main = "Results of Applying Model to Test Data")
Hmm, that doesn't look parabolic to me:
Now that I'm quite sure that SQL Developer and Oracle Data Mining aren't giving the expected fit, check through the advanced settings:
There it is!!
Set the feature generation to use quadratic candidates, re-run the model, and bring the new results back into R:
And your statistician friends will be happy because the new model has a Model F Value Statistic of 124. Exciting, right?
Now, off to work on parabolic surfaces...
Today I have some great news to share with you.
Oracle has put together an article on the background, objectives, and results of Chiyoda Corporation's (千代田化工建設) adoption of Oracle Primavera for their GBM (Global Business Management) system, which went into service in April 2013.
From the three people featured in the article, we were able to draw out valuable insights on how data management should support project-based businesses going forward, the IT requirements demanded by global business, and how to meet them.
We hope you will take a look at the article and find it useful in building competitive advantage in the global market.
The article can be downloaded and read from here.
Please note that this article is an excerpt from "PROFIT JAPAN Volume 22, February 2014", published by Oracle. We also encourage you to read PROFIT magazine; for more information about PROFIT, please contact your Oracle sales representative or visit our website.
In many scenarios, such as in a call center, business users want to be alerted if a KPI threshold has not yet been violated but is at risk because the KPI has been trending up. Oracle Business Process Management's BAM Composer allows non-technical business users to create temporal BAM queries, including trending measures. Through simple point-and-click selection in a browser window, BAM Composer automatically generates the complex CQL statement implementing the business query. It also supports the creation of mashups combining strategic BI data with operational BAM data and external sources - read more here.
For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
Blog | Twitter | LinkedIn | Facebook | Wiki | Mix | Forum