
Top 10 Oracle Solaris How To Articles


Recording Available: November 2012 Quarterly Customer Update Webcast


Missed the recent Quarterly Customer Update Webcast? We presented the product roadmaps for WebCenter Portal, WebCenter Content and WebCenter Sites.

VIEW WEBCAST RECORDING:
Access the webcast recording and presentation by going to:
My Oracle Support Site Note: 568127.1

We'll announce the next Quarterly Customer Update Webcast here on the WebCenter Alerts blog.

Configuring UCM content cache invalidation for a custom portal application


Recently, I blogged about enabling the UCM content cache invalidator for Spaces (found here).  This can also be enabled for a WebCenter Custom Portal application.  The often-overlooked setting is made through the Content Repository connection definition in the JDeveloper Application Resources section.

 

Enabling the cache invalidator "sweeper" can be invaluable in scenarios where UCM content is updated from within UCM (the console) rather than from within the portal.

EBS Applications Technology Group (ATG) Advisor Webcast December 2012



Invitation : Advisor Webcast December 2012

For December 2012 we have scheduled an Advisor Webcast that takes a closer look at invalid objects in an E-Business Suite environment.


E-Business Suite - Troubleshooting invalid objects

Agenda :
  • Introduction
  • Activities that generate invalid objects
  • EBS Architecture
  • EBS Patching Concepts
  • Troubleshooting Invalid Objects
  • References

EMEA Session :
Tuesday December 11th, 2012 at 09:00 AM UK / 10:00 AM CET / 13:30 India / 17:00 Japan / 18:00 Australia

Details & Registration : Doc ID 1501696.1
Direct link to register in WebEx


US Session :
Wednesday December 12th, 2012 at 18:00 UK / 19:00 CET / 10:00 AM Pacific / 11:00 AM Mountain/ 01:00 PM Eastern

Details & Registration : Doc ID 1501697.1
Direct link to register in WebEx


If you have any questions about the schedules, or a suggestion for a future Advisor Webcast, please send an e-mail to Ruediger Ziegler.

Finding "Stuff" In OUM


One of the first questions people ask when they start using the Oracle Unified Method (OUM) is “how do I find X?”

Of course, no one is really looking for “X”! Typically an OUM user knows the Task ID, or part of the Task Name, or maybe they just want to find out whether there is any content within OUM related to a couple of keywords they have in mind.

Here are three quick tips I give people:

1. Open one of the OUM Views, click “Expand All”, and then use your browser’s search function to locate a keyword.

For example, in Google Chrome or Internet Explorer: <CTRL>+F, then type a keyword, e.g. Architecture

This is a fast and easy option, but it only searches the current OUM page.

2. Use the PDF view of OUM

Open one of the OUM Views, then click the PDF View button located at the top of the View. Depending on your browser’s settings, the PDF file will either open in a new window or be saved to your local machine. In either case, once the PDF file is open, you can use the built-in PDF search commands to search for keywords across a large portion of the OUM Method Pack.

This is a great option for searching the entire Full Method View of OUM, including linked HTML pages; however, the search will not include linked documents, e.g. Word or Excel files.

3. Use your operating system's file index to search for keywords

This is my favorite option, and one I use virtually every day. I happen to use Windows Search, but you could also use Google Desktop Search, or Finder on a Mac.

All you need to do (on a Windows machine) is make sure your local OUM folder structure is included in the Windows index. Go to Control Panel, select Indexing Options, and ensure your OUM folder is included in the index, e.g. C:/METHOD/OM40/OUM_5.6

Once your OUM folders are indexed, just open Windows Search (or Google Desktop Search) and type in your keywords, e.g. Unit Testing

The reason I use this option the most is that the search takes place across the entire content of the indexed folders, including linked files.
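If no desktop search tool is available, a folder-wide keyword search can also be approximated with a short script. This is an illustrative sketch only (the folder path and keywords in the comment are placeholders, not part of OUM): it walks a directory tree and reports files containing all the keywords.

```python
import os

def search_keywords(root, keywords):
    """Walk a folder tree and return files whose text contains every keyword."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    text = f.read().lower()
            except OSError:
                continue  # unreadable file: skip it
            if all(k.lower() in text for k in keywords):
                hits.append(path)
    return hits

# e.g. search_keywords("C:/METHOD/OM40/OUM_5.6", ["Unit", "Testing"])
```

Unlike an index, this rescans every file on each query, so it is slower, and it only sees plain-text content such as HTML, not binary Word or Excel files.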

Happy searching!

It could be worse....


As "guest" pointed out, in my file I/O test I didn't open the file with O_SYNC, so in fact the time was spent in OS code rather than in disk I/O. It's a straightforward change to add O_SYNC to the open() call, but it's also useful to reduce the iteration count - since the cost per write is much higher:

...
#define SIZE 1024

void test_write()
{
  starttime();
  int file = open("./test.dat",O_WRONLY|O_CREAT|O_SYNC,S_IWGRP|S_IWOTH|S_IWUSR);
...

Running this gave the following results:

Time per iteration   0.000065606310 MB/s
Time per iteration   2.709711563906 MB/s
Time per iteration   0.178590114758 MB/s

Yup, disk I/O is way slower than the original I/O calls. However, it's not a very fair comparison, since disks get written in large blocks of data and we're deliberately sending a single byte. A fairer measure is the number of I/O operations per second, which is about 65 - pretty much what I'd expect for this system.
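The arithmetic behind that figure is worth making explicit: since each write() pushes exactly one byte, the reported throughput converts directly into operations per second. A quick sketch, assuming the tool reports decimal megabytes:

```python
def iops_from_throughput(mb_per_s, bytes_per_op=1):
    """Convert a throughput figure (decimal MB/s) into I/O operations per second."""
    return mb_per_s * 10**6 / bytes_per_op

# 0.000065606310 MB/s of 1-byte writes works out to roughly 65 writes/s
print(int(iops_from_throughput(0.000065606310)))
```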

It's also interesting to examine the profiles for the two cases. When the write() was trapping into the OS, the profile indicated that all the time was being spent in system. When the data was being written to disk, the time got attributed to sleep. This gives us an indication of how to interpret profiles from apps doing I/O: it's the sleep time that indicates disk activity.

Basic is Best


Fellow foodies will recognize the recent movement towards "farm-to-table" restaurants. These venues attempt to simplify their menus and source ingredients as close to the source as possible. I had the opportunity to dine at such a restaurant the other evening. I was gushing about the appetizer to my server when she described the preparation for the item and then punctuated her comments with "basic is best". I reminded my fellow enterprise architect diners there was an architecture lesson in that statement. They rolled their eyes and chuckled. But they also knew I was right.

I'm reminded of Frederick Brooks' book The Mythical Man-Month and his latest, The Design of Design. The former, a must-read, talks about complexity. But he refrains from damning all complexity. The world we live in and the enterprises we strive to transform with enterprise architecture are complicated organisms, much like the human body. But sometimes a simple solution is the best approach. Fewer applications (think: portfolio rationalization). Fewer components. Fewer lines of code. Whatever level of abstraction you are working at, less is more.

I'm reminded of the enterprise architecture principle "Control Technical Diversity". At one firm I created pithy catch phrases for each principle. I named this one "Less is More". But perhaps another variation is what my server said the other night: "Basic is Best".

Retrieve Performance Data from SOA Infrastructure Database


My earlier blog posting shows how to enable, retrieve and interpret BPEL engine performance statistics to aid performance troubleshooting. The strength of the BPEL engine statistics in EM is their per-request breakdown. But there are some limitations with the BPEL performance statistics mentioned in that posting:

  • The statistics are stored in memory instead of being persisted. To avoid memory overflow, the data are kept in a buffer of limited size. When the number of entries exceeds the limit, old data are flushed out to make way for new statistics. The buffer can therefore only keep the last X entries; statistics from 5 hours ago may no longer be there.
  • The BPEL engine performance statistics only include latencies. They do not provide throughput.

Fortunately, Oracle SOA Suite runs with the SOA Infrastructure database, and a lot of performance data is naturally persisted there. It is coarser grained than the in-memory BPEL statistics, but it has its own strengths precisely because it is persisted.

Here I would like to offer examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11G to acquire performance statistics for a given period of time. You can run them immediately after you modify the date range to match your actual system.

1. Asynchronous/one-way messages incoming rates

The following query shows the number of messages sent to one-way/async BPEL processes during a given time period, organized by process name and state:

select composite_name composite, state, count(*) Count from dlv_message
       where receive_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
         and receive_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, state
      order by Count;

2. Throughput of BPEL process instances

The following query shows the number of synchronous and asynchronous process instances created during a given time period. It lists instances in all states, including unfinished and faulted ones. The results include all composites across all SOA partitions:

select state, count(*) Count, composite_name composite, component_name,componenttype from cube_instance
       where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
          and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
        group by composite_name, component_name, componenttype 
          order by count(*) desc;

3. Throughput and latencies of BPEL process instances

This query builds on the previous one, providing more comprehensive information. It gives not only throughput but also the maximum, minimum and average elapsed time of BPEL process instances:

select composite_name Composite, component_name Process, componenttype, state,
       count(*) Count,
      trunc(Max(extract(day    from (modify_date-creation_date))*24*60*60 + 
                extract(hour   from (modify_date-creation_date))*60*60 + 
                extract(minute from (modify_date-creation_date))*60 + 
                extract(second from (modify_date-creation_date))),4) MaxTime, 
      trunc(Min(extract(day    from (modify_date-creation_date))*24*60*60 + 
                extract(hour   from (modify_date-creation_date))*60*60 +  
                extract(minute from (modify_date-creation_date))*60 + 
                extract(second from (modify_date-creation_date))),4) MinTime, 
      trunc(AVG(extract(day    from (modify_date-creation_date))*24*60*60 +  
                extract(hour   from (modify_date-creation_date))*60*60 +  
                extract(minute from (modify_date-creation_date))*60 + 
                extract(second from (modify_date-creation_date))),4) AvgTime       
       from cube_instance
       where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
          and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
        group by composite_name, component_name, componenttype, state
          order by count(*) desc; 
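The nested extract() expressions above simply convert an Oracle INTERVAL into total seconds, truncated to four decimal places. As a sanity check, the same arithmetic can be written out directly; this is a sketch of the computation only, not part of the SQL:

```python
import math

def interval_seconds(days, hours, minutes, seconds):
    """Total seconds, mirroring day*24*60*60 + hour*60*60 + minute*60 + second."""
    return days * 24 * 60 * 60 + hours * 60 * 60 + minutes * 60 + seconds

def trunc4(x):
    """Mimic Oracle's trunc(x, 4): drop everything past four decimal places."""
    return math.trunc(x * 10**4) / 10**4

# An instance modified 1 minute and 2.34567 seconds after creation:
print(trunc4(interval_seconds(0, 0, 1, 2.34567)))
```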

 

4. Combine all together

Now let's combine these three queries and parameterize the start and end timestamps to make the script a bit more robust. The following script prompts for the start and end time before querying the database:

accept startTime prompt 'Enter start time (YYYY-MM-DD HH24:MI:SS)'
accept endTime   prompt 'Enter end time (YYYY-MM-DD HH24:MI:SS)'

Prompt "==== Rejected Messages ====";
REM 2012-10-24 21:00:00
REM 2012-10-24 21:59:59
select count(*), composite_dn from rejected_message
       where created_time >= to_timestamp('&&StartTime','YYYY-MM-DD HH24:MI:SS')
         and created_time <= to_timestamp('&&EndTime','YYYY-MM-DD HH24:MI:SS')
    group by composite_dn;
Prompt "";
Prompt "==== Throughput of one-way/asynchronous messages ====";
select state, count(*) Count, composite_name composite from dlv_message
       where receive_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
         and receive_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, state
      order by Count;

Prompt "";
Prompt "==== Throughput and latency of BPEL process instances ===="
select state,
       count(*) Count,
      trunc(Max(extract(day    from (modify_date-creation_date))*24*60*60 + 
                extract(hour   from (modify_date-creation_date))*60*60 + 
                extract(minute from (modify_date-creation_date))*60 + 
                extract(second from (modify_date-creation_date))),4) MaxTime, 
      trunc(Min(extract(day    from (modify_date-creation_date))*24*60*60 + 
                extract(hour   from (modify_date-creation_date))*60*60 +  
                extract(minute from (modify_date-creation_date))*60 + 
                extract(second from (modify_date-creation_date))),4) MinTime, 
      trunc(AVG(extract(day    from (modify_date-creation_date))*24*60*60 +  
                extract(hour   from (modify_date-creation_date))*60*60 +  
                extract(minute from (modify_date-creation_date))*60 + 
                extract(second from (modify_date-creation_date))),4) AvgTime,
       composite_name Composite, component_name Process, componenttype
       from cube_instance
       where creation_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
          and creation_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
        group by composite_name, component_name, componenttype, state
          order by count(*) desc; 

 


CVE-2012-0882 Buffer Overflow vulnerability in yaSSL

CVE Description: CVE-2012-0882 Buffer overflow vulnerability
CVSSv2 Base Score: 7.5
Component: yaSSL
Products and Resolutions:
  • MySQL 5.1 - fixed in 5.1.62
  • MySQL 5.5 - fixed in 5.5.22

This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions.
Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

Configure Oracle SOA JMSAdatper to Work with WLS JMS Topics

A WebLogic JMS Topic typically runs in a WLS cluster, and so do the SOA composites that receive its messages. In some situations the two clusters are the same, while in others they are separate. The composites in the SOA cluster are subscribers to the JMS Topic in the WebLogic cluster. Since a JMS Topic by nature delivers the same copy of a message to all of its subscribers, two questions arise immediately when it comes to load balancing the JMS Topic messages across the SOA composites:

  1. How to assure all of the SOA cluster members receive different messages instead of the same (duplicate) messages, even though the SOA cluster members are all subscribers to the Topic?
  2. How to make sure the messages are evenly distributed (load balanced) to SOA cluster members?

Here we will walk through how to configure the JMS Topic, the JmsAdapter connection factory, and the composite so that JMS Topic messages are evenly distributed to the same composite running on different SOA cluster nodes, without duplication.

1. The typical configuration

In this typical configuration, we achieve load balancing of JMS Topic messages to JmsAdapters by configuring a partitioned distributed topic (PDT) along with sharable subscriptions. You can reference the documentation for an explanation of PDTs, and this blog posting does a very good job of visually explaining how this combination of configurations achieves message load balancing among clients of JMS Topics.

Our job is to apply this configuration in the context of SOA JMS Adapters. To do so would involve the following steps:
  • Step A. Configure the JMS Topic to be a UDD and PDT, at the WebLogic cluster that houses the JMS Topic
  • Step B. Configure the JCA Connection Factory with the proper FactoryProperties, at the SOA cluster
  • Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file)

Here are more details of each step:

Step A. Configure the JMS Topic to be a UDD and PDT

You do this at the WebLogic cluster that houses the JMS Topic.

You can follow the instructions in the Administration Console Online Help to create a Uniform Distributed Topic. In the WebLogic Console, at the same administration screen, you can set "Distribution Type" to "Uniform" and the Forwarding Policy to "Partitioned", which makes the JMS Topic a Uniform Distributed Destination and a Partitioned Distributed Topic, respectively.




Step B: Configure the FactoryProperties of the JCA Connection Factory

You do this step at the SOA cluster.

This step makes the JmsAdapter that connects to the JMS Topic through this JCA Connection Factory behave as a certain type of "client".

When you configure the JCA Connection Factory for the JmsAdapter, you define the list of properties in the FactoryProperties field, as a semicolon-separated list:

ClientID=myClient;ClientIDPolicy=UNRESTRICTED;SubscriptionSharingPolicy=SHARABLE;TopicMessageDistributionAll=false

You can refer to Chapter 8.4.10, "Accessing Distributed Destinations (Queues and Topics) on the WebLogic Server JMS", of the Adapter User Guide for the meaning of these properties.

Please note:
  • Except for ClientID, the other properties (ClientIDPolicy=UNRESTRICTED, SubscriptionSharingPolicy=SHARABLE and TopicMessageDistributionAll=false) are all default settings for the JmsAdapter's connection factory, so you do NOT have to specify them explicitly. All you need to do is specify the ClientID.
  • The ClientID is different from the subscriber ID that we discuss in the later steps. To keep it simple, just remember that you need to specify the client ID and make it unique per connection factory.
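To see what the adapter actually receives, the semicolon-separated FactoryProperties string can be split into key/value pairs. A small illustrative sketch of that parsing (not Oracle code; the adapter does this internally):

```python
def parse_factory_properties(props):
    """Split a semicolon-separated 'key=value' list into a dict."""
    return dict(item.split("=", 1) for item in props.split(";") if item)

props = parse_factory_properties(
    "ClientID=myClient;ClientIDPolicy=UNRESTRICTED;"
    "SubscriptionSharingPolicy=SHARABLE;TopicMessageDistributionAll=false")
# Only ClientID is non-default; the other three repeat the adapter's defaults.
```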
Here is the example setting:







Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file)

In the following example, the value 'MySubscriberID-1' was given as the value of property 'DurableSubscriber':
    <adapter-config name="subscribe" adapter="JMS Adapter" wsdlLocation="subscribe.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
      <connection-factory location="eis/wls/MyTestUDDTopic" UIJmsProvider="WLSJMS" UIConnectionName="ateam-hq24b"/>
      <endpoint-activation portType="Consume_Message_ptt" operation="Consume_Message">
        <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
          <property name="DurableSubscriber" value="MySubscriberID-1"/>
          <property name="PayloadType" value="TextMessage"/>
          <property name="UseMessageListener" value="false"/>
          <property name="DestinationName" value="jms/MyTestUDDTopic"/>
        </activation-spec>
      </endpoint-activation>
    </adapter-config>

    You can set the durable subscriber name either in the composite's JmsAdapter wizard, or by directly editing the JmsAdapter's *.jca file within the composite project.


    2. The "atypical" configurations:

    For some systems, there may be restrictions that do not allow the aforementioned "typical" configuration to be applied. For example, some deployments may be required to configure the JMS Topic as a Replicated Distributed Topic rather than a Partitioned Distributed Topic. We discuss those scenarios here:

    Configuration A: The JMS Topic is NOT PDT

    In this case, you need to define the message selector 'NOT JMS_WL_DDForwarded' in the adapter's *.jca file, to filter out those "replicated" messages.

    Configuration B. The ClientIDPolicy=RESTRICTED

    In this case, you need separate connection factories for different composites. More accurately, you need a separate connection factory for each JmsAdapter *.jca file.

    References:

    Using BPEL Performance Statistics to Diagnose Performance Bottlenecks


    Tuning the performance of Oracle SOA 11G applications can be challenging. Because SOA is a platform for building composite applications that connect many applications and "services", when the overall performance is slow the bottleneck could be anywhere in the system: the applications/services that SOA connects to, the infrastructure database, or the SOA server itself. Quickly identifying the bottleneck becomes crucial to tuning the overall performance.

    Fortunately, the BPEL engine in Oracle SOA 11G (and 10G, for that matter) collects BPEL Engine Performance Statistics, which show the latencies of low level BPEL engine activities. The BPEL engine performance statistics can make it a bit easier for you to identify the performance bottleneck.

    Although the BPEL engine performance statistics are always available, the access to and interpretation of them are somewhat obscure in the early and current (PS5) 11G versions.

    This blog attempts to offer instructions that help you enable, retrieve and interpret the performance statistics, until future versions provide a more pleasant user experience.

    Overview of BPEL Engine Performance Statistics 

    SOA BPEL has a feature for collecting performance statistics and storing them in memory.

    One MBean attribute, StatLastN, configures the size of the memory buffer that stores the statistics. This memory buffer is a "moving window": old statistics are flushed out by new ones when the amount of data exceeds the buffer size. Since the buffer size is limited by StatLastN, the impact of statistics collection on performance is minimal. By default StatLastN=-1, which means no collection of performance data.
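The "moving window" behavior described above is essentially that of a fixed-size ring buffer. A minimal illustrative sketch of the idea (not the engine's actual implementation):

```python
from collections import deque

class StatBuffer:
    """Keep only the last N statistic entries, like the StatLastN MBean setting."""
    def __init__(self, last_n):
        self.entries = deque(maxlen=last_n)

    def add(self, entry):
        # When the buffer is full, the oldest entry is silently dropped.
        self.entries.append(entry)

buf = StatBuffer(3)
for sample in ["t1", "t2", "t3", "t4", "t5"]:
    buf.add(sample)
# Only the last three samples remain; "t1" and "t2" have been flushed out.
```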

    Once the statistics are collected in the memory buffer, they can be retrieved via another MBean: oracle.as.soainfra.bpel:Location=[Server Name],name=BPELEngine,type=BPELEngine

    My friend in Oracle SOA development wrote a simple 'bpelstat' web app that looks up and retrieves the performance data from the MBean and displays it in a human-readable form. It does not have a beautiful UI, but it is fairly useful.

    Although from Oracle SOA 11.1.1.5 onwards the same statistics can be viewed via a more elegant UI under "request breakdown" at EM -> SOA Infrastructure -> Service Engines -> BPEL -> Statistics, some unsophisticated minds like mine may still prefer the simplicity of the 'bpelstat' JSP. One thing that simple JSP does do well is that you can save the page and send it to someone for further analysis.

    Following are the instructions for how to install and invoke the BPEL statistics JSP. My friend in SOA Development will soon blog about interpreting the statistics. Stay tuned.

    Step 1: Enable BPEL Engine Statistics for Each SOA Server via Enterprise Manager

    First you need to set StatLastN to some number as a way to enable the collection of BPEL Engine Performance Statistics:

    • EM Console -> soa-infra(Server Name) -> SOA Infrastructure -> SOA Administration -> BPEL Properties
    • Click on "More BPEL Configuration Properties"
    • Click on the attribute "StatLastN" and set its value to some integer. Typically you want to set it to 1000 or more.

    Step 2: Download and Deploy the bpelstat.war File to the Admin Server

    Note: the WAR file contains a JSP that does NOT have any security restrictions. You do NOT want to keep it on your production server for long, as it is a security hazard. Deactivate the WAR once you are done.
    • Download the bpelstat.war to your local PC
    • At WebLogic Console, Go to Deployments -> Install
    • Click on the "upload your file(s)"
    • Click the "Browse" button to upload the deployment to Admin Server
    • Accept the uploaded file as the path, click next
    • Check the default option "Install this deployment as an application"
    • Check "AdminServer" as the target server
    • Finish the rest of the deployment with default settings


    • Console -> Deployments
    • Check the box next to "bpelstat" application
    • Click on the "Start" button. It will change the state of the app from "prepared" to "active"

    Step 3: Invoke the BPEL Statistic Tool


    • The bpelstat tool merely calls the MBean of the BPEL server, then collects and displays the in-memory performance statistics. You usually want to do this after some peak load.
    • Go to http://<admin-server-host>:<admin-server-port>/bpelstat
    • Enter the correct admin hostname, port, username and password
    • Enter the SOA Server Name from which you want to collect the performance statistics. For example, SOA_MS1, etc.
    • Click Submit
    • Keep doing the same for all SOA servers.

    Step 4: Interpret the BPEL Engine Statistics

    You will see a few categories of BPEL Statistics from the JSP Page.

    First it starts with the overall latency of BPEL processes, grouped by synchronous and asynchronous processes. Then it provides a further breakdown of the measurements through the lifetime of a BPEL request, which is called the "request breakdown".

    1. Overall latency of BPEL processes

    The top of the page shows that the elapsed time of executing the synchronous process TestSyncBPELProcess from the composite TestComposite averages about 1543.21ms, while the elapsed time of executing the asynchronous process TestAsyncBPELProcess from the composite TestComposite2 averages about 1765.43ms. The maximum and minimum latencies are also shown.

    Synchronous process statistics
    <statistics>
        <stats key="default/TestComposite!2.0.2-ScopedJMSOSB*soa_bfba2527-a9ba-41a7-95c5-87e49c32f4ff/TestSyncBPELProcess" min="1234" max="4567" average="1543.21" count="1000">
        </stats>
    </statistics>



    Asynchronous process statistics
    <statistics>
        <stats key="default/TestComposite2!2.0.2-ScopedJMSOSB*soa_bfba2527-a9ba-41a7-95c5-87e49c32f4ff/TestAsyncBPELProcess" min="2234" max="3234" average="1765.43" count="1000">
        </stats>
    </statistics>


    2. Request break down

    Under the overall latency categorized by synchronous and asynchronous processes is the "Request breakdown". Organized by statistic keys, the Request breakdown gives finer-grained performance statistics through the lifetime of the BPEL requests. It uses indentation to show the hierarchy of the statistics.

    Request breakdown
    <statistics>
        <stats key="eng-composite-request" min="0" max="0" average="0.0" count="0">
            <stats key="eng-single-request" min="22" max="606" average="258.43" count="277">
                <stats key="populate-context" min="0" max="0" average="0.0" count="248">



    Please note that in SOA 11.1.1.6, the statistics under Request breakdown are aggregated across all the BPEL processes based on statistic keys; they do not differentiate between BPEL processes. If two BPEL processes happen to have statistics that share the same statistic key, the statistics from the two BPEL processes will be aggregated together. Keep this in mind as we go through more details below.


    2.1 BPEL process activity latencies

    A very useful measurement in the Request breakdown is the performance statistics of the BPEL activities you put in your BPEL processes: Assign, Invoke, Receive, etc. The names of the measurements in the JSP page come directly from the names you assign to each BPEL activity. These measurements are under the statistic key "actual-perform".

    Example 1: 
    Following is the measurement for the BPEL activity "AssignInvokeCreditProvider_Input", which looks like an Assign activity in a BPEL process that assigns an input variable before passing it to the invocation:


                                   <stats key="AssignInvokeCreditProvider_Input" min="1" max="8" average="1.9" count="153">
                                        <stats key="sensor-send-activity-data" min="0" max="1" average="0.0" count="306">
                                        </stats>
                                        <stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="153">
                                        </stats>
                                        <stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="306">
                                        </stats>
                                    </stats>

    Note: because, as previously mentioned, the statistics across all BPEL processes are aggregated together based on statistic keys, if two BPEL processes happen to name their Invoke activity the same, they will show up as one measurement (i.e. one statistic key).

    Example 2:
    Following is the measurement of the BPEL activity called "InvokeCreditProvider". You can see not only that on average it takes 3.31ms to finish this call (pretty fast), but also, from the further breakdown, that most of this 3.31ms was spent on the "invoke-service" step.

                                    <stats key="InvokeCreditProvider" min="1" max="13" average="3.31" count="153">
                                        <stats key="initiate-correlation-set-again" min="0" max="0" average="0.0" count="153">
                                        </stats>
                                        <stats key="invoke-service" min="1" max="13" average="3.08" count="153">
                                            <stats key="prep-call" min="0" max="1" average="0.04" count="153">
                                            </stats>
                                        </stats>
                                        <stats key="initiate-correlation-set" min="0" max="0" average="0.0" count="153">
                                        </stats>
                                        <stats key="sensor-send-activity-data" min="0" max="0" average="0.0" count="306">
                                        </stats>
                                        <stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="153">
                                        </stats>
                                        <stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="306">
                                        </stats>
                                        <stats key="update-audit-trail" min="0" max="2" average="0.03" count="153">
                                        </stats>
                                    </stats>
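Because the saved JSP output is plain XML, the <stats> tree can also be post-processed programmatically, for example to rank activities by average latency. A sketch using Python's standard library (the sample XML is abbreviated from the snippet above):

```python
import xml.etree.ElementTree as ET

def rank_by_average(xml_text):
    """Return (key, average) pairs from a <stats> tree, slowest first."""
    root = ET.fromstring(xml_text)
    entries = [(s.get("key"), float(s.get("average")))
               for s in root.iter("stats")]
    return sorted(entries, key=lambda kv: kv[1], reverse=True)

sample = ('<stats key="InvokeCreditProvider" min="1" max="13" average="3.31" count="153">'
          '<stats key="invoke-service" min="1" max="13" average="3.08" count="153"/>'
          '</stats>')
ranking = rank_by_average(sample)  # InvokeCreditProvider first, then invoke-service
```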



    2.2 BPEL engine activity latency

    Another type of measurement under Request breakdown is the latency of underlying system-level engine activities. These activities are not directly tied to a particular BPEL process or process activity, but they are critical factors in overall engine performance. They include the latency of saving asynchronous requests to the database and the latency of process dehydration.

    My friend Malkit Bhasin is working on providing more information on interpreting the statistics on engine activities on his blog (https://blogs.oracle.com/malkit/). I will update this blog once the information becomes available.

    Update on 2012-10-02: My friend Malkit Bhasin has published the detailed interpretation of the BPEL service engine statistics at his blog http://malkit.blogspot.com/2012/09/oracle-bpel-engine-soa-suite.html.

    How to Achieve OC4J RMI Load Balancing

    This is an old Oracle SOA and OC4J 10G topic. In fact, this is not even a SOA topic per se. Questions about RMI load balancing arise when you develop custom web applications that access human tasks running off a remote SOA 10G cluster. Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there is still some confusion in the field about how OC4J RMI load balancing works. Hence I decided to dust off an old tech note that I wrote a few years back and share it with the general public.

    Here is the tech note:

    Overview

    A typical use case in Oracle SOA is that you are building web based, custom human tasks UI that will interact with the task services housed in a remote BPEL 10G cluster. Or, in a more generic way, you are just building a web based application in Java that needs to interact with the EJBs in a remote OC4J cluster. In either case, you are talking to an OC4J cluster as RMI client. Then immediately you must ask yourself the following questions:

    1. How do I make sure that the web application, as an RMI client, evenly distributes its load across all the nodes in the remote OC4J cluster?

    2. How do I make sure that the web application, as an RMI client, is resilient to node failures in the remote OC4J cluster, so that in the unlikely case that one of the remote OC4J nodes fails, my web application will continue to function?

    That is the topic of this tech note: how to achieve load balancing with an OC4J RMI client.

    Solutions

    You need to configure and code RMI load balancing in two places:

    1. The provider URL can be specified as a comma-separated list of URLs, so that the initial lookup will land on one of the available URLs.

    2. Choose a proper value for the oracle.j2ee.rmi.loadBalance property, which, alongside the PROVIDER_URL property, is one of the JNDI properties passed to the JNDI lookup. (http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI)
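A minimal sketch of how these two settings come together in the JNDI environment passed to the initial context. The host names, instance name, application name, and credentials below are placeholders; the opmn:ormi URL format, the initial context factory class, and the oracle.j2ee.rmi.loadBalance property are as documented for OC4J 10g.

```java
import java.util.Hashtable;
import javax.naming.Context;

// Builds the JNDI environment an OC4J RMI client would pass to new InitialContext(env).
public class RmiEnvExample {

    public static Hashtable<String, String> buildEnv() {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "oracle.j2ee.rmi.RMIInitialContextFactory");
        // Comma-separated list of OPMN URLs: the initial lookup succeeds as long
        // as at least one of the listed OPMN servers is reachable.
        env.put(Context.PROVIDER_URL,
                "opmn:ormi://host1:6003:oc4j_instance1/appName,"
              + "opmn:ormi://host2:6003:oc4j_instance1/appName,"
              + "opmn:ormi://host3:6003:oc4j_instance1/appName");
        env.put(Context.SECURITY_PRINCIPAL, "oc4jadmin");   // placeholder credentials
        env.put(Context.SECURITY_CREDENTIALS, "welcome1");
        // "lookup" returns a new Context for a randomly-selected OC4J instance on
        // each Context.lookup() call -- suitable for a standalone client.
        env.put("oracle.j2ee.rmi.loadBalance", "lookup");
        return env;
    }

    public static void main(String[] args) {
        // In a real client: Context ctx = new javax.naming.InitialContext(buildEnv());
        System.out.println(buildEnv().get(Context.PROVIDER_URL));
    }
}
```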

    More details below:

    About the PROVIDER_URL

    The job of the JNDI property java.naming.provider.url is, when the client looks up a new context for the very first time in the client session, to provide a list of RMI contexts.

    The value of the JNDI property java.naming.provider.url is either a single URL or a comma-separated list of URLs.
    • A single URL. For example: opmn:ormi://host1:6003:oc4j_instance1/appName1
    • A comma-separated list of multiple URLs. For example: opmn:ormi://host1:6003:oc4j_instance1/appName, opmn:ormi://host2:6003:oc4j_instance1/appName, opmn:ormi://host3:6003:oc4j_instance1/appName

    When the client looks up a new Context for the very first time in the client session, it sends a query to the OPMN server referenced by the provider URL. The OPMN host and port specify the destination of the query, and the OC4J instance name and appName are effectively the “where clause” of the query.

    When the PROVIDER URL references a single OPMN server

    Let's consider the case where the provider URL references only a single OPMN server of the destination cluster. In this case, that single OPMN server receives the query and returns a list of the qualified Contexts from all OC4J instances within the cluster, even though there is only one OPMN server in the provider URL. A Context represents a particular starting point at a particular server for subsequent object lookups.

    For example, if the URL is opmn:ormi://host1:6003:oc4j_instance1/appName, then, OPMN will return the following contexts:

    • appName on oc4j_instance1 on host1
    • appName on oc4j_instance1 on host2
    • appName on oc4j_instance1 on host3
    (provided that host1, host2 and host3 are all in the same cluster)

    Please note that one OPMN server is sufficient to find the list of all contexts from the entire cluster that satisfy the JNDI lookup query. You can verify this by shutting down appName on host1 and observing that the OPMN on host1 will still return appName on host2 and appName on host3.

    When the PROVIDER URL references a comma-separated list of multiple OPMN servers


    When the JNDI property java.naming.provider.url references a comma-separated list of multiple URLs, the lookup will return exactly the same thing as with a single OPMN server: a list of qualified Contexts from the cluster.

    The purpose of having multiple OPMN servers is to provide high availability for the initial context creation: if the OPMN at host1 is unavailable, the client will try the lookup via the OPMN on host2, and so on. After the initial lookup returns and caches a list of contexts, the JNDI URL(s) are no longer used in the same client session. That explains why removing the third URL from the list of JNDI URLs does not stop the client from reaching the EJB on the third server.


    About the oracle.j2ee.rmi.loadBalance Property

    After the client acquires the list of contexts, it caches it on the client side as the “list of available RMI contexts”. This list includes all the servers in the destination cluster and stays in the cache until the client session (JVM) ends. RMI load balancing against the destination cluster happens on the client side, as the client switches between the members of the list.

    Whether and how often the client will refresh the Context from the list of contexts depends on the value of the oracle.j2ee.rmi.loadBalance property. The documentation at http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI lists all the available values for oracle.j2ee.rmi.loadBalance:

    Value: client
    If specified, the client interacts with the OC4J process that was initially chosen at the first lookup for the entire conversation.

    Value: context
    Used for a Web client (servlet or JSP) that will access EJBs in a clustered OC4J environment. If specified, a new Context object for a randomly-selected OC4J instance will be returned each time InitialContext() is invoked.

    Value: lookup
    Used for a standalone client that will access EJBs in a clustered OC4J environment. If specified, a new Context object for a randomly-selected OC4J instance will be created each time the client calls Context.lookup().


    Please note that, regardless of the setting of the oracle.j2ee.rmi.loadBalance property, the “refresh” only occurs at the client. The client can only choose from the list of available contexts that was returned and cached at the very first lookup. That is, the client merely gets a new Context object from the cached “list of available RMI contexts” on the client side; it will NOT go to the OPMN server again to get the list. That also implies that if you add a node to the server cluster AFTER the client’s initial lookup, the client will not know about it, because neither the server nor the client initiates a refresh of the “list of available servers” to reflect the new node.

    About High Availability (i.e. Resilience Against Node Failure of Remote OC4J Cluster)

    What we have discussed above is about load balancing. Let's also discuss high availability.

    This is how high availability works in RMI: when the client uses a Context but gets an exception, such as a closed socket, it knows that the server referenced by that Context is problematic and will try to get another unused Context from the “list of available contexts”. Again, this is the list that was returned and cached at the very first lookup in the client session.
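The retry behavior described above can be modeled in a few lines of plain Java. This is a toy model of the client-side logic, not the OC4J API: the list of contexts is fetched once, cached, and on a connection failure the client simply moves on to the next unused entry without ever re-querying OPMN.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative model of client-side failover over a cached context list.
public class ContextFailover {

    interface Ctx { String call() throws Exception; }

    public static String callWithFailover(List<Ctx> cachedContexts) throws Exception {
        Exception last = null;
        for (Ctx ctx : cachedContexts) {   // iterate the cached list only;
            try {                          // the client never re-queries OPMN
                return ctx.call();
            } catch (Exception e) {
                last = e;                  // node down: try the next context
            }
        }
        throw last;                        // every cached context failed
    }

    public static void main(String[] args) throws Exception {
        Ctx down = () -> { throw new java.net.SocketException("socket is closed"); };
        Ctx up   = () -> "response from host2";
        // The first context fails, so the client transparently uses the second.
        System.out.println(callWithFailover(Arrays.asList(down, up)));
    }
}
```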

    GeoToolkit Demo Embedded in an Application Framework via Maven


    As a follow on to yesterday's blog entry, here's the equivalent starter application for GeoToolkit (also known as Geotk) on the NetBeans Platform, which ends up looking like this:

    The above is a border.shp file I found on-line, while here's a USA states shape file rendered in the application:

    Note that the navigation bar is also included, though that could later be migrated into the menu bar of the NetBeans Platform. 

    Download the Maven based NetBeans Platform application with GeoToolkit integration here:

    http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/tutorials/geospatial/geotoolkit/MyGeospatialSystem

    It was quite tricky getting this sample together. Parts of it, especially the installer that creates the database, come from the Puzzle GIS project, the shape files come from on-line locations, and the JAI-related dependencies provided problems of their own. I was able to solve the vendorName==null problem by downloading the jai_imageio.jar JAR found here and replacing the equivalent JAR in my POM with it. But it's definitely a starting point and you now have the basic Maven structure needed for getting started with GeoToolkit in the context of all the services and components provided by the NetBeans Platform. 

    Many thanks to Johann Sorel for his patience and help. 

    Web Service Example - Part 3: Asynchronous


    In this edition of the ADF Mobile blog we'll tackle part 3 of our Web Service examples.  In this posting we'll take a look at firing the web service asynchronously and then filling in the UI when it completes.  This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data.


    Getting the sample code:

    Just click here to download a zip of the entire project.  You can unzip it and load it into JDeveloper and deploy it either to iOS or Android.  Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed.  Note: This is a different workspace than WS-Part2


    What's different?

    In this example, when you click the Search button on the Forecast By Zip option, now it takes you directly to the results page, which is initially blank.  When the web service returns a second or two later the data pops into the UI.  If you go back to the search page and hit Search it will again clear the results and invoke the web service asynchronously.  This isn't really that useful for this particular example but it shows an important technique that can be used for other use cases.


    How it was done

    1)  First we created a new class, ForecastWorker, that implements the Runnable interface.  This is used as our worker class that we create an instance of and pass to a new thread that we create when the Search button is pressed inside the retrieveForecast actionListener handler.  Once the thread is started, the retrieveForecast returns immediately. 

    2)  The rest of the code that we had previously in the retrieveForecast method has now been moved to retrieveForecastAsync.  Note that we've also added the synchronized specifier to both of these methods so they are protected from re-entrancy.

    3)  The run method of the ForecastWorker class then calls the retrieveForecastAsync method.  This executes the web service code that we had previously, but now on a separate thread so the UI is not locked.  If we had already shown data on the screen it would have appeared before this was invoked.  Note that you do not see a loading indicator either because this is on a separate thread and nothing is blocked.

    4)  The last but very important aspect of this method is that once we update data in the collections from the data we retrieve from the web service, we call AdfmfJavaUtilities.flushDataChangeEvents().   We need this because as data is updated in the background thread, those data change events are not propagated to the main thread until you explicitly flush them.  As soon as you do this, the UI will get updated if any changes have been queued.
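The four steps above can be sketched in plain Java. This is an illustrative model of the pattern, not the actual ForecastWorker source: the class and method names follow the description, the web service call is simulated, and the final "flush" step stands in for AdfmfJavaUtilities.flushDataChangeEvents(), which only exists in the ADF Mobile runtime.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;

// Plain-Java sketch of the background-worker pattern described above.
public class AsyncForecast {

    private final List<String> forecasts = new ArrayList<String>();
    private final CountDownLatch done = new CountDownLatch(1);

    // Called from the UI actionListener: starts the worker thread and returns at once.
    public synchronized void retrieveForecast(final String zip) {
        new Thread(new Runnable() {
            public void run() { retrieveForecastAsync(zip); }
        }).start();
    }

    // Runs on the worker thread: simulates the web service call and data update.
    private synchronized void retrieveForecastAsync(String zip) {
        forecasts.add("Forecast for " + zip + ": sunny");
        // In ADF Mobile, AdfmfJavaUtilities.flushDataChangeEvents() would go here
        // to propagate the queued data changes back to the UI thread.
        done.countDown();
    }

    public List<String> awaitForecasts() throws InterruptedException {
        done.await();
        return forecasts;
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncForecast app = new AsyncForecast();
        app.retrieveForecast("94065");   // returns immediately; UI stays responsive
        System.out.println(app.awaitForecasts().get(0));
    }
}
```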


      Summary of Fundamental Changes In This Application

      The most fundamental change is that we are invoking and handling our web services in a background thread and updating the UI when the data returns.  This allows an application to provide a better user experience in many cases because data that is already available locally is displayed while lengthy queries or web service calls can be done in the background and the UI updated when they return.  There are many different use cases for background threads and this is just one example of optimizing the user experience and generating a better mobile application. 


      Hands-On Seminar Announcement (December 19)


      [Experience Oracle Application Testing Suite hands-on]
      Use testing tools to run functional and load tests of your web systems efficiently and effectively!

      You will operate Oracle Functional Testing and Oracle Load Testing yourself and experience first-hand how efficiently testing can be performed.

      Date: Wednesday, December 19, 2012, 13:30-18:00
      Venue: Oracle Corporation Japan headquarters

      Details:
      http://www.oracle.com/goto/jpm121219_2


      JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue


      Welcome to another post in the series of blogs which demonstrates how to use JMS queues in a SOA context. The previous posts were:

      1. JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
      2. JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
      3. JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
      4. JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue

      Today we will create a BPEL process which will read (dequeue) the message from the JMS queue, which we enqueued in the last example. The JMS adapter will dequeue the full XML payload from the queue.

      1. Recap and Prerequisites

      In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we designed and deployed a BPEL composite, which took a simple XML payload and enqueued it to the JMS queue. In this example, we will read that same message from the queue, using a JMS adapter and a BPEL process. As many of the configuration steps required to read from that queue were done in the previous samples, this one will concentrate on the new steps. A summary of the required objects is listed below. To find out how to create them please see the previous samples. They also include instructions on how to verify the objects are set up correctly.

      WebLogic Server Objects

      Object Name             Type                 JNDI Name
      TestConnectionFactory   Connection Factory   jms/TestConnectionFactory
      TestJMSQueue            JMS Queue            jms/TestJMSQueue
      eis/wls/TestQueue       Connection Pool      eis/wls/TestQueue

      Schema XSD File

      The following XSD file is used for the message format. It was created in the previous example and will be copied to the new process.

      stringPayload.xsd

      <?xml version="1.0" encoding="windows-1252" ?>
      <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                  xmlns="http://www.example.org"
                  targetNamespace="http://www.example.org"
                  elementFormDefault="qualified">
        <xsd:element name="exampleElement" type="xsd:string">
        </xsd:element>
      </xsd:schema>

      JMS Message

      After executing the previous samples, the following XML message should be in the JMS queue located at jms/TestJMSQueue:

      <?xml version="1.0" encoding="UTF-8" ?><exampleElement xmlns="http://www.example.org">Test Message</exampleElement>

      JDeveloper Connection

      You will need a valid Application Server Connection in JDeveloper pointing to the SOA server which the process will be deployed to.

      2. Create a BPEL Composite with a JMS Adapter Partner Link

      In the previous example, we created a composite in JDeveloper called JmsAdapterWriteSchema. In this one, we will create a new composite called JmsAdapterReadSchema.

      There are probably many ways of incorporating a JMS adapter into a SOA composite for incoming messages. One way is to design the process in such a way that the adapter polls for new messages and, when it dequeues one, initiates a SOA or BPEL instance. This is possibly the most common use case. Other use cases include mid-flow adapters, which are activated from within the BPEL process. In this example we will use a polling adapter, because it is the simplest to set up and demonstrate. But it has one disadvantage as a demonstrative model: when a polling adapter is active, it dequeues all messages as soon as they reach the queue. This makes it difficult to monitor the messages we are writing to the queue, because they disappear from the queue as soon as they have been enqueued. To work around this, we will shut down the composite after deploying it and restart it as required. (Another solution would be to pause consumption for the queue and resume it when needed. This can be done in the WLS console under JMS Modules -> queue -> Control -> Consumption -> Pause/Resume.)

      We will model the composite as a one-way incoming process. Usually, a BPEL process will do something useful with the message after receiving it, such as passing it to a database or file adapter, a human workflow or external web service. But we only want to demonstrate how to dequeue a JMS message using BPEL and a JMS adapter, so we won’t complicate the design with further activities. However, we do want to be able to verify that we have read the message correctly, so the BPEL process will include a small piece of embedded java code, which will print the message to standard output, so we can view it in the SOA server’s log file. Alternatively, you can view the instance in the Enterprise Manager and verify the message.

      The following steps are all executed in JDeveloper. Create the project in the same JDeveloper application used for the previous examples or create a new one.

      Create a SOA Project

      Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterReadSchema. When prompted for the composite type, choose Empty Composite.

      Create a JMS Adapter Partner Link

      In the composite editor, drag a JMS adapter over from the Component Palette to the left-hand swim lane, under Exposed Services.

      This will start the JMS Adapter Configuration Wizard. Use the following entries:

      Service Name: JmsAdapterRead

      Oracle Enterprise Messaging Service (OEMS): Oracle WebLogic JMS

      AppServer Connection: Use an application server connection pointing to the WebLogic server on which the JMS queue and connection factory mentioned under Prerequisites above are located.

      Adapter Interface > Interface: Define from operation and schema (specified later)

      Operation Type: Consume Message

      Operation Name: Consume_message

      Consume Operation Parameters

      Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue, which is the queue created in a previous example.

      JNDI Name: The JNDI name to use for the JMS connection. As in the previous example, this is probably the most common source of error. This is the JNDI name of the JMS adapter’s connection pool created in the WebLogic Server and which points to the connection factory. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue. (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.)


      Messages / Message Schema URL:

      We will use the XSD file created during the previous example, in the JmsAdapterWriteSchema project, to define the format for the incoming message payload and, at the same time, demonstrate how to import an existing XSD file into a JDeveloper project.

      Press the magnifying glass icon to search for schema files. In the Type Chooser, press the Import Schema File button.


      Select the magnifying glass next to URL to search for schema files. Navigate to the location of the JmsAdapterWriteSchema project > xsd and select the stringPayload.xsd file.

      Check the “Copy to Project” checkbox, press OK and confirm the following Localize Files popup.

      Now that the XSD file has been copied to the local project, it can be selected from the project’s schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string.

      Press Next and Finish, which will complete the JMS Adapter configuration.
      Save the project.

      Create a BPEL Component

      Drag a BPEL Process from the Component Palette (Service Components) to the Components section of the composite designer. Name it JmsAdapterReadSchema and select Template: Define Service Later and press OK.

      Wire the JMS Adapter to the BPEL Component

      Now wire the JMS adapter to the BPEL process, by dragging the arrow from the adapter to the BPEL process. A Transaction Properties popup will be displayed. Set the delivery mode to async.persist.

      This completes the steps at the composite level.

      3. Complete the BPEL Process Design

      Invoke the BPEL Flow via the JMS Adapter

      Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterReadSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterRead partner link in the left-hand swim lane.

      Drag a Receive activity onto the BPEL flow diagram, then drag a wire (left-hand yellow arrow) from it to the JMS adapter. This will open the Receive activity editor. Auto-generate the variable by pressing the green “+” button and check the “Create Instance” checkbox. This will result in a BPEL instance being created when a new JMS message is received.

      At this point it would actually be OK to compile and deploy the composite and it would pick up any messages from the JMS queue. In fact, you can do that to test it, if you like. But it is very rudimentary and would not be doing anything useful with the message. Also, you could only verify the actual message payload by looking at the instance’s flow in the Enterprise Manager.

      There are various other possibilities; we could pass the message to another web service, write it to a file using a file adapter or to a database via a database adapter etc. But these will all introduce unnecessary complications to our sample. So, to keep it simple, we will add a small piece of Java code to the BPEL process which will write the payload to standard output. This will be written to the server’s log file, which will be easy to monitor.

      Add a Java Embedding Activity

      First, get the full name of the process’s input variable, as this will be needed for the Java code. Go to the Structure pane and expand Variables > Process > Variables. Then expand the input variable, for example “Receive1_Consume_Message_InputVariable > body > ns2:exampleElement”, and note the variable’s name and path, if they are different from this one.

      Drag a Java Embedding activity from the Component Palette (Oracle Extensions) to the BPEL flow, after the Receive activity, then open it to edit.


      Delete the example code and replace it with the following, adjusting the variable parts to match your sample, if necessary:

      System.out.println("JmsAdapterReadSchema process picked up a message");
      oracle.xml.parser.v2.XMLElement inputPayload =
          (oracle.xml.parser.v2.XMLElement)getVariableData(
                                  "Receive1_Consume_Message_InputVariable",
                                  "body",
                                  "/ns2:exampleElement");
      String inputString = inputPayload.getFirstChild().getNodeValue();
      System.out.println("Input String is " + inputString);

      Tip. If you are not sure of the exact syntax of the input variable, create an Assign activity in the BPEL process and copy the variable to another, temporary one. Then check the syntax created by the BPEL designer.

      This completes the BPEL process design in JDeveloper. Save, compile and deploy the process to the SOA server.

      4. Test the Composite

      Shut Down the JmsAdapterReadSchema Composite

      After deploying the JmsAdapterReadSchema composite to the SOA server, it is automatically activated. If there are already any messages in the queue, the adapter will begin polling them. To ease the testing process, we will deactivate the composite first.

      Log in to the Enterprise Manager (Fusion Middleware Control) and navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterReadSchema [1.0]. Press the Shut Down button to disable the composite and confirm the following popup.

      Monitor Messages in the JMS Queue

      In a separate browser window, log in to the WebLogic Server Console and navigate to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. This is the location of the JMS queue we created in an earlier sample (see the prerequisites section of this sample). Check whether there are any messages already in the queue. If so, you can dequeue them using the QueueReceive Java program created in an earlier sample. This will ensure that the queue is empty and doesn’t contain any messages in the wrong format, which would cause the JmsAdapterReadSchema to fail.

      Send a Test Message

      In the Enterprise Manager, navigate to the JmsAdapterWriteSchema composite created earlier, press Test and send a test message, for example “Message from JmsAdapterWriteSchema”.

      Confirm that the message was written correctly to the queue by verifying it via the queue monitor in the WLS Console.

      Monitor the SOA Server’s Output

      A program deployed on the SOA server will write its standard output to the terminal window in which the server was started, unless this has been redirected to somewhere else, for example to a file. If it has not been redirected, go to the terminal session in which the server was started, otherwise open and monitor the file to which it was redirected.

      Re-Enable the JmsAdapterReadSchema Composite

      In the Enterprise Manager, navigate to the JmsAdapterReadSchema composite again and press Start Up to re-enable it. This should cause the JMS adapter to dequeue the test message and the following output should be written to the server’s standard output:

      JmsAdapterReadSchema process picked up a message

      Input String is Message from JmsAdapterWriteSchema

      Note that you can also monitor the payload received by the process by navigating to the JmsAdapterReadSchema’s Instances tab in the Enterprise Manager. Then select the latest instance and view the flow of the BPEL component. The Receive activity will contain and display the dequeued message too.

      5. Troubleshooting

      This sample demonstrates how to dequeue an XML JMS message using a BPEL process and no additional functionality. For example, it doesn’t contain any error handling. Therefore, any errors in the payload will result in exceptions being written to the log file or standard output. If you get any errors related to the payload, such as

      Message handle error
      ...
      ORABPEL-09500
      ...
      XPath expression failed to execute.
      An error occurs while processing the XPath expression; the expression is 
           /ns2:exampleElement.
      ...
      etc.

      check that the variable used in the Java embedding part of the process was entered correctly, possibly following the tip mentioned in the previous section. If this doesn’t help, you can delete the Java embedding part and simply verify the message via the flow diagram in the Enterprise Manager, or use a different method, such as writing it to a file via a file adapter.

      This concludes this example. In the next post, we will begin with an AQ JMS example, which uses JMS to write to an Advanced Queue stored in the database.

      Best regards
      John-Brown Evans
      Oracle Technology Proactive Support Delivery

      Oracle Developer Day: The Oracle Database in Practice


      Oracle Developer Days will take place again in several cities in the new year! In this event, put together specifically by the database colleagues, you will learn many tips and tricks from the field and be brought up to date on the following topics:

      • The differences between the editions and their secrets
      • Extensive base functionality, even without options
      • Performance and scalability in the individual editions
      • Cost and resource savings made easy
      • Security in the database
      • Increasing availability with simple means
      • Handling large data volumes
      • Cloud technologies in the Oracle Database

      An outlook on the features of the new database version planned for 2013 rounds off the workshop.
      Dates, agenda, venues and registration can be found here. Register for the event today - attendance is free of charge!

      Free Oracle Special Edition eBooks - Cloud Architecture & Enterprise Cloud

      Cloud computing can improve your business agility, lower operating costs, and speed innovation. The key to making it work is the architecture.
      Learn how to define your architectural requirements and get started on your path to cloud computing with the free Oracle Special Edition e-book, Cloud Architecture for Dummies.

       

      Topics covered in this quick reference guide include:

      • Cloud architecture principles and guidelines
      • Scoping your project and choosing your deployment model
      • Moving toward implementation with vertically integrated engineered systems

      Learn how to architect and model your cloud implementation to drive efficiency and leverage economies of scale.

      For more information, visit oracle.com/cloud and our cloud services at cloud.oracle.com



      Specifically, Infrastructure as a Service (IaaS) is critical to the success of many enterprises. Want to build a private cloud infrastructure and cut down IT costs? Learn more about Oracle's highly integrated infrastructure software and hardware to help you architect and deploy a cloud infrastructure that is optimized for the needs of your enterprise from day one. Download the free e-book Enterprise Cloud Infrastructure for Dummies to:

      • Realize the benefits of consolidation with the added cloud capabilities
      • Simplify deployments and reduce risks with tested and proven guidelines
      • Achieve up to 50% lower TCO than comparable multi-vendor alternatives

      Choosing the right infrastructure technologies is essential to capitalizing on the benefits of cloud computing. Oracle Optimized Solution for Enterprise Cloud Infrastructure helps identify the right hardware and software stack and provides configuration guidelines for your cloud.

      With this book, you come to understand Enterprise Cloud Infrastructure and find out how to jumpstart your IaaS cloud plans. You also discover Oracle Optimized Solutions and learn how integration testing and proven best practices maximize your IT investments. In addition, you see how to architect and deploy your IaaS cloud to drive down costs and improve performance, how to understand and select the right private cloud strategy for you, what key cloud infrastructure elements are and how to use them to achieve your business goals, and more.

      For more information, visit oracle.com/oos.

      The Simplicity of the Oracle Stack


      For many retailers, technology is something they know they need to optimise business operations, but do they really understand it and how can they select the solutions they need from the many vendors on the market?

      Retail is a data heavy industry, with the average retailer managing thousands of SKUs and hundreds of categories through multiple channels. Add to this the exponential growth in data driven by social media and mobile activities, and the process can seem overwhelming. Handling data of this magnitude and analyzing it effectively to gain actionable insight is a huge task, and needs several IT components to work together harmoniously to make the best use of the data available and make smarter decisions.

      With this in mind, Oracle has produced a video to make it easier for businesses to understand its global data IT solutions and how they integrate seamlessly with Oracle’s other solutions to enable organisations to operate as effectively as possible. The video uses an orchestra as an analogy for IT solutions and clever illustration to demonstrate the value of the Oracle brand.

      Watch the video now

      To find out more about how Oracle’s products and services can help retailers to deliver better results, visit the Oracle Retail website.

      reminder - Data Relationship Management with EPM-Webcast today


      For all those who have not yet brushed up their shoes for the DRM webcast later today:

      About NOW is the time to register for it.

      You can enjoy this pre-season session with Matt Lontchar free of cost by simply attending the webcast.

      We look forward to welcoming you in the session.


