
Oracle Linux Friday Spotlight - January 3, 2014


Happy Friday and happy new year! Our spotlight this week is on an excellent webcast from our archives titled "Oracle Linux Management Demystified." It describes the integration between Oracle Linux and Oracle Enterprise Manager 12c, allowing you to do provisioning, patching, monitoring, and administration all from a unified console. This is an on-demand webcast, so it will play as soon as you enter your details. Enjoy and we'll see you next week!

View the webcast

-Chris 


Getting Started with EL 3


EL 3 is one of the APIs that have gone through a major overhaul in Java EE 7. In fact, EL is now finally a specification in its own right after long being an important API for JSTL, JSP, JSF and CDI. Most folks in the ecosystem are just beginning to realize the full significance of this. EL 3 opens up the possibility of using the power of a standard expression language in new and innovative ways in frameworks and applications, much like the way Bean Validation 1.1 now utilizes EL. Just some of the changes in EL 3 include a stand-alone API, powerful new operators, static field and method references, lambda support (essentially ahead of Java SE 8) and much, much more. Servlet 3.1 specification lead Shing Wai Chan handily demonstrates how some of the EL 3 features fit together in an excellent blog post. The code example calculates a standard deviation three different ways using various EL 3 features inside a Servlet. The blog post is really a great place to get started with learning EL 3. You can also check out Ed Burns and Kin-man Chung's JavaOne 2013 session on EL 3 via Parleys.

The slide deck for the session is available on the JavaOne content builder. It may be particularly interesting to relate the content of the slide deck back to Shing Wai Chan's blog entry.
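To give a flavor of the stand-alone API and lambda support mentioned above, here is a minimal sketch using the EL 3 ELProcessor class outside a container. The class and expression syntax are standard EL 3.0, but this particular example is ours rather than taken from the session or blog post:

import javax.el.ELProcessor;

public class El3Sketch {
    public static void main(String[] args) {
        ELProcessor elp = new ELProcessor();

        // Lambda support, ahead of Java SE 8: define and immediately invoke
        System.out.println(elp.eval("((x, y) -> x + y)(3, 4)"));     // 7

        // Collection construction and stream operations, new in EL 3
        System.out.println(elp.eval("[1, 2, 3, 4].stream().sum()")); // 10

        // Static method references on implicitly imported java.lang classes
        System.out.println(elp.eval("Math.sqrt(16)"));               // 4.0
    }
}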


UseLargePages on Linux


There is a JVM option, UseLargePages (introduced in JDK 5.0u5), that can be used to request large memory pages from the system when large page memory is supported by the system. The goal of large page support is to optimize the processor's Translation-Lookaside Buffer (TLB) and hence increase performance.

Recently we saw a few instances of HotSpot crashes with JDK 7 on the Linux platform when using large memory pages.

8013057: assert(_needs_gc || SafepointSynchronize::is_at_safepoint()) failed: only read at safepoint
https://bugs.openjdk.java.net/browse/JDK-8013057

8007074: SIGSEGV at ParMarkBitMap::verify_clear()
https://bugs.openjdk.java.net/browse/JDK-8007074

Cause: The cause of these crashes is the way mmap works on the Linux platform. If large page support is enabled on the system, the commit_memory() implementation of HotSpot on Linux tries to commit the previously reserved memory with an 'mmap' call using large pages. If there are not enough large pages available, the mmap call fails, releasing the reserved memory and allowing the same memory region to be used for other allocations. This causes the same memory region to be used for different purposes and leads to unexpected behavior.

Symptoms: With the above-mentioned issue, we may see crashes with stack traces like the following:
 V  [libjvm.so+0x759a1a]  ParMarkBitMap::mark_obj(HeapWord*, unsigned long)+0x7a
 V  [libjvm.so+0x7a116e]  PSParallelCompact::MarkAndPushClosure::do_oop(oopDesc**)+0xce
 V  [libjvm.so+0x485197]  frame::oops_interpreted_do(OopClosure*, RegisterMap const*, bool)+0xe7
 V  [libjvm.so+0x863a4a]  JavaThread::oops_do(OopClosure*, CodeBlobClosure*)+0x15a
 V  [libjvm.so+0x77c97e]  ThreadRootsMarkingTask::do_it(GCTaskManager*, unsigned int)+0xae
 V  [libjvm.so+0x4b7ec0]  GCTaskThread::run()+0x130
 V  [libjvm.so+0x748f90]  java_start(Thread*)+0x100

Here the crash happens while writing to address 0x00007f2cf656eef0 in the mapped region of ParMarkBitMap. And that memory belongs to rt.jar (from the hs_err log file):
7f2cf6419000-7f2cf65d7000 r--s 039dd000 00:31 106601532                  /jdk/jdk1.7.0_21/jre/lib/rt.jar

Due to this bug, the same memory region got mapped for two different allocations and caused this crash.

Fixes:

8013057 strengthened the error handling of mmap failures on Linux platform and also added some diagnostic information for these failures. It is fixed in 7u40.

8007074 fixes the reserved memory mapping loss issue when using the large pages on the Linux platform. Details on this fix: http://mail.openjdk.java.net/pipermail/hotspot-dev/2013-July/010117.html. It is fixed in JDK 8 and will also be included into 7u60, scheduled to be released in May 2014.

Workarounds:

1. Disable the use of large pages with JVM option -XX:-UseLargePages.

2. Increase the number of large pages available on the system. By having a sufficient number of large pages on the system, we can reduce the risk of memory commit failures and thus reduce the chances of hitting the large pages issue. Please see the details on how to configure the number of large pages here:
http://www.oracle.com/technetwork/java/javase/tech/largememory-jsp-137182.html
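As a hedged illustration of that configuration (the page count of 1024 is purely an example; size it to your heap), checking and reserving large pages on Linux looks roughly like this:

# Check current large page availability and the page size
$ grep Huge /proc/meminfo
HugePages_Total:       0
HugePages_Free:        0
Hugepagesize:       2048 kB

# Reserve, for example, 1024 x 2 MB pages (2 GB)
$ sudo sysctl -w vm.nr_hugepages=1024

# Run the JVM with large pages enabled and a heap that fits in them
$ java -XX:+UseLargePages -Xms2g -Xmx2g MyApp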

Other related fixes:

8026887: Make issues due to failed large pages allocations easier to debug
https://bugs.openjdk.java.net/browse/JDK-8026887

With the fix of 8013057, diagnostic information for memory commit failures was added. It printed error messages like this:
os::commit_memory(0x00000006b1600000, 352321536, 2097152, 0) failed;
error='Cannot allocate memory' (errno=12)

With the fix of 8026887, the error message has been modified to indicate that the memory commit failed due to the lack of large pages, and it now looks like the following:
os::commit_memory(0x00000006b1600000, 352321536, 2097152, 0) failed;
error='Cannot allocate large pages, falling back to small pages' (errno=12)

This change has been integrated into 7u51.

The fix for 8007074 will be available in 7u60 and could not be included in 7u51, so this change (JDK-8026887) makes the error messages printed for the large-pages-related commit memory failures more informative. If we see these messages in the JVM logs, it indicates that we are at risk of hitting the unexpected behavior caused by bug 8007074.

8024838: Significant slowdown due to transparent huge pages
https://bugs.openjdk.java.net/browse/JDK-8024838

With the fix of 8007074 in place, a significant performance degradation was detected. This regression has been fixed with JDK-8024838 in JDK 8 and will also be included in JDK 7u60.

Invoking R scripts via Oracle Database: Theme and Variation, Part 2


In Part 1 of Invoking R scripts via Oracle Database: Theme and Variation, we introduced features of Oracle R Enterprise embedded R execution, focusing on the functions ore.doEval and rqEval. In this blog post, we’ll cover the next in our theme and variation series, involving ore.tableApply and rqTableEval.

The variation on embedded R execution for ore.tableApply involves passing an ore.frame to the function such that the first parameter of your embedded R function receives a data.frame. Its SQL counterpart, rqTableEval, allows users to specify a data cursor to be delivered to your embedded R function as a data.frame.

Let’s look at a few examples.


R API

In the following example, we’re using ore.tableApply to build a Naïve Bayes model on the iris data set. Naïve Bayes is found in the e1071 package, which must be installed on both the client and database server machine R engines.

library(e1071)
mod <- ore.tableApply(
ore.push(iris),
function(dat) {
library(e1071)
dat$Species <- as.factor(dat$Species)
naiveBayes(Species ~ ., dat)
})
class(mod)
mod

A few points to highlight:
• To use the CRAN package e1071 on the client, we first load the library.
• The iris data set is pushed to the database to create an ore.frame as the first argument to ore.tableApply. This would normally refer to an ore.frame that refers to a table that exists in Oracle Database. If not obvious, note that we could have previously assigned dat <- ore.push(iris) and passed dat as the argument as well.
• The embedded R function is supplied as the second argument to ore.tableApply as a function object. Recall from Part 1 that we could have alternatively assigned this function to a variable and passed the variable as argument, or stored the function in the R script repository and passed the argument FUN.NAME with the assigned function name.
• The user-defined embedded R function takes dat as its first argument which will contain a data.frame derived from the ore.frame supplied.
• The model itself is returned from the function.
• The result of the ore.tableApply execution will be an ore.object.
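As a small follow-on sketch (assuming the mod object from the example above), the returned ore.object can be pulled to the client as an ordinary naiveBayes model and scored locally:

# Pull the serialized model from the database into the client R session
nb <- ore.pull(mod)
class(nb)                             # "naiveBayes"

# Score locally using the e1071 predict method
predict(nb, newdata = iris[1:5, 1:4])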

SQL API

We can invoke the function through the SQL API by storing the function in the R script repository. Recall that the call to sys.rqScriptCreate must be wrapped in a BEGIN-END PL/SQL block.

begin
sys.rqScriptCreate('myNaiveBayesModel',
'function(dat) {
library(e1071)
dat$Species <- as.factor(dat$Species)
naiveBayes(Species ~ ., dat)
}');
end;
/

Invoking the function myNaiveBayesModel occurs in a SQL SELECT statement as shown below. The first argument to rqTableEval specifies a cursor that retrieves the IRIS table. Note that the IRIS table could have been created earlier using ore.create(iris,"IRIS"). The second argument, NULL, indicates that no arguments are supplied to the function.

The function returns an R object of type naiveBayes, but as a serialized object that is chunked into a table. This likely is not useful to most users.

select *
from table(rqTableEval(cursor(select * from IRIS), NULL, NULL, 'myNaiveBayesModel'));

If we want to keep the model in a more usable form, we can store it in an ORE datastore in Oracle Database. For this, we require a change to the user-defined R function and the SQL invocation.

begin
sys.rqScriptCreate('myNaiveBayesModel',
'function(dat) {
library(e1071)
dat$Species <- as.factor(dat$Species)
mod <- naiveBayes(Species ~ ., dat)
ore.save(mod, name="myNaiveBayesDatastore")
TRUE

}');
end;
/
select *
from table(rqTableEval(cursor(select * from IRIS), NULL, 'XML', 'myNaiveBayesModel'));

In the function above, we’ve stored the model in the datastore named ‘myNaiveBayesDatastore’. We’ve also returned TRUE to have a simple value that can show up as the result of the function execution. In the SQL query, we changed the third parameter to ‘XML’ to return an XML string containing “TRUE”. The name of the datastore could be passed as an argument as follows:

begin
sys.rqScriptCreate('myNaiveBayesModel',
'function(dat, datastoreName) {
library(e1071)
dat$Species <- as.factor(dat$Species)
mod <- naiveBayes(Species ~ ., dat)
ore.save(mod, name=datastoreName)
TRUE
}');
end;
/
select *
from table(rqTableEval(
cursor(select * from IRIS),
cursor(select 'myNaiveBayesDatastore' datastoreName from dual),
'XML',
'myNaiveBayesModel'));
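To retrieve the stored model later in a client R session, a hedged sketch (assuming a session connected via ore.connect) would reload the datastore by name:

# Restore the saved objects from the datastore; ore.load returns the
# names of the objects it loaded into the current environment
ore.load(name = "myNaiveBayesDatastore")
mod                                   # the naiveBayes model saved above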

Memory considerations with ore.tableApply and rqTableEval

The input data provided as the first argument to a user-defined R function invoked using ore.tableApply or rqTableEval is physically moved from Oracle Database to the database server R engine. It’s important to realize that R’s memory limitations still apply. If your database server machine has 32 GB RAM and your data table is 64 GB, ORE will not be able to load the data into the R function’s memory.
You may see errors like:

Error : vector memory exhausted (limit reached)

or
ORA-28579: network error during callback from external procedure agent

See the blog post Managing Memory Limits and Configuring Exadata for Embedded R Execution, where we discuss setting memory limits for the database server R engine. This can be necessary to load reasonably sized data tables.

Parallelism

As with ore.doEval / rqEval, user-defined R functions invoked using ore.tableApply / rqTableEval are not executed in parallel, i.e., a single R engine is used to execute the user-defined R function.

Invoking certain ORE advanced analytics functions

In the current ORE release, some advanced analytics functions, like ore.lm or ore.glm, which use the embedded R execution framework, cannot be used within other embedded R calls such as ore.doEval / rqEval and ore.tableApply / rqTableEval.

You can expect to see an error like the following:

ORA-28580: recursive external procedures are not supported

In the next post in this series, I’ll discuss ore.groupApply and the corresponding definitions required for SQL execution, since there is no rqGroupApply function. I’ll also cover the relationship of the various “group apply” constructs to the map-reduce paradigm.


Phaser and NetBeans IDE (Part 2)


More fun with Phaser.

And here's the code:

var game = new Phaser.Game(800, 270, Phaser.AUTO, '', {preload: preload, create: create, update: update});
function preload() {
    // Load the background image and the 8-frame 512x256 running-cat sprite sheet
    game.load.image('sky', 'assets/sky.png');
    game.load.spritesheet('cat', 'assets/runningcat.png', 512, 256, 8);
}
function create() {
    game.add.sprite(0, 0, 'sky');
    player = game.add.sprite(40, 0, 'cat');
    // Map the arrow keys for input
    cursors = game.input.keyboard.createCursorKeys();
    player.animations.add('run');
    player.animations.add('walk');
    // Walk by default: 5 frames per second, looping
    player.animations.play('walk', 5, true);
}
function update() {
    if (cursors.right.isDown) {
        // Speed up to a 70 fps run while the right arrow is held
        player.animations.play('run', 70, false);
    }
    else {
        player.animations.play('walk', 5, true);
    }
}

Two (silent) YouTube movies with NetBeans and Phaser:

http://www.youtube.com/watch?v=WXKh6UcoKhU

http://www.youtube.com/watch?v=3Ju-p1xaFn8

Happy New Year 2014!


Wishing everyone around the world all the best in the New Year!

Frederic Pariente
ISV Engineering
Oracle Corp


Repost from Washington Post: 5 Myths About the Cloud

Interesting article in the 5 JAN 2014 Washington Post, 5 Myths About the Cloud. It has some funny survey results from the tech-challenged public, and some good points about cloud security, reliability, and environmental impact. And 2014 looks to be a very "cloudy" year, with more companies and government agencies deploying mission-critical applications and data in both public and private clouds. And although many potential cloud computing customers still don't think of Oracle as a major player in this growing technology services market, they will if they do their homework.

If you want to play around with your own Hadoop-based cloud, be sure to check out these How-To articles on the Oracle Technology Network:

Hey! You! Get onto my cloud!

Invoking R scripts via Oracle Database: Theme and Variation, Part 3


In the first two parts of Invoking R scripts via Oracle Database: Theme and Variation, we introduced features of Oracle R Enterprise embedded R execution, focusing on the functions ore.doEval / rqEval and ore.tableApply / rqTableEval. In this blog post, we’ll cover the next in our theme and variation series, involving ore.groupApply and the corresponding definitions required for SQL execution. The “group apply” function is one of the parallel-enabled embedded R execution functions. It supports data-parallel execution, where one or more R engines perform the same R function, or task, on different partitions of data. This functionality is essential for building potentially tens or hundreds of thousands of predictive models, e.g., one per customer, and for taking advantage of high-performance computing hardware like Exadata.

Oracle Database handles the management and control of potentially multiple R engines at the database server machine, automatically partitioning and passing data to parallel executing R engines. It ensures that all R function executions for all partitions complete, or the ORE function returns an error. The result from the execution of each user-defined embedded R function is gathered in an ore.list. This list remains in the database until the user requires the result.

The variation on embedded R execution for ore.groupApply involves passing not only an ore.frame to the function such that the first parameter of your embedded R function receives a data.frame, but also an INDEX argument that specifies the name of a column by which the rows will be partitioned for processing by a user-defined R function.

Let’s look at an example. We’re going to use the C50 package to build a C5.0 decision tree model on the churn data set from C50. The goal is to build one churn model on the data for each state.


library(C50)
data(churn)

ore.create(churnTrain, "CHURN_TRAIN")

modList <- ore.groupApply(
  CHURN_TRAIN,
  INDEX=CHURN_TRAIN$state,
    function(dat) {
      library(C50)
      dat$state <- NULL
      dat$churn <- as.factor(dat$churn)
      dat$area_code <- as.factor(dat$area_code)
      dat$international_plan <- as.factor(dat$international_plan)
      dat$voice_mail_plan <- as.factor(dat$voice_mail_plan)
      C5.0(churn ~ ., data = dat, rules = TRUE)
    });
mod.MA <- ore.pull(modList$MA)
summary(mod.MA)

A few points to highlight:
• As noted in Part 2 of this series, to use the CRAN package C50 on the client, we first load the library, and then the churn data set.
• Since the data is a data.frame, we’ll create a table in the database with this data. Notice that if you compare the results of str(churnTrain) with str(CHURN_TRAIN), you will see that the factor columns have been retained. This becomes relevant later.
• The function ore.groupApply will return a list of models stored as ore.object instances. The first argument is the ore.frame CHURN_TRAIN and the second argument indicates to partition the data on column state such that the user-defined function is invoked on each partition of the data.
• The next argument specifies the function, which could alternatively have been the function name if the FUN.NAME argument were used and the function saved explicitly in the R script repository. The function’s first argument (whatever its name) will receive one partition of data, e.g., all data associated with a single state.
• Regarding the user-defined function body, we explicitly load the package we’re using, C50, so the function body has access to it. Recall that this function will execute at the database server in a separate R engine from the client.
• Since we don’t need to know which state we’re working with and we don’t want this included in the model, we delete the column from the data.frame.
• Although the ore.frame contains factor columns, when the data arrives in the user-defined embedded R function, factors appear as character vectors. As a result, we need to convert them back to factors explicitly.
• The model is built and returned from the function.
• The result from ore.groupApply is a list containing the results from the execution of the user-defined function on each partition of the data. In this case, it will be one C5.0 model per state.
• To view the model, we first use ore.pull to retrieve it from the database and then invoke summary on it. The class of mod.MA is “C5.0”.

SQL API

We can invoke the function through the SQL API by storing the function in the R script repository. Previously we showed doing this using the SQL API; however, we can also do it using the R API. Here we modify the function to store the resulting models in an ORE datastore by state name:


ore.scriptCreate("myC5.0Function",
  function(dat,datastorePrefix) {
    library(C50)
    datastoreName <- paste(datastorePrefix,dat[1,"state"],sep="_")
    dat$state <- NULL
    dat$churn <- as.factor(dat$churn)
    dat$area_code <- as.factor(dat$area_code)
    dat$international_plan <- as.factor(dat$international_plan)
    dat$voice_mail_plan <- as.factor(dat$voice_mail_plan)
    mod <- C5.0(churn ~ ., data = dat, rules = TRUE)
    ore.save(mod, datastoreName)
    TRUE
  })

Just for comparison, we could invoke this from the R API as follows:


res <- ore.groupApply( CHURN_TRAIN, INDEX=CHURN_TRAIN$state,
          FUN.NAME="myC5.0Function",
          datastorePrefix="myC5.0model", ore.connect=TRUE)
res
res <- ore.pull(res)
all(as.logical(res) == TRUE)

Since we’re using a datastore, we need to connect to the database, setting ore.connect to TRUE. We also pass the datastorePrefix. The result res is an ore.list of logicals. To test whether all are TRUE, we first pull the result and use the R all function.

Back to the SQL API: now that the function is in the R script repository, we can invoke it from SQL to place one model per datastore, each named with the given prefix and state.


select *
from table(churnGroupEval(
  cursor(select * from CHURN_TRAIN),
  cursor(select 1 as "ore.connect",' myC5.0model2' as "datastorePrefix" from dual),
  'XML', 'state', 'myC5.0Function'));

There’s one thing missing, however. We don’t have the function churnGroupEval. There is no generic “rqGroupEval” in the API – we need to define our own table function that matches the data provided. Due to this and the parallel nature of the implementation, we need to create a PL/SQL FUNCTION and supporting PACKAGE:


CREATE OR REPLACE PACKAGE churnPkg AS
  TYPE cur IS REF CURSOR RETURN CHURN_TRAIN%ROWTYPE;
END churnPkg;
/
CREATE OR REPLACE FUNCTION churnGroupEval(
  inp_cur churnPkg.cur,
  par_cur SYS_REFCURSOR,
  out_qry VARCHAR2,
  grp_col VARCHAR2,
  exp_txt CLOB)
RETURN SYS.AnyDataSet
PIPELINED PARALLEL_ENABLE (PARTITION inp_cur BY HASH ("state"))
CLUSTER inp_cur BY ("state")
USING rqGroupEvalImpl;
/

The pieces to change when creating this function for any particular data set are the cursor row type (CHURN_TRAIN%ROWTYPE) and the partition/cluster column ("state"). There are other variants, but this will get you quite far.

To validate that our datastores were created, we invoke ore.datastore(). This returns the datastores present and we will see 51 such entries – one for each state and the District of Columbia.
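As a quick sketch of using one of those datastores (assuming the naming scheme above, e.g. the prefix "myC5.0model" from the R invocation plus "_MA" for Massachusetts), the model can be reloaded and inspected on the client:

# Load the Massachusetts model from its per-state datastore
ore.load(name = "myC5.0model_MA")
summary(mod)     # mod is the C5.0 model saved by the embedded R function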

Parallelism

Above, we mentioned that “group apply” supports data parallelism. By default, parallelism is turned off. To enable it, the parallel parameter to ore.groupApply needs to be set to TRUE.


ore.groupApply( CHURN_TRAIN, INDEX=CHURN_TRAIN$state,
          FUN.NAME="myC5.0Function",
          datastorePrefix="myC5.0model",
          ore.connect=TRUE,
          parallel=TRUE
)

In the case of the SQL API, a parallel hint can be provided with the input cursor. The following indicates that a degree of parallelism of up to 4 should be enabled.


select *
from table(churnGroupEval(
  cursor(select /*+ parallel(t,4) */ * from CHURN_TRAIN t),
  cursor(select 1 as "ore.connect",' myC5.0model2' as "datastorePrefix" from dual),
  'XML', 'state', 'myC5.0Function'));
Map Reduce

The “group apply” functionality can be thought of in terms of the map-reduce paradigm where the mapper performs the partitioning by outputting the INDEX value as key and the data.frame as value. Then, each reducer receives the rows associated with one key. In our example above, INDEX was the column state and so each reducer would receive rows associated with a single state.
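As a purely local analogue (plain R, not ORE code), base R's split/lapply expresses the same pattern: split partitions the rows by key (the "map" step), and the function applied to each partition plays the reducer's role:

# Local R analogue of "group apply": partition by key, apply per partition
library(C50)
data(churn)
parts  <- split(churnTrain, churnTrain$state)    # "map": key = state
models <- lapply(parts, function(dat) {          # one task per key
  dat$state <- NULL
  C5.0(churn ~ ., data = dat, rules = TRUE)
})
summary(models$MA)                               # one C5.0 model per state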

Memory and performance considerations

While the data is partitioned by the INDEX column, it is still possible that a given partition is quite large, such that either the partition of data will not fit in the R engine memory or the user-defined embedded R function will not be able to execute to completion. The usual remedial measures can be taken regarding setting memory limits – as noted in Part 2.

If the partitions are not balanced, you would have to configure the system’s memory for the largest partition. This will also have implications for performance, obviously, since smaller partitions of data will likely complete faster than larger ones.

The blog post Managing Memory Limits and Configuring Exadata for Embedded R Execution discusses how to instrument your code to understand the memory usage of your R function. This is done in the context of ore.indexApply (to be discussed later in this blog series), but the approach is analogous for “group apply.”

"Windows Search" with SQL


This is the 2nd in a series of "How I use SQL daily on my Windows 7". The 1st one was about Firefox download history.

Today's SQL does sort of the equivalent of this GUI.



First, I downloaded the C# source and compiled the WSSQL.exe executable file:

Windows Search Sample Code - Home

Windows Search through Structured Query Language (SQL)


Next, I ran WSSQL.exe with a long SQL text given as the 1st argument.

Below is an example of the SQL and its results. I searched for files with the word "snmpSubscriber" in the full path or contents; the SQL returned 3 columns, where the 2nd column shows the indexed time and the 3rd shows how many times the word appears in the file content.

I found 5 PDF files, all Exadata docs.

$WSSQL.exe \>"SELECT System.ItemPathDisplay,System.Search.GatherTime,System.Search.HitCount \> FROM SystemIndex       \> where contains(*,'snmpSubscriber') And System.Kind !='email'   \> order by System.Search.HitCount"
Query=SELECT System.ItemPathDisplay,System.Search.GatherTime,System.Search.HitCount  FROM SystemIndex        where contains(*,'snmpSubscriber') And System.Kind !='email'    order by System.Search.HitCount
C:\kinoue\materials\Exadata.Admin.Partners\e13862.pdf;2014/01/04 18:33:49;4;
C:\kinoue\materials\Exadata.Admin.Partners\e23333.pdf;2014/01/04 0:48:43;8;
C:\kinoue\materials\Exadata.Admin.Partners\e27442.pdf;2014/01/04 0:48:37;11;
C:\kinoue\materials\Exadata.Admin.Partners\e13874.pdf;2014/01/04 0:50:17;12;
C:\kinoue\materials\Exadata.Admin.Partners\e13861.pdf;2014/01/04 0:50:32;16;

BTW, Oracle Database has had full-text search capability for many releases, and its CONTAINS() operator works similarly.

Oracle Text SQL Statements and Operators

Use the CONTAINS operator in the WHERE clause of a SELECT statement to specify the query expression for a Text query.
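As a minimal hedged sketch (the documents table, doc_text column, and index name are hypothetical), an equivalent Oracle Text query might look like this, given a CONTEXT index on the column:

CREATE INDEX doc_idx ON documents(doc_text) INDEXTYPE IS CTXSYS.CONTEXT;

SELECT id, SCORE(1) AS hit_score
  FROM documents
 WHERE CONTAINS(doc_text, 'snmpSubscriber', 1) > 0
 ORDER BY SCORE(1) DESC;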

ADF Architecture TV in 2014 - bonus episodes on designing for accessibility


The ADF Architecture TV channel on YouTube kicks off for 2014 with two bonus episodes this week on designing your ADF applications to be accessible.  You can view the first bonus episode here.

In 2013 the channel aired just on 24 weekly episodes.  To date the channel has just under 2000 subscribers and 18000 views.  As far as we're concerned, that's not too shabby for a TV series launched mid 2013, recorded with nothing but a couple of webcams, some homemade lighting, and a considerable amount of coffee.

With an eye to continuing our support of customers, and giving all those YouTube cat videos a run for their money in 2014, the ADF product management team has approximately another 50 episodes of ADF topics to go.  You can catch the entire episode index, both current and future episodes, on the ADF Architecture Square.

We hope you find the content useful and we wish you the best for 2014. 

Image courtesy of Danilo Rizzuti / FreeDigitalPhotos.net

BPM Auditing Demystified by Mark Foster


I have heard from a couple of customers recently asking about BPM audit table growth, specifically BPM_AUDIT_QUERY. This led me to investigate the impact of the various audit levels in SOA/BPM on these tables and to propose options to them.

It is important to note up-front that BPM is a human-centric workflow application and therefore should be expected to audit often and in detail. The reality is that business users probably will want to know who did what and when, and also who did not do what when they were supposed to. BPM auditing is very rich and can provide this kind of information and more. The “downside” of this is that audit tables can grow at a faster rate than expected, and BPM_AUDIT_QUERY is normally the most prominent of these.

Clearly there are well-documented strategies for archiving/purging and partitioning which can control/limit the impact of table growth, but there may also be simple changes to the BPM audit settings which can prove beneficial in certain business situations.

Audit Settings

There are essentially three places where the auditing of BPM applications can be controlled. Read the full article here.

SOA & BPM Partner Community

For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.


WebLogic 12.1.2 Installation in VirtualBox with 0 MHz by Frank Munz


The Situation

I just looked at a problem at a customer site installing WebLogic 12.1.2 in a VirtualBox environment. WebLogic 12.1.2 uses - unlike all previous versions of WebLogic - the Oracle Universal Installer (OUI), which at first seemed to be the problem.

The Problem
Here is what happened in detail:

  • The installation failed because the OUI installer checks the prerequisites and reported all CPUs with 0 MHz. For sure, this is not good enough (even with all the clever energy-saving techniques in modern CPUs).
  • The key question is whether OUI is buggy or the problem is somewhere else, e.g. in the OS or virtualization layer.

Solution
$ grep MHz /proc/cpuinfo

indeed reports 0 MHz for all CPUs. So the problem is not related to OUI but is a VirtualBox issue. Still a weird problem. Why would CentOS be running with 0 MHz?

  • VirtualBox might report 0 MHz for your CPU if you selected the wrong chipset for your machine (in our case the developer’s HP laptop). Note that this means exactly the same VirtualBox image may run fine on different hardware!
    To fix it, stop the running guest and change the chipset setting in VirtualBox Manager from the default PIIX3 to the non-default ICH9.


WebLogic Partner Community

For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center.


Need to Maintain Assets Better? Join Us at the 2014 Oracle Value Chain Summit


The 2014 Oracle Value Chain Summit will be held Monday through Wednesday, February 3 – 5, 2014, in San Jose, California. This event provides a venue to learn about all things value chain, including asset maintenance, inventory management, and procurement. In particular, facility and equipment maintenance users will have the chance to network and share ideas about administering asset maintenance and effectively utilizing the PeopleSoft Maintenance Management and Supplier Relationship Management applications. We have a strong lineup of experienced PeopleSoft customers slated to share their knowledge in a series of interactive sessions so that you can gain insight into how to best implement, use, and manage the applications. This is the only event in which your facility and asset maintenance teams can talk nothing but maintenance for 3 days. If you would like more information about the event, please visit the event site at http://www.oracle.com/goto/ovcs. Customers, prospective customers, Oracle partners, and Oracle employees can attend.

Two other important points: on Thursday, February 6th, the day after the Summit, we will host an in-person meeting of the PeopleSoft Maintenance Management Focus Group in the morning and a complimentary, half-day training on the use of the PeopleSoft Maintenance Management application in the afternoon. The Focus Group will review product roadmap ideas and validate designs, and the in-person training will serve as a good refresher on product functionality as well as a way to learn about features that you don't currently use. These events are for existing PeopleSoft Maintenance Management customers only and are by invitation. If you are an existing customer of PeopleSoft Maintenance Management and would like more information about the focus group and training, please contact Loida Chez, Mike Madden, or Mark Rosenberg.

We look forward to seeing you there!


Specialization Catalog – Recently Updated!


The updated Specialization Catalog offers a good overview of all qualifying specializations launched and planned as part of the Specialization Program. Highlights:

  • Oracle Database Performance and Tuning - specialization criteria added
  • Oracle Application Development Framework 12c - specialization criteria added 

December in Review


Content Highlights

The blog series on how to integrate with Fusion Applications using web services focused on using PL/SQL as well as .NET via both HttpWebRequest and WebReference. Similarly the Fusion Concepts series continued by explaining the essentials for both Lookups and Profiles for those newer to Oracle Applications.

YouTube Channel videos were posted on more custom BI reporting, using profile options, setting up conditional formatting, and using custom security - an area that will have its own playlist soon!

From Other Teams

Our colleagues in Support provided the latest installment of their Customizations Advisor Webcast (available for replay), and we summarized the great content for Fusion Applications Developers on the Oracle Learning Library in this post.

The Apps User Experience folks posted several interesting articles in December, including insights into how Oracle Social Network is embedded in Fusion Applications and how they're using real feedback in developing features to support tailoring the Simplified UI in future releases. They also worked closely with the ADF Product Management team on a new set of ADF Mobile design patterns and guidelines which were recently released in a rich wiki format here.

OTN provided some great new OnDemand content in December, including ADF Development - Web, Mobile and Beyond, and Going Mobile with ADF: Programmatically Invoking SOAP Web Services with Complex Types. So whilst not Fusion Applications specific, it's interesting new content for us working in this field.

In the integration realm, a new OTN podcast covering Oracle SOA B2B Integration may also be of interest, especially to on-premises developers. In addition, a new Oracle Support KM note that explains all the Oracle Fusion CRM Web Services is now available (Doc ID 1354841.1).

Events

With the holidays, December was a bit quieter for events; however, in the UK the Tech13 conference was held by the Oracle User Group, with over 1000 attendees from around Europe. Our blog post here summarizes the highlights, and their great infographic below gives a nice overview.


Oracle Buys Responsys


On December 20, 2013, Oracle announced that it has entered into an agreement to acquire Responsys, the leading provider of enterprise-scale cloud-based business to consumer (B2C) marketing software. The proposed transaction is subject to Responsys stockholders tendering a majority of Responsys' shares and vested equity incentive awards in the tender offer, certain regulatory approvals and other customary closing conditions, and is expected to close in the first half of 2014. Until the transaction closes, Oracle and Responsys will continue to operate independently, and it is business as usual. Learn More

P6 EPPM and P6 Professional


Happy New Year, everyone! We look forward to working with you again this year.

For our first article of 2014, we'd like to revisit the product editions (packages) of Primavera P6.

The Primavera product line has a 30-year history, but the P6 product appeared in 2007, and as of early 2014 the latest release is R8.3. The first P6 used a client/server architecture; starting with R7, a web-based application was added alongside the existing client/server type.

For those who have not yet seen or tried the web architecture: the two types are not mutually exclusive and can be mixed. That is, the same project data can be accessed from both the client/server client and the web client. Since the client/server and web clients have different user interfaces (UIs), you can choose whichever suits each UI's characteristics.

This mixed-client deployment is the form Primavera intended for enterprise-level use, with the web application expected to be used especially by management. That said, through repeated enhancements since then, the web application has steadily gained finer-grained features so that project managers and project members can use it as well.

Today, the two applications are licensed separately and are organized as follows:

  • Web type: product name "P6 Enterprise Project Portfolio Management" (commonly "P6 EPPM")
  • Client/server type: product name "P6 Professional Project Management" (commonly "P6 Professional")

Note that P6 EPPM includes the right to use P6 Professional.

Most customers who have used P6 for a long time use the traditional application, i.e., the client/server P6 Professional, but over the past few years mixed deployments that also include P6 EPPM have been increasing overseas, and cases are gradually starting to appear in Japan as well.

Each has its strengths, and both continue to be enhanced while building on each other's advantages. Please look forward to further enhancements.

MySQL Cluster, Shared-Nothing Clustering and Auto-Sharding


MySQL Cluster is a technology providing shared-nothing clustering and auto-sharding for the MySQL database management system. It is designed to provide high availability and high throughput with low latency, while allowing for near linear scalability.
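As a small hedged illustration (the table and columns are ours), auto-sharding requires nothing more than creating a table with the NDB storage engine; by default, rows are then distributed across the data nodes by a hash of the primary key:

CREATE TABLE account (
  id      INT NOT NULL PRIMARY KEY,   -- rows are hash-partitioned on this key
  balance DECIMAL(12,2)
) ENGINE=NDBCLUSTER;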

To learn more about MySQL Cluster, take the 3-day MySQL Cluster training course. Below is a selection of events already on the schedule for this course.

 Location                          Date              Delivery Language
 Berlin, Germany                   10 February 2014  German
 Munich, Germany                   14 April 2014     German
 Jakarta Barat, Indonesia          27 January 2014   English
 Seoul, Korea                      24 February 2014  Korean
 Petaling Jaya, Malaysia           20 October 2014   English
 Warsaw, Poland                    12 March 2014     Polish
 Bangkok, Thailand                 28 January 2014   English
 San Francisco, CA, United States  28 May 2014       English
To register for an event, to request an additional event, to learn more about this course or about other courses on the authentic MySQL curriculum, go to http://education.oracle.com/mysql.
