Channel: Oracle Bloggers

Social Listening: China’s Talking, Can You Hear Them?


Hopefully, we’ve come to understand the value of social listening and social monitoring. It’s how we as brands and organizations learn what people are saying about us across the social web, and how we get to know our customers intimately, learning their values and expectations. It’s what allows us to respond in timely, relevant ways, driving new customers, referrals, loyalty, and increased sales.

Naturally, those are the kinds of benefits you’d like to apply to the largest, most socially active and fastest growing market on the planet, right? That would be China. And if you think you can’t listen to what’s being said about you there…you can.

China has the most active social media base plus the biggest Internet, mobile, and social media population on the globe. Four million Internet users are added per month, pushing that population to an estimated 800 million in 2015. There are an estimated 547 million social users and 420 million mobile web users. Much of the growth is fueled by rural and middle-class users; 97% of the Chinese middle class now owns a smartphone.

Back to the “active” part. A McKinsey report shows 91% of Internet-connected Chinese visited a social site. Compare that to 30% in Japan, 67% in the US, and 70% in South Korea. Social sharing in China went up 60% in 2012. During the 2012 Olympic Games opening ceremony, Twitter recorded almost 10 million related mentions. But China’s Twitter-like micro-blogging network Sina Weibo recorded 119 million. Incitez found that Chinese consumers spend more time on social sites (46 minutes a day) than consumers in any other country.

So yeah, it’s big. But does that represent a legitimate social opportunity for brands? Socially connected consumer behavior in China isn’t much different from what we see elsewhere. Chinese users are more likely to think about buying a product if it’s mentioned on social, and more likely to buy if a connection recommends it. On average, 66% of Chinese social users follow brands, and the average user follows 6.7 of them. And yes, brands are well aware; over a thousand already have a presence on Sina Weibo.

And don’t forget that “active” part. An oral care product that executed a campaign on Chinese location-based network Jiepang gained over 846,000 branded user-generated posts, creating 2.54 million earned media impressions…for $60k US. Monthly sales increased 23% during the campaign. Put that in your social ROI folder.

So if the opportunities are huge, and the social users there are highly active, how will you listen across social in China to surface those opportunities? The answer is powerful social listening technology that spans global languages and social sites. Oracle's Social Engagement & Monitoring (SE&M) product, part of the overall Social Relationship Management (SRM) platform, now lets you listen in Simplified Chinese, Portuguese, and Spanish, with current and planned support for Chinese social networks and sources, as well as Latin America's Reclame Aqui and Vostu social networks. It’s the only product you’ll find with Latent Semantic Analysis (LSA) in multiple languages. LSA lets you identify the messages you want to see, filter out irrelevant posts, and get a clear picture of the social content you’re examining. That way, you can spot, and do something about, the messages that matter.

SE&M also gives you a deeper look into a conversation, such as consumer interest, intent, or psychographics. If you’re multinational or based in the Chinese or Latin American markets, that’s potential gold. Of course, the whole SRM platform offers a fully translated user interface in 31 languages, now including Chinese, Portuguese, and Spanish. We’re global that way. And even more listening languages are on the way to help you mine fans and leads.

For a good first step, how about a few infographics on getting started with social relationship management? Pick a language.

English
Spanish
Brazilian Portuguese
Portuguese

In her recent presentation at Oracle OpenWorld Shanghai, Meg Bear, VP of Development for Oracle Social Cloud, pointed out how crucial it is for global brands to connect, listen, learn, and engage with China, home to over half of the world’s top 15 social networks. Eyes and ears are turning to digital places like Tencent Weibo, Sina Weibo, Renren, Qzone, and the fast-growing mobile messaging platform WeChat.

The volume of potential data is significant. And just like Americans, the Chinese fully expect you as brands to listen to that data, understand their needs, and deliver stellar user experiences in return.

@mikestiles
Photo: stock.xchng


No DB access? Use REST

The problem: there's an arbitrarily sized image in a database which needed to be made square, resized into four sizes, and given a border color-matched to the picture (like iTunes albums). Also, do it in batches so as not to cause unneeded load on the database. Then there's the minor detail that there's no direct SQL*Net access to the database. Here's the current result. The original picture was an now quite
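The batching requirement above can be sketched in a few lines. This is a hedged sketch, not the post's actual code: the endpoint URL, batch size, and ID range are all hypothetical, and the REST call itself is left as a comment.

```python
from itertools import islice

def batches(ids, size):
    """Yield successive fixed-size batches from a sequence of image IDs."""
    it = iter(ids)
    while chunk := list(islice(it, size)):
        yield chunk

# Process 1000 hypothetical image IDs, 50 per batch, so the REST
# endpoint (and the database behind it) is never flooded at once.
all_ids = list(range(1000))
for batch in batches(all_ids, 50):
    for image_id in batch:
        # resp = requests.get(f"{BASE_URL}/images/{image_id}")  # hypothetical REST endpoint
        pass
```

The point of the sketch is simply that batch size, not total volume, determines the instantaneous load on the database behind the REST service.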

SQL Developer v4 Quick Hit: Split Worksheets


One of the cooler features SQL Developer offers is the ‘split document.’ If you don’t know what I’m talking about, please take 90 seconds and read this short post first.

Ok, well that worked well for pretty much every type of editor in SQL Developer save one: the SQL Worksheet.

Let’s say you wanted more ‘type and read’ space for your script or statement, but you also didn’t want to lose your results grid or explain plan. Well now in version 4.0, you can split your worksheet too!

Right click on the worksheet tab to split it


Once it’s split, you’ll either have a left-right or top-bottom clone of the editor portion of the worksheet to work on your code. In the ‘guts’ of the application, that editor space is tied to the same buffer or file. What you type in one, shows in the other. What you save in one, is saved to the other – it’s just an identical view of the same text.

Ta-da! I've hidden the editor in the original worksheet so I can see the full result set while I type in the new worksheet space.


What’s NOT ‘cloned’ is the output panels below, e.g. query results, explain plans, auto trace output, etc.

There are a lot of different use cases for this feature, and I’m going to let your imagination figure that out for yourself. Besides, I spoon-feed y’all too much as it is!

Here’s a quick demo for you to see for yourself in case you haven’t downloaded version 4 yet.

The text gets replicated but the output panels do not.


Operational Considerations and Troubleshooting Oracle Enterprise Manager 12c


Oracle Enterprise Manager (EM) 12c has become a valuable component in monitoring and administering an enterprise environment. The more critical the applications, servers, and services that are monitored and maintained via EM, the more critical the EM environment itself becomes. Therefore, EM must be as available as the most critical target it manages.


There are many areas that need to be discussed when talking about managing Enterprise Manager in a data center. Some of these are as follows:

• Recommendations for staffing roles and responsibilities for EM administration

• Understanding the components that make up an EM environment

• Backing up and monitoring EM itself

• Maintaining a healthy EM system

• Patching the EM components

• Troubleshooting and diagnosing guidelines

The Operational Considerations and Troubleshooting Oracle Enterprise Manager 12c whitepaper, available on the Enterprise Manager Maximum Availability Architecture (MAA) site, will help define administrator requirements and responsibilities. It provides guidance on setting up the proper monitoring and maintenance activities to keep Oracle Enterprise Manager 12c healthy and to ensure that EM stays highly available.

Meet the Oracle WebCenter Team: Michael Snow, Director of Product Marketing



Oracle WebCenter is at the front lines of business transformation, bringing together leading-edge Web, content, social, and collaboration technologies into a single, integrated portfolio. Given such rich terrain for innovation, Oracle WebCenter has attracted some of the best minds in the industry. To help readers get to know the Oracle WebCenter team, each issue we turn the spotlight on one of its members. This month, we talk to Michael Snow. As senior principal product marketing director, he and his team are focused on cross-pillar Oracle WebCenter initiatives.

Q. How did you land your role as director of product marketing for Oracle WebCenter?
A. My background is unusual in this industry, since my original career path was to be a photographer and fine artist. I studied at the California Institute of the Arts as well as Rhode Island School of Design. In fact, early in my career, one of my first gigs involved working on the stills for the original Star Wars trilogy at a lab in San Francisco in the early 80s.

I made the transition to technology back in the days of desktop publishing. Eventually, my technology experience, together with a passion for images, led to a gig as a sales engineer, first at MediaBin and subsequently, after an acquisition, at Interwoven, a leader in the burgeoning area of content and digital asset management. I spent the next eight years at Interwoven, which was eventually acquired by Autonomy. When I was recruited by Oracle in 2011, I hit it off instantly with my hiring manager and the other Oracle WebCenter team members, and I jumped at the chance to work with them.

Q. What is the thing you love most about your job?
A. I love talking to customers and partners and discovering the amazingly creative ways they're using Oracle WebCenter. There is a huge variety of use cases out there—and still lots of room for innovation. I also have to say I love the team I work with. We work collaboratively, solve problems as they arise, and then just keep moving onto the next challenge.

Q. How can our readers learn more about innovative Oracle WebCenter use cases?
A. One great way is via Oracle WebCenter in Action. It's a webcast series that focuses on real-world use cases told by Oracle WebCenter customers and partners.

Q. How do you keep up with your Oracle WebCenter colleagues?
A. Since our team is dispersed all over the world, I find myself increasingly looking to our own social initiatives, such as the Oracle WebCenter blog, as well as Twitter, Facebook, and LinkedIn. We all take turns contributing content, so there's always a fresh voice to hear. You can check us out at the blog or on Facebook.

Q. What blogger, analyst, or other thought leader do you consider a “must-read” in order to stay on top of your field?
A. I love to follow John Maeda, the president of Rhode Island School of Design. He's a hybrid designer/technologist who is passionate about bringing art and creativity back into science, math, and technology curriculums. I also keep my eye on Brian Solis. He's a digital analyst, sociologist, and futurist. He focuses on disruptive technologies, and he does a great job of stepping back to reconsider marketing and engagement in light of all the tools that are available today. You can see Brian in an Oracle WebCenter webcast, Digital Darwinism: How Brands Can Survive the Rapid Evolution of Society and Technology.

Q. What do you like to do when you’re not thinking about enterprise hardware and software?
A. I'm blessed that both my wife and I have lots of family nearby in the Boston area. Besides hanging out with family—including my two kids and our Labradoodle—I also try to get in a fair amount of photography, though it's always a struggle to find the time.

To learn more, watch a short video about Oracle WebCenter and visit Oracle WebCenter on oracle.com.

Tyrus 1.2


Another release cycle is finished, which allows me to present Tyrus in version 1.2. This version brings some bugfixes and features, for example improved Servlet integration, correct ByteBuffer handling when sending messages, improved client-side SSL ("wss://...") support, and important fixes for handling huge messages (client and server side).

As before, I will follow up with more blog posts about selected features later. Some of them are already described in the updated User Guide.

Complete list of bugfixes and new features

TYRUS-216
TYRUS-206
TYRUS-207
TYRUS-208
TYRUS-209
TYRUS-211
TYRUS-106
TYRUS-199
TYRUS-215
TYRUS-154
TYRUS-217
TYRUS-218
TYRUS-197
TYRUS-210
TYRUS-146
TYRUS-219

(You might see some of these as still open, but that's due to some issues with JIRA which should hopefully be fixed soon.)

Tyrus 1.2 is already integrated into the GlassFish trunk – you can download a nightly build or upgrade to the newer Tyrus manually (replace all Tyrus jars).


E-Business Suite Release 12.1.1 Consolidated Upgrade Patch 2 Now Available


Oracle E-Business Suite Release 12.1.1 Consolidated Upgrade Patch 2 (CUP2) is now available in My Oracle Support. This patch includes fixes and performance improvements for the scripts used to upgrade an EBS 11i environment to 12.1.1.

This patch is mandatory for customers who are upgrading to Release 12.1.1 from the following releases:

  • Oracle E-Business Suite Release 11i version 11.5.9 (base, CU1, CU2)
  • Oracle E-Business Suite Release 11i version 11.5.10 (base, CU1, CU2)

This patch includes all of the upgrade-related fixes released previously in Consolidated Upgrade Patch 1 (CUP1, Patch 7303029) and many additional upgrade-related fixes released since March 2010.

You can download it here:

Link to download EBS CUP2 patch 16791553



What is Oracle E-Business Suite Consolidated Upgrade Patch 2 for Release 12.1.1?

The Consolidated Upgrade Patch 2 (CUP2) for Release 12.1.1 combines critical upgrade error corrections and upgrade performance improvements from Release 11i into a consolidated suite-wide patch.

Who should use it?

Customers who are upgrading to Release 12.1.1 from Release 11.5.9 (base, CU1, CU2) or Release 11.5.10 (base, CU1, CU2) should apply Release 12.1.1 CUP2.

How does it differ from the Family Consolidated Upgrade Patch (FCUP) in Release 11i?

In Release 11i, Family Consolidated Upgrade Patches (FCUP) were the release vehicles used to ship consolidated upgrade-related patches from all products within a product family.  In R12, the term Consolidated Upgrade Patch (CUP) has been coined to ship critical upgrade error corrections and upgrade performance improvements across all the product families in Oracle E-Business suite.

How do you apply Release 12.1.1 CUP2?

For instructions on applying this patch, see the "Notes for Upgrade Customers" section in:

Can this patch be applied by customers who are upgrading to Release 12.1.1 from an earlier version of Release 12?

No.  Release 12.1.1 CUP2 is applicable only if you are upgrading your E-Business Suite Release 11i instance to Release 12.1.1.  If your Oracle E-Business Suite instance is already at Release 12 or higher (e.g. Release 12.0.4, 12.0.6), you should not apply Release 12.1.1 CUP2.

Can I apply Release 12.1.1 CUP2 to Release 12.1.1?

No.  If your environment is already at the Release 12.1.1 level, you do not need this patchset.  You should apply Release 12.1.1 CUP2 only while upgrading a Release 11i Oracle E-Business Suite instance to Release 12.1.1.

Is Release 12.1.1 CUP2 mandatory for upgrading to Release 12.1.1 if I have done multiple test upgrades and am close to "Go-Live"?

If you have already performed multiple test upgrades without Release 12.1.1 CUP2 and are close to completing User Acceptance Testing prior to your actual production upgrade, it is not mandatory to apply the patch.

Oracle will continue to provide patches for Oracle E-Business Suite Release 12.1.1 environments that do not have the Release 12.1.1 CUP2 patchset.

How is the Consolidated Upgrade Patch (CUP) different from other release vehicles?

With the introduction of this patchset, there are now five types of release vehicles for the E-Business Suite:
  1. Rapid Install
  2. Maintenance Pack
  3. Product Family Release Update Pack
  4. E-Business Suite Release Update Pack
  5. Consolidated Upgrade Patch
Rapid Install

With Rapid Install (RI), you can install a fully configured Oracle E-Business Suite system, lay down the file system and configure server processes for an upgraded system, or install a new database tier or application tier technology stack.

Release 12 Rapid Install versions are
  • Release 12.0.0
  • Release 12.0.4
  • Release 12.1.1
Maintenance Pack

A Maintenance Pack (MP) is an aggregation of patches for all products in Oracle E-Business Suite. It is a feature-rich release that combines new functionality with error corrections, statutory/regulatory updates, and other enhancements.

The Release 12.1.1 Maintenance Pack can be used to upgrade an existing Oracle E-Business Suite Release 12.0.x environment to Release 12.1.1.

Product Family Release Update Pack

A Product Family Release Update Pack (RUP) is an aggregation of patches on a given codeline created for all products in a specific product family for a specific point release. RUPs are generally cumulative in nature.

Examples of Product Family Release Update Packs released in Release 12.0:
  • R12.ATG_PF.A.Delta.4
  • R12.FIN_PF.A.Delta.5
  • R12.ATG_PF.A.Delta.6
  • R12.HR_PF.A.Delta.7
Examples of Product Family Release Update Packs released in Release 12.1:
  • R12.AD_PF.B.Delta.2
  • R12.ATG_PF.B.Delta.2
  • R12.CC_PF.B.Delta.2
  • R12.SCM_PF.B.Delta.2
E-Business Suite Release Update Pack

An E-Business Suite Release Update Pack (RUP) is an aggregation of product or product family RUPs on a given codeline created across Oracle E-Business Suite after the initial release. Like a Product Family Release Update Pack, an E-Business Suite Release Update Pack is cumulative in nature.

Examples of E-Business Suite Release Update Packs
  • Release 12.0.4
  • Release 12.0.6
  • Release 12.1.2
Consolidated Upgrade Patch

A Consolidated Upgrade Patch is a collection of critical fixes that improve the performance and stability of the upgrade process from Release 11i to Release 12.1.1.

InfoQ looking for users to categorize NoSQL Tech use


Often, when people talk about NoSQL technologies, there is an attempt to categorize the solutions.  In a new Adoption Trends breakdown, InfoQ also takes this tack, providing the following categorizations: Columnar, Document, Graph, In-Memory Grid, Key-Value.  I think this definitely has some utility, yet in another respect it misses the main point about this technology segment.  These technologies have come into existence to fulfill one primary and one ancillary core ability.  The primary ability, surfaced by Amazon, LinkedIn, Facebook, etc., is the ability to scale and remain available in the face of tremendous data volumes and concurrent users.  The ancillary ability is to provide more agility for the line of business, which is constantly adjusting its solutions to changing needs and its understanding of the consumer.  What considerations should drive the thought process of adopting NoSQL?

Each of the NoSQL technologies from that list of categories has, from its database perspective, a key underlying principle that enables those core NoSQL abilities: a break away from server-side relationship management, moving responsibility for data constraints to the application tier (versus the database) via a key-value based distribution model.  It is that key-value paradigm that enables the scale-out architecture through key distribution, so in some sense they are all key-value stores.  In fact, it is for that reason that we've seen, for instance, Cassandra evolve its value implementation several times over the last couple of years, from binary to super column to table structure.  Had it not been for the underlying key-value nature of the implementation, it could never have undergone such drastic changes in data storage format in so short a period of time.

This is why the Oracle NoSQL Database (OnDB) was implemented as a key-value store.  It provides the ability to layer multiple value abstractions on top of the core key-based distribution and scale model.  Today it supports two value abstractions, Binary and JSON, with a third, Table, on the way.  Each value abstraction provides different utility in application implementation and offers the best runtime/storage characteristics for a particular data use.  For example, when storing image and video data, the Binary abstraction is best suited, especially when it is overlaid with the OnDB streaming interfaces.  However, when you want to store nested data structures with internal value interdependencies and sub-field updates, JSON is a great value abstraction.  Finally, if you need to model data in a format amenable to integrated systems and capable of supporting a richer set of query semantics, the Table abstraction does the job best.

By the way, I might argue that the Graph category of NoSQL database is really an application layer above a NoSQL database.  It's the reason we've seen NoSQL databases like Objectivity enter the Graph database category, and why you will find that OnDB supports graph storage and retrieval for Oracle Spatial and Graph ...but that is a different blog topic altogether.

Anyway, the point I am trying to make is that companies' use of data will vary greatly.  The real category to which all of the NoSQL database implementations belong is the key-value category.  Bringing in NoSQL technology that provides a range of value options, which can be selected and intermixed to achieve the optimal solution for a given application, will provide the greatest flexibility and reduction of risk.  The scalability of a key-based distribution architecture should be ever present, but the choice of value abstraction will most surely vary for each solution space.  This is something project leads and managers adopting these new technologies should reflect on as they invest their resources and time in learning and adopting a particular product: the repeatability and applicability of that investment for unforeseen future work.
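The layering idea above can be sketched as a toy key-value store with pluggable value abstractions. This is only an illustration of the principle, not the actual Oracle NoSQL Database API; every class and method name here is invented.

```python
import json

class KVStore:
    """Toy key-value store: distribution happens by key, while the
    value format is a pluggable abstraction (codec) layered on top."""
    def __init__(self, codec):
        self._data = {}     # stands in for the distributed key space
        self._codec = codec
    def put(self, key, value):
        self._data[key] = self._codec.encode(value)
    def get(self, key):
        return self._codec.decode(self._data[key])

class BinaryCodec:
    """Raw bytes pass straight through -- suited to images and video."""
    encode = staticmethod(lambda v: bytes(v))
    decode = staticmethod(lambda b: b)

class JsonCodec:
    """Nested structures with internal interdependencies."""
    encode = staticmethod(lambda v: json.dumps(v).encode())
    decode = staticmethod(lambda b: json.loads(b))

# Same key-based store, two different value abstractions.
jstore = KVStore(JsonCodec)
jstore.put("user:1", {"name": "Ada", "tags": ["a", "b"]})
bstore = KVStore(BinaryCodec)
bstore.put("img:1", b"\x00\x01")
```

The design point is that swapping the codec never touches the key layer, which is why a key-value core can absorb drastic changes in value format.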

By the way, the InfoQ Adoption Trends article includes a survey on current and future use of the many vendor technologies.  I encourage everyone to take the time to visit the site and share your position on this important area of data management.


Friday Tips #38


Happy Friday! This week's tip is on Oracle VM Server for SPARC, and it's just a quick pointer to a really great paper. 

A common scenario for Oracle Database users is to run the database on Oracle Solaris and use Oracle VM Server for SPARC as the virtualization technology. Of course, one of the great benefits of virtualization is portability, but how do you move an instance of the database to take a physical server offline (to add more memory, for example)? The Oracle white paper Increasing Application Availability by Using the Oracle VM Server for SPARC Live Migration Feature: An Oracle Database Example will tell you the hows and whys of this important process.

See you all next week!

-Chris

How to Load Oracle Tables From Hadoop Tutorial (Part 3 - Direct Path)


Oracle Loader for Hadoop: OCI Direct Path

In the previous tutorial post we discussed the basic mechanics and structure of an OLH job using JDBC. In this post we move on to the more mainstream method used for OLH, specifically OCI Direct Path. The focus here is on loading Oracle tables with really big data, and we will discuss how to do this efficiently, and provide some basic rules for optimizing load performance. We will discuss the mechanics of submitting an OLH job, and then take a dive into why this OLH load method is what you want to use for most situations.

The Structure of an OLH Command using OCI Direct Path

The structure of an OLH command using OCI Direct Path is very similar to the structure we described for submitting a JDBC load:

$HADOOP_HOME/bin/hadoop jar \
  $OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader \
  -D oracle.hadoop.loader.jobName=OLHP_fivdti_dtext_oci_10_723 \
  -D oracle.hadoop.loader.loaderMapFile=file:/tmp/loaderMap_fivdti.xml \
  -D mapred.reduce.tasks=10 \
  -D mapred.input.dir=/user/olh_performance/fivdti/56000000_90 \
  -D mapred.output.dir=/user/oracle/olh_test/results/fivdti/723 \
  -conf /tmp/oracle_connection.xml \
  -conf /tmp/dtextInput.xml \
  -conf /tmp/dlOutput.xml

Aside from cosmetic changes (e.g. the job name), the key differences between this and the JDBC command discussed in lesson 2 are a non-zero value for the “mapred.reduce.tasks” property and a different conf file for specifying the type of output.

The new file we are using, “dlOutput.xml”, specifies the output format is OCI Direct Path (and not JDBC):

<configuration>
  <property>
    <name>mapreduce.outputformat.class</name>
    <value>oracle.hadoop.loader.lib.output.OCIOutputFormat</value>
  </property>
</configuration>

So switching from JDBC to OCI Direct Path is trivial. A little less trivial is why OCI Direct Path is preferred and what rules you should know to make this type of loading perform well and to maximize efficiency.

Rule 1: When using OCI Direct Path the target table must be partitioned.

This might sound like a constraint, but practically speaking it isn’t.

Exploiting Oracle Table Partitioning


A full understanding of Oracle table partitioning goes beyond the scope of this tutorial, and you would be advised to read the related documentation that covers the subject in depth. But for the sake of readers who live mostly in the world of Hadoop and have a limited understanding of Oracle, let’s briefly outline the basics of what Oracle table partitioning is and why it is essential to understand.

Rule 2: If you are loading really big data into an Oracle table, your Oracle table will want to be partitioned.

The reason is pretty simple. Table partitions are Oracle’s method of breaking up a table into workloads that can be optimized transparently by SQL. In the same way MapReduce jobs scale out by breaking up a workload into data blocks and scheduling tasks to work in parallel against data blocks, Oracle SQL does the same with partitions. This is not only true for querying but it is also true for doing big loads.

Let’s look at the “fivdti” table we have been using. A flat table would be declared like this:

CREATE TABLE fivdti
  (f1 NUMBER,
  i2 INT,
  v3 VARCHAR2(50),
  d4 DATE,
  t5 TIMESTAMP,
  v6 VARCHAR2(200),
  i7 INT);

A partitioned table declaration, using a hash partitioning scheme would look like this:

CREATE TABLE fivdti
  (f1 NUMBER,
  i2 INT,
  v3 VARCHAR2(50),
  d4 DATE,
  t5 TIMESTAMP,
  v6 VARCHAR2(200),
  i7 INT)
PARTITION BY HASH(i7)
PARTITIONS 10
PARALLEL;

With the simple addition of the partition clause at the bottom of the CREATE TABLE clause, you’ve empowered Oracle to exploit big optimizations for processing.  The clause tells Oracle that the table should be divided into 10 partitions, and the partition for a row is determined by performing a hash operation on the value of the i7 column. If you were to compare load rates using OLH, SQL*Loader, or SQL for the flat table and the table that is partitioned, you would typically see a dramatic difference that favors partitioning. The same holds true for SQL. When querying partitioned tables, SQL can do all sorts of tricks under the covers to use parallel query technology that subdivides a job and maximizes parallel CPU and IO.

Oracle table partitioning comes in various flavors such as hash, list, and range, and these can also be combined into composite schemes. OLH supports all partitioning methods except reference partitioning and virtual column-based partitioning.

Advantages of OCI Direct Path

OCI Direct Path is a well-established method of loading data into Oracle using OCI (Oracle’s C based client interface) or SQL*Loader. It is a code path dedicated to bulk loading and its key advantage is that it bypasses Oracle SQL, which makes it very efficient.

Virtually all relational database systems including Oracle are built on two layers of software: one for managing data at the row level (i.e. SQL), and another for managing data at the block level (i.e. storage). Loading through SQL (i.e. Oracle’s front door) is expensive. It’s okay when one is inserting a singleton row or a small array of rows, but it uses a lot of code path before the rows are passed onto storage and are copied into data blocks that ultimately get written to disk.

OCI Direct Path load is a shortcut with an API whose code path, both on the client and in the Oracle database, is streamlined for loading. It does the work of preparing rows for storage in data blocks using client resources. (In our case the client is OLH running in Hadoop.)  It then sends blocks of rows to Oracle’s storage layer in a form close to what will be written to disk, on a code path that minimizes contention: rows don’t need to pass through Oracle's buffer cache layer. It also maximizes parallelism for multi-block IO.  OCI Direct Path can also take advantage of presorted data, which helps if it needs to build indexes for a table.

Running an OLH Job With OCI Direct Path

This pretty much looks the same as running a job with JDBC, except that the reduce phase always executes (since the target table is partitioned), and it is much faster. For both JDBC and OCI Direct Path, the actual loading of the Oracle table occurs when the reduce phase is 67% complete. For large loads approaching or exceeding a terabyte you will see a big difference in the time spent in this phase: OCI Direct Path is much faster than JDBC.

Are You Balanced?


Rule 3: After running an OLH load, check out the Oraloader report to see if it is balanced.

After the run of a successful OLH job, the output directory (specified by the “mapred.output.dir” property) generates an elegant report called “oraloader-report.txt” that details the work done in the reduce phase. It identifies reducer tasks that ran and associated statistics of their workload: bytes loaded, records loaded, and duration of the tasks (in seconds). If the load is not balanced, the values for bytes and duration will vary significantly between reduce tasks, and you will want to make adjustments.
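As a sketch of what "checking for balance" means in practice, the helper below flags skew across per-task durations. The 25% tolerance and the sample numbers are illustrative assumptions, not values taken from an actual oraloader-report.txt, and the parsing of the report itself is omitted since its format varies by OLH version.

```python
def is_balanced(durations, tolerance=0.25):
    """Consider the reduce load balanced if no task's duration deviates
    from the mean by more than `tolerance` (25% by default)."""
    mean = sum(durations) / len(durations)
    return all(abs(d - mean) <= tolerance * mean for d in durations)

# Durations in seconds for four reducer tasks (made-up numbers).
print(is_balanced([600, 610, 590, 605]))   # roughly equal: balanced
print(is_balanced([600, 610, 590, 1800]))  # one straggler: skewed
```

The same check could be applied to the bytes-loaded and records-loaded columns of the report; a large spread in any of them is a hint to enable the Sampler or revisit the reducer-task count.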

Optimizing OLH and OCI Direct Path

Now we will discuss basic steps to optimize OLH using OCI Direct Path:

· Choosing a good number for Reducer Tasks

· Enabling the OLH Sampler

· Finding the sweet spot for Hadoop Map Reduce payloads

· If possible load using SDP transport protocol

Choosing a Number for Reducer Tasks

Rule 4: When using OCI Direct Path you want to choose the number of reducer tasks to be close to a multiple of the number of reducer slots allocated on your Hadoop cluster.

Reducer slots in Hadoop mean the number of processes that can run in a Hadoop cluster at once, performing the reduce phase for an OLH job. The Hadoop Map/Reduce Administration UI displays this as Reduce Task Capacity. Typically you choose some multiple of the number of reducer slots available. For example if the reduce task capacity in the Hadoop cluster is 50, then a mapred.reduce.tasks value of 50 or 100 should work well.

The purpose of this rule is to get reducers running and loading at the same time, and to make sure all available slots are being used. Not doing this can be costly. For example, suppose there are 50 reducer slots but you set the number of reducer tasks to 51. If the reduce loads are balanced, the 50 reducer slots will start and finish at roughly the same time, but you will then have to wait for the singleton 51st task to run, which doubles the time the reduce phase spends loading the data.

Rule 4 only works fully to your advantage when the data sets are balanced (i.e. you are using the Sampler) and your OLH job is not competing with other ongoing Map Reduce jobs that can steal reduce slots that you were expecting to use.  Note that Apache actually recommends a value close to a multiple of the number of reducer slots, for dealing with situations where reducers are not balanced.

This takes us to the next rule.

Rule 5: Always use the OLH Sampler.

The OLH Sampler


The OLH Sampler is an optional feature of OLH that does a great job of balancing the workloads of reducer tasks when partitions are not balanced.  (Note that the Sampler works with all OLH load methods, not just OCI Direct Path).  You can control the Sampler manually by setting the following property to “true” or “false” (for recent versions of OLH the Sampler is turned on by default):

-D oracle.hadoop.loader.sampler.enableSampling=true

For example, suppose I had a customer table partitioned using list partitioning on the fifty states of the United States. Most likely the partition representing California will be much larger than the one for New Hampshire. Without the OLH Sampler enabled, a single reducer task has the burden of publishing a whole partition: one reducer will have to publish the California records while another is tasked with publishing the records from New Hampshire. This causes skew, where some tasks have bigger workloads than others. The OLH Sampler addresses this pathology by breaking up large partitions into smaller, equal-sized units that can be dispatched evenly across the reducer tasks.

The overhead of the OLH Sampler is very small for big data payloads. A Hadoop Map Reduce job typically takes minutes or hours, while the Sampler overhead typically adds a few seconds. (OLH console output tells you at the outset if the Sampler is running and how much time it cost.) It runs at the beginning of the Map Reduce job and samples the dataset to determine the differences between partition sizes; it then creates a partitioning strategy that balances the reduce load evenly.

Another pathology that the Sampler addresses is when you have more available reducer slots than partitions in your table. For instance, suppose your table has 10 partitions but your Hadoop cluster has 50 reducer slots free. You would want to set the number of reduce tasks to take advantage of all these reducer slots to speed up the load.

-D mapred.reduce.tasks=50

But without the Sampler enabled this tuning knob would not have the desired effect. When the Sampler is not enabled, partitions are restricted to a single reducer task, which means that only 10 reducers will do real work, and the other 40 reduce slots will have nothing to do.
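The effect of the Sampler can be sketched as follows. This is not OLH's actual algorithm, which is internal to the loader; it just illustrates the idea of breaking oversized partitions into roughly average-sized chunks so that every reducer slot gets work:

```python
# Illustrative only: split partitions larger than the average per-reducer
# share into equal-sized pieces, so work can spread across all reducers.

def balanced_chunks(partition_sizes, num_reducers):
    target = sum(partition_sizes) / num_reducers  # average share per reducer
    chunks = []
    for size in partition_sizes:
        pieces = max(1, round(size / target))
        chunks.extend([size / pieces] * pieces)
    return chunks

# A "California" partition of 5000 units next to nine small partitions:
sizes = [5000] + [100] * 9
chunks = balanced_chunks(sizes, 10)
print(len(chunks))  # prints 17: the big partition was split into 8 pieces
```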

Based on our experience, the Sampler should be used virtually all the time. The only situation to be wary of is when the Hadoop input splits are clustered by the reduce key (e.g., the input data living in HDFS files is sorted by the value of the partition column). Under these circumstances loads might still be unbalanced. The work-around for clustered data is to force the Sampler to spend more time looking at the distribution of data by looking at more splits. (By default it looks at a minimum of five.) This is done by using the following property and setting <N> to a higher number.

-D oracle.hadoop.loader.sampler.minSplits=<N>

Again, the higher number will impose more Sampler overhead at the beginning of the job, but this should be rewarded with more efficient use of Hadoop resources.

Finding the Sweet Spot for Hadoop Map Reduce Payloads

Rule 6: Experiment with different sized payloads.

Hadoop is a great technology that does a good job of making sure that Map Reduce payloads scale. That being said, the resources of a Hadoop cluster are still finite, and there is a breaking point where load sizes are simply too big. Hadoop’s scaling typically breaks down in the reduce shuffle/sort stage, where there is a tremendous amount of disk and network IO going on within a Hadoop cluster to move sorted data to the designated systems where reducer tasks will do the actual loading. A telling sign is when your Hadoop job starts to suffer from failed and restarted task attempts in the reduce phase. The other obvious sign is that when you double your payload, the time to process the load increases by more than a factor of 2.

It’s a good idea to spend some time experimenting with different load sizes to see what your Hadoop configuration can handle. Obviously, if you break down a single big job into a series of smaller jobs, you will be paying a higher cost of overhead for starting up and tearing down multiple Map Reduce jobs.  That being said, breaking down a 90 minute OLH payload into three smaller 30 minute payloads is a perfectly reasonable strategy, since the startup/teardown overhead for running each OLH job is still very small compared to the total time running.
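A cheap way to read your timing experiments: if doubling the payload more than doubles the wall-clock time, the cluster has gone superlinear and you are past the sweet spot. A sketch, with made-up timings and a hypothetical slack factor for job startup/teardown overhead:

```python
def scales_linearly(time_x, time_2x, slack=1.1):
    """True if a doubled payload costs no more than ~2x the time.
    `slack` absorbs normal startup/teardown overhead (hypothetical value)."""
    return time_2x <= 2 * time_x * slack

# Hypothetical measurements, in minutes:
print(scales_linearly(30, 58))  # prints True: still in the linear regime
print(scales_linearly(30, 95))  # prints False: past the sweet spot; split the job
```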

Use the SDP Protocol on Exadata and the BDA


Rule 7: If you are using Oracle Exadata and Oracle BDA with Infiniband, use SDP protocol.

SDP is a network transport protocol supported for loading tables living in an Oracle Exadata machine with HDFS data living in Oracle BDA (Big Data Appliance). Exadata and BDA move data using Infiniband, which has very high throughput and low latency. Because Infiniband has such high bandwidth, conventional TCP sockets can become the bottleneck.

SDP is an alternative networking protocol that uses RDMA technology, which allows network interfaces to move data packets directly into RAM without involving the CPU. In other words, it doesn’t reproduce the network bottleneck that is seen when using TCP. In performance test runs we’ve found that using SDP improves the load stage of an OLH Direct Path job by ten to twenty percent.

If you are running OLH Direct Path jobs using Infiniband, you will want to take advantage of SDP. The way this is done is to configure the Exadata listeners with an SDP port, and to specify an additional Oracle connection descriptor dedicated to SDP when running OLH.

<property>
  <name>oracle.hadoop.loader.connection.oci_url</name>
  <value>
    (DESCRIPTION=(ADDRESS=(PROTOCOL=SDP)(HOST=192.168.40.200)(PORT=1523))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=dbm)))
  </value>
</property>

This esoteric property restricts SDP usage to the connections that OLH reduce tasks create to Oracle to execute OCI Direct Path loading. All other network activity uses standard TCP connections.


The ADF EMG day at Oracle Open World 2013 Sunday 22nd


I'm happy to say that, through the kind efforts of the ADF community volunteers, under the expert banner of ODTUG, the ADF EMG will be running another day of ADF sessions at Oracle Open World's user group Sunday, 22nd September. This adds another six ADF sessions and a whole day of solid ADF content for you to enjoy at the world's largest Oracle conference for developers.

The general announcement of sessions is covered on the ADF EMG forums, but a summary of the expert speakers and topics is here:

  • 8:00am - Oracle ADF Task Flows Beyond the 10-Minute Demo [UGF7001] - John King
  • 9:15am - Oracle on Your Browser or Phone: Design Patterns for Web and Mobile Oracle ADF Applications [UGF9898] - Floyd Teter & Lonneke Dikmans
  • 10:30am - ADF Performance Tuning War Stories [UGF2737] - Stephen Johnson, Frank Houweling, Eugene Fedorenko
  • 11:45am - Top 10 Web App Vulnerabilities, and Securing Them with ADF [UGF9900] - Brian Huff
  • 2:15pm - Worst Practices When Developing an ADF Application [UGF9860] - Paco van der Linden & Wilfred van der Deijl
  • 3:30pm - WebCenter & ADF - Responsive and Adaptive Design for Desktop, Mobile & Tablet [UGF9908] - John Sims
You can also view the sessions in the OOW content catalog, and check out all of the Oracle ADF content at Oracle OpenWorld 2013 too.

We hope you'll take the opportunity to join the ever growing ADF community at our largest ADF event. Come learn, share, and participate in something that started as a single session at OOW in 2008 with just 20 people and has grown, in 2013, into a whole day of ADF content for all attendees to enjoy.

NetBeans IDE for Groovy Purists


The Groovy support in NetBeans IDE is based on two use cases. In the first case, you're a Java programmer and want to use Groovy as a support language; therefore, in this scenario, you'll create a Java project and then add Groovy artifacts (Groovy classes and scripts) and go from there. In the second case, you're using Grails, in which case you'll go to the Groovy category in the New Project wizard (Ctrl-Shift-N) and create a Grails project and continue developing from that point onwards.

But what about if you don't care about Java and you don't care about Grails? You simply want to write Groovy scripts, compile them, debug them, and run them.

Download this plugin and install it into NetBeans IDE 7.3.1, which is the latest stable release of NetBeans IDE, though it may work in later versions of NetBeans IDE too:

http://plugins.netbeans.org/plugin/49928/?show=true

Either install the plugin into the All distribution of NetBeans IDE 7.3.1 or first install the "Groovy & Grails" plugin (via Tools | Plugins) into any other download bundles of NetBeans IDE.

Now you have this new project available in the New Project wizard:

(Sorry for typo "Wheen" in above, will fix in next release...)

Complete the wizard and you should see this:

In other words, you have no Java source files at all, just a Groovy script, and the Groovy JAR (which you can change in the Project Properties dialog) and the JDK (which can also be changed in the Project Properties dialog).

Open the file and you see a Groovy editor:

Right-click in the left margin (or click on a line number) and you can set a breakpoint, visualized by a red line. Then start the Debugger (there's a Debug menu in the main menubar, take a look there).

Hope that helps Groovy purists get started with NetBeans IDE! Feedback welcome, e.g., let me know what other content you'd like the Groovy project to have.

Now go here to read about the latest Groovy features in NetBeans IDE:

https://blogs.oracle.com/netbeansgroovy/

In other news: I'm on vacation until Monday 5 August. If you don't hear from me before then, now you know why!

Accessibility Update: The Oracle ETPM v2.3.1 VPAT is now available on oracle.com


Oracle Tax is committed to building accessible applications.  Oracle uses a VPAT (Voluntary Product Accessibility Template) to document the accessibility status of each product.  The VPAT was created through a partnership of the Information Technology Industry Council (ITI) and the U.S. General Services Administration (GSA) to provide a simple document that US Federal contracting and procurement officials can use to evaluate a product with respect to the provisions contained in Section 508 of the Rehabilitation Act.

The Oracle ETPM v2.3.1 VPAT is now available on oracle.com: 

http://www.oracle.com/us/corporate/accessibility/templates/t2-3509.html

Oracle Cloud Application Foundation Customers Share their stories! Join July 31st Launch to hear more


From advanced scientific research to secure digital interactions, from telecommunications to ocean carriers, Cloud Application Foundation customers are reaping the benefits of a Flexible, Proven and Integrated Cloud Platform.

In the past couple of weeks we’ve shared stories from some of our customers that were instrumental to making the Cloud Application Foundation 12c release possible. Join us for the July 31st launch event to learn more about this release.

These customers and many others have helped shape the future of the CAF Platform by participating in our WebLogic and Coherence Customer Advisory Boards and providing ongoing feedback to our Product Management team. You will get to hear some of them speak at the upcoming July 31st launch event.


So how are they using Cloud Application Foundation?

CERN is world-renowned for their scientific research, operating the world’s largest and most complex scientific instruments to study the basic constituents of matter - the fundamental particles. CERN utilizes WebLogic Server as its strategic platform for all developed software, which means support for Administration, Engineering and the Accelerator Center. Watch the video

Gemalto is the leader in making digital interactions secure and easy. Gemalto’s telecommunication business unit is using WebLogic for their web-based applications, business logic, JMS, and web services. Watch the video

OOCL is one of the world's largest integrated international container transportation, logistics and terminal companies. As one of Hong Kong's most recognized global brands, OOCL provides customers with fully-integrated logistics and containerized transportation services, with a network that encompasses Asia, Europe, North America and Australasia. OOCL engaged with Oracle on a multiple-year project to replace core ERP systems. They utilize Oracle WebLogic Suite, Coherence, RAC DB, TopLink, and Oracle Enterprise Manager to address their modern IT challenges. Read the Blog

TURKCELL is the leading communications and technology company in Turkey, with 34.9 million subscribers, and a leading regional player, with market leadership in five of the nine countries in which it operates and over 69 million subscribers in total. Turkcell's main business functions are CRM, Value Added Services (VAS), Billing/Charging, and Network operations, managing thousands of user sessions and millions of business transactions every day with 4000+ operating systems, 150+ Oracle databases in production, and 1700+ Oracle WebLogic Server instances. Recently, Turkcell adopted in-memory computing with Oracle Coherence to increase the performance of core applications (achieving more transactions per second) and to offload backend data. Read the Blog

Join us for the July 31st launch event to hear from more Cloud Application Foundation customers about their success with this platform.

Oracle Support Services Online Seminars (August 2013)



Through webinars, events, and pre-recorded sessions, Oracle Support Services provides our customers, partners, and Oracle employees with education on support policies, processes, and proactive tools. These seminars are designed to ensure customers get the maximum value from Oracle Support.

Online Seminars

Topic | Description | Date & Time | Registration

Finding Solutions Quickly in My Oracle Support

The "Finding Solutions in My Oracle Support" seminar is designed for customers who deal with day-to-day issues and questions that arise while using Oracle products. Even experienced My Oracle Support users may discover some useful tips and techniques in this seminar.

This seminar aims to give customers expertise in searching My Oracle Support, to show how to choose the best search technique for a specific information need, and to cover the recommended best practices for asking questions and finding solutions in the My Oracle Support communities. Duration: 30 minutes.

August 15, 14:00 | Register

Oracle Configuration Manager Basics

This seminar targets customers who are new to the Oracle Support environment and want to take advantage of the features of Oracle Configuration Manager.

This 30-minute webcast covers a feature overview, security, online and offline modes, the installation process, and attaching system details to service requests.

August 20, 10:00 | Register

Introduction to Premier Support

This seminar provides Oracle Premier Support best-practice information for customers who are new to the Oracle Support environment, or who need to understand the service process.

This 30-minute webcast provides an overview of Oracle's standard support services. During the presentation, attendees will learn about support policies, including the Lifetime Support policy, which helps maximize your investment and gives you better control over your upgrade strategy. A brief introduction to the My Oracle Support platform and support terminology, support best practices, and the service request escalation process will also be discussed.

August 22, 14:00 | Register

Database Upgrade

Database upgrade essentials: through a 25-minute presentation and a 5-minute Q&A session, this seminar walks you through the three key dates, two useful resources, and two real-life scenarios you need to know to complete a database upgrade successfully.

August 29, 14:00 | Register


You can also visit My Oracle Support to view other scheduled online sessions. To convert to your local time, please refer to the world clock.

Pre-recorded Sessions

Oracle offers a series of recommended training courses. Please refer to note 603505.1 for the list of pre-recorded sessions, a large portion of which are in Mandarin. You will need Internet Explorer and your My Oracle Support single sign-on account to view these recorded sessions.

Topic | Audience | Language | Play
Oracle Support Best Practices (New) | All customers | Chinese | Play
GetProactive: Resolve Fast (New) | All customers | Chinese | Play
WebLogic GC & OutOfMemory Diagnostic (New) | Middleware products | Chinese | Play
WLS Cluster Configuration and Problem Analysis (New) | Middleware products | Chinese | Play
Creating Customer Value | All customers | Chinese | Play
Oracle Support Basics | All customers | Chinese | Play
An Introduction to My Oracle Support | All customers | Chinese | Play
Service Request Management | All customers | Chinese | Play
Customer User Administration | All customers | Chinese | Play
Managing Favorites | All customers | Chinese | Play
Quick Search | All customers | Chinese | Play
Hot Topic Email | All customers | Chinese | Play
Patch and Update | All customers | Chinese | Play
Site Alert | All customers | Chinese | Play
Search and Browse Features in My Oracle Support | All customers | Chinese | Play
Why Use Configuration Manager in My Oracle Support | All customers | Chinese | Play
Enterprise Manager 11g and My Oracle Support | All customers | Chinese | Play
Oracle Collaborative Support | All customers | Chinese | Play
How to Escalate a Service Request within Oracle Support | All customers | Chinese | Play

If you have any questions, please submit them to us in the Support Training Community.




Oracle Grants TCKs for EclipseLink and Virgo


The TCK is a key piece of the puzzle in strongly safeguarding compatibility for anything Java. Generally speaking, companies that make money from compatible Java technology implementations have to pay license fees to run the TCK. This license fee is one of the few sources of money that helps pay the bills for the JCP/TCK process itself.

Nonetheless, Sun, and now Oracle, has always had a way to grant TCK scholarships to (primarily open source) non-profits and academic institutions. For example, Apache and OW2 have long had Java EE TCK scholarships to certify Geronimo and JOnAS. Oracle recently extended this gesture of good will to the open source community by granting TCK scholarships to the Eclipse Foundation for certifying EclipseLink and Virgo.

Most of you are probably already familiar with EclipseLink - it's the open source JPA reference implementation, seeded via a code contribution from Oracle TopLink. You are probably not as familiar with Virgo - it's an open source Java application server from the Eclipse Foundation. Most of you will remember that Virgo was created when SpringSource decided to donate dm Server to the Eclipse Foundation a few years ago. Thanks to the TCK scholarship, Virgo will now aim to become another great Java EE Web Profile application server choice for you, instead of focusing solely on OSGi and Spring.

This is what Mike Milinkovich, executive director of the Eclipse Foundation, had to say about the grant: "It is important for the Eclipse Foundation to provide our community with the tools they need to enhance developer productivity. As a key contributor to EclipseLink and other projects, Oracle has been a strong supporter of our efforts. Through the Oracle Compatibility Testing Scholarship Program the Eclipse Foundation now has access to the resources we need to achieve Java EE Web Profile compatibility both for Java EE 6, as well as the forthcoming Java EE 7".

You can read about the details in this official press release.

Signature Capture Demo


Many customers, partners, and mobile developers have been asking how to provide this via ADF Mobile. Here is an example that uses the JQuery UI signature capture extension along with the JQuery UI Touch Punch extension. You can find more info about JQuery UI here, more about the JQuery UI signature capture plugin here, and finally more info on JQuery UI Touch Punch here.


Overview

The main goal of this demo is to show how to use the signature capture component on an AMX page.  Using it on a plain HTML page is covered by the signature capture link above.  The key point of putting it in an AMX page is that you can use AMX components to control and manipulate the signature capture control, as well as have consistent styling across them.

What the JQuery Signature capture plugin provides is a way for the user to draw on the screen, plus some APIs to control the drawing area.  It also provides an API to fetch a JSON version of what was drawn in the area, which can be saved and later redrawn into another signature capture or converted to an image.  The latter is left up to the developer, but it should be quite easy.
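To hint at how easy that downstream processing can be, here is a sketch that recovers geometry from the fetched JSON. The {"lines": [[[x, y], ...], ...]} shape (a list of strokes, each a list of points) is my assumption about the plugin's toJSON output; verify it against your plugin version before relying on it:

```python
import json

# Hypothetical toJSON output: a list of strokes, each a list of [x, y]
# points. Verify the real shape against your plugin version.
raw = '{"lines": [[[10, 20], [15, 25], [40, 22]], [[12, 60], [38, 61]]]}'

def bounding_box(signature_json):
    """Smallest (x0, y0, x1, y1) rectangle containing every stroke point,
    e.g. for sizing an image to render the signature into."""
    points = [p for stroke in json.loads(signature_json)["lines"] for p in stroke]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

print(bounding_box(raw))  # prints (10, 20, 40, 61)
```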

The full source for this completed demo is available here.   This code is based on ADF Mobile version 11.1.2.4.0.

This sample has a single feature with a single AMX page.  That page hosts a signature capture within a Verbatim component.  There are AMX buttons added for Clear (to clear the signature capture area) and Fetch (to return the drawn info as JSON).  When the Fetch is called, the JSON representation of the signature is inserted into the AMX page as text so you can see what was returned.


Step 1 : Get the 3rd Party files and incorporate them into your project

Get JQueryUI, JQuery Signature capture plugin and the JQuery UI Touch Punch plugin.  Each site above has a download link to get the bits you need.  We need only the following files:

Name | Description
jquery-ui.css | Style sheet for JQuery UI.  Here I've used version 1.9.2.
jquery-ui.min.js | Minified version of the JQuery UI javascript.  Here I've used version 1.9.2.
jquery.signature.css | Style sheet for the JQuery Signature plugin.  Here I've used version 1.1.
jquery.signature.min.js | Minified version of the JQuery Signature plugin.  Here I've used version 1.1.
jquery.ui.touch.js | This extension changes mouse events into touch events.  Here I've used version 0.2.2.

Once you've downloaded these files, you'll want to put them somewhere in your public_html folder.  Then go to your adfmf-feature.xml file and, in the "Includes" section of the "Content" tab of the feature you're adding this to, add an entry for each one.  This will load these files into the feature's webview when it is initialized, and thus every AMX page within that feature will have access to them.  It should look like this:



Step 2:  Add the signature capture component to your page

We do this by adding an amx:Verbatim tag to the page.  Here is the verbatim tag added inside the sample:

    <amx:verbatim id="v1">
        <![CDATA[
        <script type="text/javascript">
            (function() {
                makeSig = function() {
                    try {
                        var sigElement = document.getElementById("sig");
                        if (sigElement == null)
                            alert("sigElement not found");
                        var sigJq = $(sigElement);
                        sigJq.signature();
                        sigJq.signature({guideline: true});
                    }
                    catch (problem) {
                        alert("Problem with verbatim code: " + problem);
                    }
                }
                window.setTimeout(makeSig, 250);
            })();
        </script>
        <div id="sig" style="height:200px;width:99%"></div>
        ]]>
    </amx:verbatim>

Notice two things here:  First that you need to specify the div to host the signature component and also add the size you want for the component.  Second is that we needed to set a timeout to fire the code that would replace the contents of the div with the signature component.  This is because there is no hook in AMX currently that fires when the page is fully rendered.  (We'll be adding this in a future release).


Step 3:  Add the Clear and Fetch buttons

Now you need to add your own Javascript file that has some handlers for the Clear and Fetch buttons.  Here's a code example of that Javascript:

  (function () {
      // This method clears the signature area
      doClear = function () {
          var sigElement = document.getElementById("sig");
          if (sigElement == null)
              alert("sigElement not found");
          var sig = $(sigElement);
          sig.signature('clear');
          adf.mf.api.invokeMethod("mobile.MyClass", "FetchCallback", "", onInvokeSuccess, onFail);
      };

      // This method gets the signature as a JSON string.
      doFetch = function () {
          var sigElement = document.getElementById("sig");
          if (sigElement == null)
              alert("sigElement not found");
          var sig = $(sigElement);
          var fetchData = sig.signature('toJSON');
          adf.mf.api.invokeMethod("mobile.MyClass", "FetchCallback", fetchData, onInvokeSuccess, onFail);
      };

      function onInvokeSuccess(param) {
      }

      function onFail() {
          alert("It failed");
      }
  })();

Once you've created that Javascript file, add it via the adfmf-feature.xml includes that you used to add the 3rd party Javascript/CSS files from step 1.  Now you just add some regular AMX buttons to your page and hook them up to Java handlers.  In those Java handlers you'll use the following code to call to Javascript:

  AdfmfContainerUtilities.invokeContainerJavaScriptFunction(
      AdfmfJavaUtilities.getActiveContextId(), "doClear", new Object[] { });


Step 4:  Process the Fetch request in Java

This is the last step.  You simply need to handle the callback into Java when the doFetch Javascript is called.  In the Javascript code we added above, notice the following line that does the invoke to Java:

  adf.mf.api.invokeMethod("mobile.MyClass", "FetchCallback", fetchData, onInvokeSuccess, onFail);

This invokes the "FetchCallback" method in the class "mobile.MyClass" and sends it "fetchData" as the parameter.  Now you have the JSON converted data that was drawn into the signature capture on the Java side and can do whatever other processing you wish to it.  This could be simply saving it as is for future display or converting it to an image file or anything else you wish.

Note:  This signature capture example is NOT meant to meet regulatory and legal compliance for signatures!  If your software requires that you meet certain regulatory requirements for storing actual signatures then you need to consider other 3rd party integrations.  This is meant as a simple way to capture signatures or other simple drawings from a mobile device.


Additional new content SOA & BPM Partner Community


· Article Series: Industrial SOA. Written collaboratively by eight acknowledged SOA experts, this article series explores the steps necessary to take service-oriented architecture to the next level: industrialized SOA. The first two articles in this 14-part series are now available on OTN. Read the articles.

·On the Integrity of Data An introduction to the basics of data integrity enforcement in a variety of environments by Oracle ACE Director Lucas Jellema. Read the article.

· Creating a Cloud Roadmap. The latest addition to the IT Strategies from Oracle library, this Oracle Practitioner Guide describes a robust process for creating a roadmap for adoption of Cloud Computing within an enterprise. Read the white paper.

·Cookbook: Middleware as a Service Using Oracle Enterprise Manager 12c Step-by-step instructions for provisioning an Oracle WebLogic domain using Oracle Enterprise Manager 12c. Read the white paper.

·Podcast: The State of SOA SOA is alive and kicking and more important than ever. A panel comprised of four of the experts behind the Industrial SOA article series explains why. Listen to the podcast.

· Oracle Service Bus PS6 (11.1.1.7) available for download. Highlights of this new release include a new config tool to package resources, a T2P plugin; WS-Security with MTOM and SwA; SFTP with the ability to specify Cipher Suite, Hash, and Key; an OWSM Policy for RESTful services; and more. Read the article.

·Cloud Integrations with Oracle SOA Suite This Oracle Learning Library video demonstration shows you how to connect with the RightNow CX Cloud Service using Oracle SOA Suite. Watch the video.

· Oracle Fusion Middleware Tech Talk: Business Process Management. Amit Zavery, vice president of product management for Oracle Fusion Middleware, discusses the importance of business process management and how Oracle Unified Business Process Management Suite is benefitting customers across all industries.

·Oracle SOA Suite Team Blog: Latest Release - Oracle SOA Suite 11.1.1.7

· Oracle Magazine, May 2013: Integrate and Mobilize - SOA Bridges the Past and Present. Interested in more content from Oracle Magazine? Get your free subscription today.

SOA & BPM Partner Community

For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.



Keep Taking the Tablets. Early Adopter UX Developer Type Wanted


Here's a free "how to" guide from Oracle Applications User Experience published on OTN via UX Direct that will excite designers, developers, and project managers and get them productively building great tablet solutions with enterprise-level methodologies (are you listening ADF EMG [Application Development Framework Enterprise Methodology Group]?).

If you're embarking on a tablet application design project, then start out with our interactive Oracle Applications User Experience Tablet Guide iBook (yes, you need an iPad).


Develop cool optimized tablet solutions to leverage your cloud applications data with Applications UX's resources.

There's a great conversation on the ADF EMG group about this new resource. And we have a request of our ADF development community: If you're a mobile developer on a tablet project, developing for a native O/S or (preferably, natch) with Oracle ADF Mobile or ADF Faces, who wants to evaluate the guide and provide feedback and examples of how you've used it to build solutions, then let us know using the comments. We can feature your work and findings, if you wish.


Oracle Applications User Experience Tablet Guide

Oracle Applications User Experience Tablet Guide: Early adopter developer wanted.

If you must, well there's a PDF version too.

The outreach continues! Watch out for more announcements of events and happenings to enable developers and other stakeholders in the applications development world to build great looking usable apps on mobile and other devices by checking in regularly on the Voice of User Experience (VoX) blog and following along on Twitter at @usableapps.
