Channel: Oracle Bloggers

Emulating I2C Devices with Java ME Embedded 3.3


In this post, I will show you how to create an emulated inter-integrated circuit (I2C) device using the Oracle Java ME SDK 3.3 Custom Device Editor and the Embedded Support API.

Oracle's Java ME SDK 3.3 is a fantastic tool for learning how to create applications for embedded devices. The focus of ME Embedded is Information Module Profile - Next Generation (IMP-NG) headless devices: simple microcontrollers with 160KB of memory (or more), designed to read sensor input or control small mechanical devices. The embedded market is growing rapidly as more devices become connected to the "Internet of Things."

New in version 3.3 of the SDK is support for a host of peripheral devices, including GPIO, ADC/DAC, UART, I2C, SPI, MMIO and more. The SDK includes an emulator (for Windows), and you can choose between two default devices that support IMP-NG.

While the default emulator is useful to start learning Java ME Embedded, at some point you will want an emulator that resembles the target embedded device. This is where the Java ME SDK really shines: it allows you to design your own emulator. Through the Custom Device Editor, provided with the Java ME SDK 3.3, you select the peripheral devices your physical embedded device supports, including all of the relevant information needed to access the peripherals: hardware port number, pin number, trigger mode, and so on. Designing an emulator that matches your physical embedded device can greatly shorten the development cycle of an embedded application.

Start the Custom Device Editor from the command line C:\Java_ME_platform_SDK_3.3\bin\device-editor.exe or through NetBeans and Eclipse. Using the editor, you specify GPIO pins and ports, ADC and DAC devices, and pulse counters as needed. To add serial communication devices (I2C, SPI and MMIO), the editor provides two options: add a simple loopback that echoes back bytes as they are written to the device, or add an implementation of the device using the embedded support API.

Of the three serial bus specifications, I2C is the simplest. It is a two-wire protocol: only two signal lines, serial data (SDA) and serial clock (SCL), are required, plus power and ground, for four lines in total. For more detailed information on the specification, click here.

The installation directory of the Java ME SDK 3.3, C:\Java_ME_platform_SDK_3.3, contains the documentation and a JAR for the Embedded Support API. Expand the embedded-support-api.zip file located under \docs\api and look at the com.oracle.jme.toolkit.deviceaccess.i2c package.

To emulate an I2C device, you create a class that implements the I2CSlaveBus interface. In NetBeans (or Eclipse) create a Java ME Embedded Application project. Add the \lib\embedded-support-api.jar to the project, then add a Java class that implements the interface:

public class TMP102Device implements I2CSlaveBus { ... }

There are just four methods to implement:

  • int read(byte[] data, int len, I2CSlaveBus.I2CSlaveIdentifier id)
  • void write(byte[] data, I2CSlaveBus.I2CSlaveIdentifier id)
  • void initialize(I2CSlaveBus.I2CSlaveIdentifier id)
  • void close(I2CSlaveBus.I2CSlaveIdentifier id)

The read method writes bytes into the byte array passed to it as an argument and returns the count of bytes written to the array. The write method can be used to signal the device to take some action. The initialize method is called every time the device is accessed through a PeripheralManager.open call; this method can be used to reset the internal state of your emulated device. Finally, the close method should release any resources the emulated device is using.
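As a sketch of the shape such an implementation takes, here is a minimal loopback device, similar to the editor's built-in loopback option. This is a hedged sketch: the I2CSlaveBus stub below only mirrors the four methods listed above so the example compiles standalone; in a real project you would implement the interface from embedded-support-api.jar instead.

```java
// Local stand-in for the Embedded Support API interface, declared here only so
// the sketch is self-contained; the real I2CSlaveBus lives in embedded-support-api.jar.
interface I2CSlaveBus {
    interface I2CSlaveIdentifier { }
    int read(byte[] data, int len, I2CSlaveIdentifier id);
    void write(byte[] data, I2CSlaveIdentifier id);
    void initialize(I2CSlaveIdentifier id);
    void close(I2CSlaveIdentifier id);
}

// A loopback device: read() returns whatever byte was last written.
public class LoopbackDevice implements I2CSlaveBus {
    private byte last;

    public void initialize(I2CSlaveIdentifier id) { last = 0; } // reset state on PeripheralManager.open
    public void close(I2CSlaveIdentifier id) { }                // nothing to release here

    public void write(byte[] data, I2CSlaveIdentifier id) {
        if (data.length > 0) last = data[data.length - 1];      // remember the last byte written
    }

    public int read(byte[] data, int len, I2CSlaveIdentifier id) {
        int n = Math.min(len, data.length);                     // never overrun the caller's buffer
        for (int i = 0; i < n; i++) data[i] = last;
        return n;                                               // count of bytes written into data
    }
}
```

A TMP102-style device follows the same pattern, with read() copying two bytes of a temperature register into the buffer instead of echoing writes.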

I chose to emulate a simple I2C temperature device, the Texas Instruments TMP102, a digital temperature sensor with I2C communication capability. After power-on, this device returns two bytes from an internal buffer every time it is read. The first byte contains the left-most 8 bits of a 12-bit word, and the second byte contains the 4 least significant bits.

This 12-bit value represents a count of 0.0625 degree (Celsius) increments, with the high-order bit indicating values below 0. Positive temperature values are converted directly to an integer and multiplied by the increment to get the temperature value. For example, if the 12-bit word is 0x320 (0011 0010 0000), the temperature is calculated as 0x320 = 800, and 800 * 0.0625 = 50 degrees Celsius.

Negative temperature values have a 1 in the high-order bit, and the temperature value is calculated by complementing the count minus 1. For example, if the 12-bit word is 0xE70 (1110 0111 0000), the temperature is calculated as 0xE70 - 1 = 0xE6F (1110 0110 1111), whose bitwise complement is 0001 1001 0000 = 400, and 400 * (-0.0625) = -25 degrees C.
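The decoding just described can be captured in a few lines. A small sketch (class and method names are mine, not part of the SDK or the TMP102 project):

```java
public class Tmp102Decode {
    // Convert the TMP102's 12-bit raw count to degrees Celsius.
    // Bit 11 is the sign bit; negative readings are stored in two's complement.
    static double toCelsius(int raw12) {
        if ((raw12 & 0x800) != 0) {
            raw12 -= 0x1000;   // undo two's complement over 12 bits
        }
        return raw12 * 0.0625; // one count = 0.0625 degrees Celsius
    }

    public static void main(String[] args) {
        System.out.println(toCelsius(0x320)); // 50.0
        System.out.println(toCelsius(0xE70)); // -25.0
    }
}
```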

Rather than just creating a static TMP102 device (returning the same temperature over and over), the class I wrote simulates temperature fluctuations with a thread that randomly changes the "temperature" value by a maximum of +/- 0.5 degrees C every 5 seconds. I start this thread in the initialize method and stop it in the close method. The timing and range of the temperature fluctuations are adjustable. To see the complete code, click here to download the NetBeans project for the TMP102 emulator.

To add the TMP102 device to a custom emulator, start by creating a JAR file of the project. In NetBeans, right-click the project and select Build. A JAR file will be created in the dist folder of your NetBeans project.

Next create or modify an existing custom IMP-NG emulator. Start the Custom Device Editor from the command line or through your IDE. Select IMP-NG in the Custom Device Editor Dialog and click New to create a new IMP-NG emulator device, or select one you already have and click Edit.

In the IMP-NG Device editor window, select the I2C tab. Click Custom, then click the Browse button to navigate to the directory where the jar file is located. The implementation class name is the fully qualified name of the I2CSlaveBus class. Enter oracle.example.TMP102Device in the Implementation Class Name field. Click the Add button in the lower right to create a Slave entry.

At this point, you can modify the ID, Name, Bus Number, Address Size and Address of the slave device by clicking in each field and typing. Since my design goal is to emulate a TMP102 device connected to a Raspberry Pi, Model B, I changed the bus number to 1 and the address to 48 (entered as a hex number). I2C devices on the Pi use bus 1, and the default address and address size of the TMP102 are 0x48 (72 decimal) and 7 bits.

Finally, click OK, and in a few minutes you will see a message that the new or updated emulator is registered with the Device Manager. Next, write some code to test the emulated I2C device. You could open the device using the String name of the emulated device like this:

I2CDevice tmp102 = (I2CDevice) PeripheralManager.open("Slave0", I2CDevice.class, null);

However, to emulate opening the device the way it would be opened when attached to a Raspberry Pi, you need to create an I2CDeviceConfig object first and pass it to PeripheralManager to open the device:

I2CDeviceConfig config = new I2CDeviceConfig(1, 0x48, 7, 10000); // Bus 1, address 0x48, 7-bit addressing, 10KHz clock

I2CDevice tmp102 = (I2CDevice) PeripheralManager.open(config);

If you prefer, you can run the small NetBeans embedded application project I created to test the emulated TMP102 device.

Enjoy!


New Whitepaper – Cloud Performance, Elasticity and Multitenancy with Oracle WebLogic Server 12c and Oracle Database 12c


One of the exciting focus areas of Oracle WebLogic 12c release 12.1.2 is the deep integration with the recently available Oracle Database 12c.

To hear about these key integration features live, please join our launch event on July 31st.

One of the advantages of having the world's most popular database and the #1 application server under one roof is the simple rule of 1 + 1 = 3. What do I mean by that?

With our engineering teams on the database and middleware sides working hand in hand, Oracle WebLogic Server 11g introduced Active GridLink for Real Application Clusters (RAC). In conjunction with Oracle Database, this powerful software technology simplifies management, increases availability, and ensures fast connection failover, with runtime connection load balancing and affinity capabilities. Deltek is one of the early adopters of these capabilities; watch their video.

With the release of WebLogic Server 12c (12.1.2), tight integration between Oracle WebLogic Server 12c (12.1.2) and Oracle Database 12c enhances these capabilities with improved availability, better resource sharing, inherent scalability, ease of configuration and automated management facilities in a global cloud environment.


It’s worth noting that Oracle WebLogic Server is the only application server with this degree of integration with Oracle Database 12c.

This white paper, authored by Monica Riccelli and Frances Zhao from the CAF Product Management team, explains how these unique database, clustering, and application server technologies work together to enable higher availability, scalability and performance for your business. It starts by introducing Oracle Active GridLink for RAC, with attention to ease of configuration, manageability, and performance. It then describes how Oracle WebLogic Server takes advantage of several leading features of Oracle Database 12c, such as Multitenant Databases (Pluggable Databases), Database Resident Connection Pool, Application Continuity, and Global Data Services.

Please download the whitepaper and let us know your feedback!

You can read more on this topic in these blogs by Steve Felts:

Part 1 - 12c Database and WLS – Overview

Part 2 - 12c Database and WLS - Application continuity


Has the Cloud changed the way developers do their jobs? Not yet.


Continuing on the theme of changes to IT roles brought about by DevOps, cloud computing and other factors...

"Has the cloud really revolutionized the way software developers do their jobs?" ServerSide journalist Cameron McKenzie put that question to several experts, and the answer appears to be Yes and No. McKenzie concludes:
"The benefits cloud computing brings to enterprise environments improve productivity, reduce costs and speed up the time to market, but from the view of the actual application developer, has the cloud really impacted the way they perform their jobs on a daily basis? Today, the answer to that question seems to be 'no', although there is no shortage of experts and innovators in the industry that are working hard to shift that 'no' to an unequivocal 'yes' in the next few years."

Of course, that answer doesn't offer a lot of detail on strategies for preparing for when No becomes Yes. Got any ideas?

Relevant Posts

Announcing: Oracle Solaris Cluster Product Bulletin, July 2013

Product Update Bulletin: Oracle Solaris Cluster, July 2013

Hardware Qualifications

  • X4242A InfiniBand HCA for SPARC T5 servers

Software Qualifications

  • Now adding Oracle Database 12c support for Oracle single instance database, RAC, DataGuard and proxy agent

  • Ready to deploy HA Siebel, TimesTen, SAP livecache/MaxDB, Sybase, ... on Oracle Solaris 11 with the latest Oracle Solaris Cluster agents

  • Exclusive IP in Oracle Solaris 10 Zone cluster on Oracle Solaris Cluster 4.1

  • Oracle Solaris Cluster 4.1 with HA agent for Oracle VM server for SPARC 3.0

Latest Support Information

  • Oracle Solaris Cluster 4.1 SRU3 (4.1.3)

Please read the Oracle Solaris Cluster Product Bulletin on Oracle HW TRC for more details.

(If you are not registered on Oracle HW TRC, click here and follow the instructions.)

Upcoming Webcast Series: Drive project success with Enterprise Project Portfolio Management


With an increased focus on controlling costs, driving efficiency and minimizing risk, delivering projects that create measurable business value is becoming increasingly tough. Consistently delivering successful projects is vital to the financial success of any organization. Watch these exclusive Oracle Primavera webcasts, featuring expert insights and real-world case studies from across a range of industries showing how you can meet these challenges. Each webcast focuses on one of the three hottest topics in project portfolio management today:

  • Financial Discipline: learn how you can gain greater visibility and control of project finances
  • Operational Excellence: discover the key to increasing efficiency and reducing project costs
  • Risk Mitigation: understand how to overcome uncertainty and avoid costly project delays

If you need to deliver consistent project success, register for the webcasts today to learn how EPPM is helping organizations like yours do just that.

Register today for the first webcast of this three-part series, Financial Discipline: Take Control of Project Finances to Maximize Business Value, August 1st, 2013 – 2:00 p.m. ET.

Coherence on Exalogic: dealing with the multiple network interfaces

Recently, we worked on an incident in which error messages like the following were thrown when starting the Coherence servers after an upgrade of EECS:
Oracle Coherence GE 3.7.1.8 (thread=Thread-3, member=n/a): Loaded Reporter configuration from "jar:file:/u01/app/fmw_product/wlserver_103/coherence_3.7/lib/coherence.jar!/reports/report-group.xml"
Exception in thread "Thread-3" java.lang.IllegalArgumentException: unresolvable localhost 192.168.10.66 at
com.tangosol.internal.net.cluster.LegacyXmlClusterDependencies.configureUnicastListener(LegacyXmlClusterDependencies.java:199)
...
Caused by: java.rmi.server.ExportException: Listen failed on port: 8877; nested exception is:
java.net.SocketException: Address already in use ...
weblogic.nodemanager.server.provider.WeblogicCacheServer$1.run(WeblogicCacheServer.java:26)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.UnknownHostException: 192.168.10.66 is not a local address
at com.tangosol.net.InetAddressHelper.getLocalAddress(InetAddressHelper.java:117)
at
com.tangosol.internal.net.cluster.LegacyXmlClusterDependencies.configureUnicastListener(LegacyXmlClusterDependencies.java:195)

It is a well-known fact that Exalogic has several network interfaces (bond0/eth0, bond1/eth1, and so on). Coherence uses specific logic when deciding which interface to bind to: it supports machines with multiple network interfaces and allows the local address to be specified as a netmask, which makes configuration across larger clusters easier. This makes it important (even more so than in previous releases of Coherence) to ensure that the tangosol.coherence.localhost parameter is specified appropriately. From that IP address (or a properly mapped host address), the desired network interface can easily be found, and the Coherence cluster will then work fine on it.
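One way to do that (a sketch; the address is taken from the error above and would be replaced with the address actually bound to the desired interface) is to pass the property on the cache server's command line:

```shell
# Pin Coherence's unicast listener to the interface the cluster should use.
# The -Dtangosol.coherence.localhost value must be an address (or netmask)
# that resolves to a local interface on this machine.
java -Dtangosol.coherence.localhost=192.168.10.66 \
     -cp coherence.jar com.tangosol.net.DefaultCacheServer
```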

Oracle Receives “Strong Positive” Rating in “MarketScope for Segregation of Duty Controls Within ERP and Financial Applications”


A guest post by Sid Sinha, Senior Director, Oracle GRC Product Strategy

Oracle has received a rating of “Strong Positive” in Gartner’s “MarketScope for Segregation of Duty Controls Within ERP and Financial Applications[1].”

Gartner defines the ERP Segregation of Duties (SOD) Control Market as software providing the following functions: SOD Analysis, Compliant Provisioning, Transaction Analysis, Emergency Privilege Management, Role Management and Privilege Attestation.

According to Gartner, a solution rated “Strong Positive” is viewed as a provider of strategic products, services or solutions. The report framework counsels:

  • Customers: Continue with planned investments.

  • Potential customers: Consider this vendor a strong choice for strategic investments.

Oracle Advanced Controls includes a comprehensive SOD capability for any application, including Oracle ERP platforms and financial applications. Oracle Advanced Controls addresses SOD issues at a finer level of detail than ERP role and permission management, providing unique capabilities to find and fix user security issues across multiple instances at the same time. This capability can be integrated with a user provisioning system, and this integration comes pre-built for Oracle Identity Management, Oracle Fusion Applications, the Oracle E-Business Suite and Oracle’s PeopleSoft. Furthermore, Oracle Advanced Controls can scan all business transactions in the system to find actual instances of SOD risk, and it has embedded agents that modify application behavior within the Oracle application suite itself, thereby restricting the options visible to Oracle EBS users, including administrators.

We believe that to prevent financial leakage, corporate fraud and ensure regulatory compliance, Advanced Controls must be enforced at all levels of an organization. Strong SOD policies are dependent upon controlling access to critical business applications such as enterprise resource planning, customer relationship management, and supply chain management systems. Advanced Controls, part of Oracle GRC Applications, enables businesses and organizations to manage, remediate, and enforce user access policies to ensure effective SOD.

For more perspective on the need for segregation of duties and the importance of user access controls please visit the resources section of our web page.

Disclaimer:

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.



[1]“MarketScope for Segregation of Duty Controls Within ERP and Financial Applications,” by Paul E. Proctor, May 14, 2013.

How to Bend Bare Metal to Your Will


photo copyright 2013 by Rick Ramsey

The fins on this 1957 DeSoto were shaped during a time when Americans weren't afraid of offending anyone with their opinions, right or wrong. We have, perhaps, grown a little more introspective, a little more considerate, but our cars have paid the price. They all look alike. Their edges have been worn away by focus groups. They have no personality. They cringe at the sight of their own shadows.

I weep for my adopted country.

Well, if you like classic American cars as much as I do, you may on occasion feel the need to bend bare metal to your will. Here's your chance.

Tech Article: How to Get Best Performance From the ZFS Storage Appliance

Disk storage. Clustering. CPU and L1/L2 caching size. Networking. And file systems. Just some of the components of Oracle ZFS Storage Appliance that you can shape for optimum performance. Anderson Souza shows you how. Go ahead. Give your appliance a pair of tail fins. (Link is in the title.)

Psst:
You can see more unique cars from the Golden Age of the American Automobile at the Gateway Automobile Museum. If you can't get to the border between Utah and Colorado to appreciate them in person, as I was fortunate enough to do, you can enjoy them through your browser at http://www.gatewayautomuseum.com/cars-and-galleries/.

- Rick

Follow me on:
Blog | Facebook | Twitter | YouTube | The Great Peruvian Novel


Very cool videos about Upgrade to Oracle 12c


Sometimes it is by far easier to watch a few short videos instead of reading an entire book :-)

So enjoy watching Roy talk about Upgrades and Migrations to Oracle Database 12c in these short videos, which also cover the new upgrade and migration features in Oracle Database 12c.


Chapter 1 - Upgrading is Universal

Chapter 2 - Minimizing Risk and Downtime

Chapter 3 - Leveraging Consolidation to ease Migration

Chapter 4 - Why Upgrade?

Chapter 5 - Automating the Upgrade Process

-Mike


The Go club was interviewed by NHK

Oracle R Distribution for R-3.0.1 released


We're pleased to announce that the Oracle R Distribution 3.0.1 Linux RPMs are now available on Oracle's public yum. R-3.0.1, code-named "Good Sport", is the second release in the R-3.0.x series. The new series doesn't in itself announce new features, but indicates that the code base has developed to a new level of maturity.

However, there are some significant improvements in the 3.0 series worth mentioning. R-3.0.0 introduces the use of large vectors in R, and eliminates some restrictions in the core R engine by allowing R to use the memory available on 64-bit systems more efficiently. Prior to this release, objects had a hard-coded limit of 2^31 - 1 elements, or roughly 2.1 billion elements. Objects exceeding this limit were treated as missing (NA), and R sometimes returned a warning, regardless of the available memory on the system. Starting in R-3.0.0, objects can exceed this limit, which is a significant improvement. Here's the relevant statement from the R-devel NEWS file:

 There is a subtle change in behaviour for numeric index values 2^31 and larger. These never used to be legitimate and so were treated as NA, sometimes with a warning. They are now legal for long vectors so there is no longer a warning, and x[2^31] <- y will now extend the vector on a 64-bit platform and give an error on a 32-bit one.

R-3.0.1 adds to these updates by improving serialization for big objects and fixing a variety of bugs.

Older open source R packages will need to be re-installed after upgrading from ORD 2.15.x to ORD 3.0.1, which is accomplished by running:

R> update.packages(checkBuilt = TRUE)

This command upgrades open source packages if a more recent version exists on CRAN or if the installed package was built with an older version of R.

Oracle R Distribution 3.0.1 will be compatible with future versions of Oracle R Enterprise. As of this posting, we recommend using ORD 2.15.3 with Oracle R Enterprise 1.3.1. ORD 3.0.1 binaries for AIX, Solaris x86, and Solaris SPARC platforms will be available from Oracle's free and open source portal soon. Please check back for updates.

OSCON Trip Report


OSCON 2013 was held from July 22 to July 26 in Portland, Oregon. I presented the Java EE 7 hands-on lab there, as well as a session on WebSocket/JSR 356. This was my first time at the revered conference.

OSCON was a unique and valuable experience, and I would definitely look forward to doing it again some time. More details, including slide decks, lab materials and code examples, are posted on my personal blog.

Webcast Announcement: EBS CRM Fundamentals - Resources



Webcast: EBS CRM Fundamentals - Resources

Date: August 13, 2013 at 11am ET, 10am CT, 9am MT, 8am PT, 4pm GMT+1, 8.30pm IST

The EBS CRM team will be presenting a webcast on one of the cornerstones of EBS CRM: resources.  Join the team for this presentation which will review the set up and maintenance of resources and their use in the CRM applications.  During this one hour webcast, we will review the purpose behind resources, how they are set up and pertinent tables regarding resource management within EBS CRM.

Topics will Include

  • Types of resources in CRM
  • Set up and import of resources
  • Creating groups and roles
  • Maintaining resources
  • Troubleshooting resource issues
  • CRM resource tables
  • The purpose of CRM Sysadmin roles

    Further details and links to register are in Doc ID 1568712.1.




    More Support at No Extra Cost

    Anyone who works with Engineered Systems and is an Oracle Premier Support customer can now benefit from additional support options. The Platinum Support offering is aimed at partners running certified configurations on integrated systems; examples include the Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and the Oracle SPARC SuperCluster T4-4. The Platinum Support package includes 24/7 fault monitoring, shorter response times, system patching services, a support response team reachable at any time and, for Exadata, an additional hotline. This unique service increases system availability and helps to avoid outages.

    You can read exactly which services Platinum Support for Engineered Systems includes here in the English-language flyer.




    Oracle Database 12c Days: From September to January

    Oracle Database 12c has been available for download since the end of June. So that individual topics can be covered in depth, the German-language "Oracle Database Days" will take place at various Oracle offices starting in September. Each month has its own theme, and September kicks off with the new Oracle "Multitenant" architecture.

    The highlight of the new database release 12c is the ability to manage databases as "Pluggable Databases" (PDBs) within so-called "Container Databases". The quick "plugging in and unplugging" of PDBs, for example, opens up many new possibilities in the areas of consolidation and database clouds. In this half-day event, put together specifically by the Oracle BU DB, you will learn everything worth knowing about Oracle Multitenant. The event begins at 11:30 a.m.

    Dates and venues:

    • 17.09.2013: Oracle office Munich
    • 18.09.2013: Oracle Customer Visit Center Berlin
    • 19.09.2013: Oracle office Frankfurt

    Attendance is free of charge. Further information on the agenda and on the other planned Oracle Database Days, as well as registration, can be found at
    http://tinyurl.com/odd12c. The first date in this series, on 17.09 in Munich, offers a special extra: the meeting of the DOAG regional group Munich/Southern Bavaria takes place right after the event. So it is best to register for both at once!

    Oracle Tips: Solaris lgroups, CT optimization, Data Pump, Recompilation of Objects, ..

    1. [Re]compiling all objects in a schema
    exec DBMS_UTILITY.compile_schema(schema =>'SCHEMA');

    To recompile only the invalid objects in parallel:

    exec UTL_RECOMP.recomp_parallel(<NUM_PARALLEL_THREADS>, 'SCHEMA');

    A NULL value for SCHEMA recompiles all invalid objects in the database.


    2. SGA breakdown in Solaris Locality Groups (lgroup)

    To find the breakdown, execute pmap -L <pid> | grep shm against a database instance process. Then separate the lines that relate to each locality group and sum the values in the 2nd column to arrive at the total SGA memory allocated in that locality group.

    (I'm pretty sure there will be a much easier way that I am not currently aware of.)


    3. Default values for shared pool, java pool, large pool, ..

    If the *pool parameters were not set explicitly, executing the following query is one way to find out what they are currently set to.

    eg.,
    SQL>select * from v$sgainfo;
    
    NAME                                  BYTES RES
    -------------------------------- ---------- ---
    Fixed SGA Size                      2171296 No
    Redo Buffers                      373620736 No
    Buffer Cache Size                8.2410E+10 Yes
    Shared Pool Size                 1.7180E+10 Yes
    Large Pool Size                   536870912 Yes
    Java Pool Size                   1879048192 Yes
    Streams Pool Size                 268435456 Yes
    Shared IO Pool Size                       0 Yes
    Granule Size                      268435456 No
    Maximum SGA Size                 1.0265E+11 No
    Startup overhead in Shared Pool  2717729536 No
    Free SGA Memory Available                 0
    12 rows selected.
    

    4. Fix to PLS-00201: identifier 'GV$SESSION' must be declared error

    Grant select privilege on gv_$SESSION to the owner of the database object that failed to compile.

    eg.,
    SQL> alter package OWF_MGR.FND_SVC_COMPONENT compile body;
    Warning: Package Body altered with compilation errors.
    
    SQL> show errors
    Errors for PACKAGE BODY OWF_MGR.FND_SVC_COMPONENT:
    
    LINE/COL ERROR
    -------- -----------------------------------------------------------------
    390/22   PL/SQL: Item ignored
    390/22   PLS-00201: identifier 'GV$SESSION' must be declared
    
    SQL> grant select on gv_$SESSION to OWF_MGR;
    Grant succeeded.
    
    SQL> alter package OWF_MGR.FND_SVC_COMPONENT compile body;
    Package body altered.

    5. Solaris Critical Thread (CT) optimization for the Oracle log writer (lgwr)

    Critical Thread is a scheduler optimization available in Oracle Solaris 10 Update 10 and later releases. Latency-sensitive, single-threaded components of software, such as the Oracle database's log writer, benefit from CT optimization.

    On a high level, LWPs marked as critical will be granted more exclusive access to the hardware. For example, on SPARC T4 and T5 systems, such a thread will be assigned exclusive access to a core as much as possible. CT optimization won't delay scheduling of any runnable thread in the system.

    Critical Thread optimization is enabled by default. However, users of the system have to hint the OS by explicitly marking a thread or two as "critical", as shown below.

    priocntl -s -c FX -m 60 -p 60 -i pid <pid_of_critical_single_threaded_process>

    From the database point of view, the log writer (lgwr) is one process that can benefit from CT optimization on the Solaris platform. Oracle DBAs can either make the lgwr process 'critical' once the database is up and running, or simply patch the 11.2.0.3 database software by installing RDBMS patch 12951619 to let the database take care of it automatically. I believe Oracle 12c does it by default, and future releases of 11g software may make lgwr critical out of the box.

    Those who install the database patch 12951619 need to carefully follow the post installation steps documented in the patch README to avoid running into unwanted surprises.


    6. ORA-14519 error while importing a table from a Data Pump export dump
    ORA-14519: Conflicting tablespace blocksizes for table : Tablespace XXX block \
    size 32768 [partition specification] conflicts with previously specified/implied \
    tablespace YYY block size 8192
     [object-level default]
    Failing sql is:
    CREATE TABLE XYZ
    ..

    All partitions in table XYZ are using 32K blocks, whereas the implicit default partition points to an 8K-block tablespace. The workaround is to use the REMAP_TABLESPACE option on the Data Pump impdp command line to remap the implicit default tablespace of the partitioned table to the tablespace where the rest of the partitions reside.
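    For example (a sketch; directory, dump file and tablespace names are placeholders in the style used below):

```shell
# Remap the 8K-block object-level default tablespace YYY to the 32K-block
# tablespace XXX used by the partitions of table XYZ.
impdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp \
    tables=XYZ remap_tablespace=YYY:XXX
```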


    7. Index building task in Data Pump import process

    When a Data Pump import process is running, index building is by default performed with just one thread. This becomes a bottleneck and causes the data import to take a long time, especially if many large tables with millions of rows are being imported into the target database. One way to speed up the import is to skip index building during the data import with the help of the EXCLUDE=INDEX impdp command-line option, and then extract the index definitions for all the skipped indexes from the Data Pump dump file as shown below.

    impdp <userid>/<password> directory=<directory> dumpfile=<dump_file>.dmp \
        sqlfile=<index_def_file>.sql INCLUDE=INDEX

    Edit <index_def_file>.sql to set the desired number of parallel threads for building each index, and finally execute <index_def_file>.sql to build the indexes once the data import task is complete.
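As a sketch of the editing step, and assuming the impdp-generated sqlfile emits a `PARALLEL 1` clause for each index (typical of Data Pump output, but check your file first), the change can be applied in bulk with sed. The sample DDL below is fabricated for illustration:

```shell
# Create a stand-in sample of the extracted index DDL (illustration only).
printf 'CREATE INDEX xyz_ix1 ON xyz (col1)\n  PARALLEL 1 ;\n' > index_def_file.sql

# Bump the degree of parallelism from 1 to 8 across the whole file.
sed -i 's/PARALLEL 1/PARALLEL 8/' index_def_file.sql

cat index_def_file.sql    # each CREATE INDEX now carries PARALLEL 8
```

After the indexes are built, consider resetting them with ALTER INDEX ... NOPARALLEL so that subsequent queries do not inherit the high degree of parallelism.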

    Oracle Business Intelligence Training Workshops in UK in September


    These trainings are free of charge and only available to OPN member partners from any country in EMEA.

    Oracle Business Intelligence 10g to 11g Upgrade 3-day Workshop

    If you are a Business Intelligence practitioner familiar with Oracle BI version 10g, this workshop will upgrade your knowledge to the latest version of OBI 11g technology, reporting solutions and new features. The workshop provides opportunities to practice in the OBIEE 11g environment through hands-on activities. Participants will gain an in-depth understanding of the new OBIEE 11g architecture, security model, installation & configuration, as well as reporting aspects such as the new ROLAP/MOLAP-style hierarchical browsing, new chart types, the Action Framework and Advanced Visualisation. This will prepare you to take the OBI 11g OPN Specialisation exam.

    Audience for 10g to 11g Workshop

    • Business Intelligence Application Developer or Consultant familiar with Oracle BI version 10g
    • Data Warehouse Developer familiar with Oracle BI version 10g
    • Enterprise Architects & Industry Solutions Architects

    Event Details: Tuesday, Sep 10 – Sep 12, 2013, 09:00 AM – 05:00 PM

    Oracle Office in Reading
    Building 510, Oracle Parkway Thames Valley Park,
    Reading, RG6 1RA
    United Kingdom

    Register Now for 3-day 11g Upgrade Workshop

    Oracle Business Analytics Partner Sales Workshop

    This workshop is aimed at introducing attendees to Oracle Business Analytics from a sales perspective. It will help you find out what is new and what has been enhanced, and understand how this works for existing Oracle Applications customers, especially with the launch of Fusion Applications.

    Attendees will understand how Analytics is positioned for the market opportunities arising from Big Data, Customer Experience and Cloud, and will be ready to talk confidently with their clients about Oracle BI Foundation 11g, BI Apps, Endeca, Exalytics and Mobile Business Analytics solutions.

    Audience for Sales Workshop

    • Executives, Sales and presales from Oracle Partners
    • Existing partners selling applications, technology or hardware will all benefit by being able to cross-sell Analytics to existing and new clients.

    Event Details: Tuesday, Sep 10, 2013, 08:45 AM – 05:00 PM

    Oracle Office in Reading
    Building 510, Oracle Parkway Thames Valley Park,
    Reading, RG6 1RA
    United Kingdom

    Register Now for Sales Workshop

    Oracle – The journey begins by Shambo Chatterjee:Campus Hires’13 from BIT-Mesra


    Hello, I am Shambo Chatterjee, better known as SAM. As a campus hire I joined Oracle on 1st July 2013 at Oracle IDC Hyderabad. I feel really lucky and happy to be in the world's largest enterprise software company, Oracle.

    On the first day we were directed to the conference hall, where we had our induction program and got introduced to each other; apart from that, we were gifted goodies that made the day special. The next day we began our journey into the corporate world, leaving our college classroom benches for cubicles: a totally new world with an aroma of expectation, excitement and freshness.

    On the day of the GO program we were taught some Oracle values and many other things. A lot of fun was waiting for us with the kick-off of the GO Plus program.

    After a long wait on the waiting list, we got our seats confirmed and packed our bags to go to ICRISAT. It was a three-day program with lots of adventure, activities, fun and, not to forget, awesome food.

    On the first evening we introduced ourselves to each other in a slightly different way. Then we were divided into random groups and had to give them funny names like ‘Yanna_Rascala’. We were asked to act out two Oracle values without mentioning their names, and the audience had to figure out what the values were. And not to forget the mouth-watering food we were served at dinner.

    The next day we woke up early in the morning to see the sunrise, only to be disappointed by the clouds. This was followed by some really adventurous games, like ‘The Australian Trolley’, where we were divided into random groups with four people on each pair of wooden planks. The team that crossed the finish line first without letting their legs touch the ground would win. It was filled with lots of fun: we watched some people fall and others do extremely well, which drove home the Oracle values of synchronization and quality improvement that help you reach your goal even in tough situations.

    There were a lot more activities, like walking on a rope, hanging from a rope to jump from one island to another, some logical puzzles and lots more. Each and every activity was filled with fun, fear and excitement.

    THE WHEELS OF KNOWLEDGE

    One thing that I learned in these days was: ‘Don’t be afraid when fear comes your way. Just relax and concentrate, and let fear fear you. That way the work becomes simple and you take a leap ahead into the future.’

    All these events changed the way I used to think about the corporate world and the IT industry, and I started loving it.

    That was all about the fun part; now about my job at Oracle.

    I am an Application Developer at Oracle. It’s really a great place to start as a fresher, a great opportunity where you come in touch with great people who are always humble and ready to help you. Inside the company, all employees are treated as equals, irrespective of their experience or position. And there is one more thing I almost forgot to mention: you can enjoy unlimited coffee, soup, tea and cold drinks to refresh yourself. After work hours you can even build your muscles in the gym, and there are floodlit playgrounds where you can play football, cricket and volleyball. The work-life balance is great: you get the opportunity to work hard and play harder.

    Don’t Choose Your Enterprise Case Management System Without Reading This!


    Enterprise Case Management (ECM) in the compliance space is a hot topic today. Institutions are focusing on ECM as a means to achieve productivity gains and win customers. I believe there are three primary drivers for this line of thinking:

    • Replacing a detection or transaction/activity monitoring system is costly and risky.  It is costly because you need to revamp everything from your data to your detection models.  It is risky because touching your models means drawing scrutiny from your regulators.
    • Institutions today have several detection systems and any consolidation strategy for the disparate detection systems is not practical for the reasons listed in the point above. Therefore, compliance stakeholders find enterprise case management to be the natural hub for executing their consolidation strategy. 
    • Over the course of the last decade, practitioners have come to realize that compliance is still a people-intensive operation.  No matter how much automation one puts in place, regulatory requirements mandate that detailed analysis and investigations be conducted by humans. This lends itself to a higher focus on your investigation and analysis platform.

    There are very few financial institutions whose compliance and fraud management departments have not toyed with the idea of an enterprise case management strategy.  However, and unfortunately, very few institutions have tasted success in their strategy.  In most instances the strategy has never gotten off the ground, and where it has seen some traction, it has ceased to be an enterprise play after an initial couple of iterations.  These institutions started on this journey with a strong blueprint of the end state and a strategic vision; however, a majority of them have failed when it comes to execution.

    Very few institutions have successfully managed to design, invest, and execute an enterprise case management strategy and thereby reaped the benefits of their success.  This lack of success has resulted in once bullish financial crime management stakeholders shying away from the enterprise investigation strategy.  Their faith in an enterprise case management solution has disappeared.  That, in my opinion, is a big mistake because by abandoning an enterprise case management strategy, they are inadvertently making their job difficult, increasing their compliance costs and exposing themselves to financial crime perpetrators and regulators alike.

    It is not rocket science

    I believe institutions make their enterprise case management blueprint way more complicated than it should be.  An enterprise investigation does not require sophisticated technology.  Yes, it does demand a flexible and extensible solution, but that does not necessarily mean it requires a sophisticated, over-the-top next generation platform.  Several existing platforms, external and internal, can be employed effectively to meet the goals of enterprise case management.  I believe more than the technology and the tool, enterprise case management demands strong execution and strong program management discipline on part of the institution.

    Figure 1: Components of Enterprise Case Management

    The diagram to the left speaks to some of the must-have components of an enterprise case management tool.  None of these components are new or groundbreaking concepts.  They have been out there in the enterprise software industry for years and have been proven to deliver.  The key is to apply these concepts in the context of financial crime investigations.

    It is a journey, accept it

    “Do you support Google-like search?”, “Will your system integrate with our business process management framework?” and “Does your case management system use our business rules framework?” are some of the queries I come across these days while talking about case management.  While these are not unreasonable questions, institutions and decision makers should recognize that their enterprise case management strategy does not fail because the tool they chose or the tool they built cannot support these functions.  In a nutshell, though it is acceptable and recommended to pursue these features in your target investigations platform, please do not treat them as key criteria for your tool selection or tool development process.  Instead, emphasize:

    • The key components that we discussed above.
    • The team, internal and external, that will work on your project.
    • The different stakeholders who will be using this system today and in the future. Are they on board with the overall strategy?
    • The tool provider (if you are going with a vendor solution) and its appetite and willingness to work with you to help execute your strategy.

      Figure 2: Iterative Process

      There is enough benefit to be gained from putting your different financial crime and compliance management functions on a single case management tool.  It is a journey.  Score the small wins first.  They will provide a very high return on investment and, more importantly, a foundation on which to build your next-generation, complex features and functions.

      Work with your guinea pig

      However broad and comprehensive an enterprise case management strategy may be, there is always that one group within the organization whose needs drive the project.  This group is desperately in need of a case management tool, and it is this group's success that will decide whether your strategy succeeds.  Needless to say, if you fail in your first step, your opportunity to execute your vision is lost.  This is where the majority of enterprise case management strategies fizzle out.  This guinea pig is therefore your most important partner in your strategy.  At the end of the first deployment, you want this guinea pig to be your champion, the object of envy amongst the other groups.  This will make it that much easier for you to execute.

      • Set the right expectations.
      • Understand the key pain points. Stay consistent with those pain points and don't get caught up in the feature/function frenzy.
      • Give their requirements priority and attention.
      • Define key performance metrics up front. This will help you quantify a clear return on investment once the project is live.

      "What you see is what you get!!!"  Yeah Right!!!

      Although this point is related to the first couple of points I discuss in this post, I believe it warrants its own section.

      Have the competence, patience and tenacity to see and understand what is under the hood.

      And I say this with passion.  When you are selecting a vendor, please spend time on proper due diligence.  Remember that a vendor’s perspective of what it takes to implement their tool will always vary from yours.  Do recognize that it is you and your institution that have to implement the system.  The vendor and its tool are just one piece of the puzzle, albeit an important one.  Going in with as much knowledge of the tool as you can garner during the selection process can make the difference between failure and success.  Do note that there is no silver bullet for getting an enterprise case management system implemented.

      No one said executing an enterprise case management strategy was easy.  However, the upside of having a successful enterprise case management platform is high enough that it warrants some diligent planning and execution.  Institutions that have successfully achieved this goal or are on the path to achieve this goal are already reaping the benefits of their investment.  There is absolutely no reason for you to stay behind and not reap the rewards.

      Gaurav Harode is the Sales Consulting Director for Financial Services Analytical Applications at Oracle. He can be reached at gaurav.harode AT oracle.com.
