
Solaris Random Number Generation

The following was originally written to assist some of our new hires in learning about how our random number generators work and also to provide context for some questions that were asked as part of the ongoing (at the time of writing) FIPS 140-2 evaluation of the Solaris 11 Cryptographic Framework.

1. Consumer Interfaces

The Solaris random number generation (RNG) system utilizes both hardware and software mechanisms for entropy collection. It has consumer interfaces for applications and can generate high-quality random numbers suitable for long term asymmetric keys, as well as pseudo-random numbers for session keys or other cryptographic uses, such as nonces.

1.1 Interface to user space

The random(7D) device driver provides the /dev/random and /dev/urandom devices to user space, but it doesn't implement any of the random number generation or extraction itself.

A single kernel module (random) implements both the /dev/random and /dev/urandom devices; the two primary entry points are rnd_read() and rnd_write(), for servicing read(2) and write(2) system calls respectively.

rnd_read() calls either kcf_rnd_get_bytes() or kcf_rnd_get_pseudo_bytes() depending on whether the device node is an instance of /dev/random or /dev/urandom respectively.  There is a cap on the maximum number of bytes that can be transferred in a single read: MAXRETBYTES_RANDOM (1040) and MAXRETBYTES_URANDOM (128 * 1040) respectively.
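To make the dispatch concrete, here is a minimal sketch of the logic just described; the constants and function names come from the text, while the minor-number enum and the simplified prototypes are assumptions made purely for illustration and are not the actual driver source.

    /* Illustrative sketch of rnd_read() dispatch: pick the generator and
     * clamp the transfer size based on which device node was opened. */
    #include <stddef.h>
    #include <stdint.h>

    #define MAXRETBYTES_RANDOM   1040          /* /dev/random cap per read(2)  */
    #define MAXRETBYTES_URANDOM  (128 * 1040)  /* /dev/urandom cap per read(2) */

    enum rnd_minor { RND_RANDOM, RND_URANDOM };

    extern int kcf_rnd_get_bytes(uint8_t *buf, size_t len);
    extern int kcf_rnd_get_pseudo_bytes(uint8_t *buf, size_t len);

    static int
    rnd_read_sketch(enum rnd_minor minor, uint8_t *buf, size_t len)
    {
        if (minor == RND_RANDOM) {
            if (len > MAXRETBYTES_RANDOM)
                len = MAXRETBYTES_RANDOM;       /* clamp to the per-read cap */
            return kcf_rnd_get_bytes(buf, len);
        }
        if (len > MAXRETBYTES_URANDOM)
            len = MAXRETBYTES_URANDOM;
        return kcf_rnd_get_pseudo_bytes(buf, len);
    }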

rnd_write() uses random_add_entropy() and random_add_pseudo_entropy(). Both pass 0 as the estimate of the amount of entropy that came from userspace, so we don't trust userspace to estimate the value of the entropy being provided.  Also, only a user with uid root or all privileges can open /dev/random or /dev/urandom for write and thus call rnd_write().

1.2 Interface in kernel space

The kcf module provides an API for randomness for in-kernel KCF consumers. It implements the functions mentioned above that are called to service the read(2)/write(2) calls, and also provides the interfaces for kernel consumers to access the random and urandom pools.

If no providers are configured, no randomness can be returned and a message is logged informing the administrator of the misconfiguration.

2. /dev/random

We periodically collect random bits from providers that are registered with the Kernel Cryptographic Framework (KCF) as capable of random number generation. The random bits are maintained in a cache which is used to satisfy requests for high quality random numbers (/dev/random). If the cache has sufficient random bytes available the request is serviced from the cache.  Otherwise we pick a provider and call its SPI routine.  If we do not get enough random bytes from the provider call we fill in the remainder of the request by continuously replenishing the cache and using that until the full requested size is met.

The maximum request size that will be serviced for a single read(2) system call on /dev/random is 1040 bytes.

2.1 Initialisation

kcf_rnd_init() is where we set up the locks and get everything started. It is called by the _init() routine in the kcf module, which itself is called very early in system boot, before the root filesystem is mounted and most modules are loaded.

For /dev/random and random_get_bytes(), a static array of 1024 bytes is set up by kcf_rnd_init().

We start by placing the value of gethrtime(), the high resolution time since boot, and drv_getparam(), the current time of day, into the pool as the initial seed values (both of these are 64-bit integers).  We set the number of random bytes available in the pool to 0.

2.2 Adding randomness to the rndpool

The rndc_addbytes() function adds new random bytes to the pool (aka cache). It holds the rndpool_lock mutex while it XORs the bytes into the rndpool.  The starting point is the global rindex variable, which is updated as each byte is added.  It also increases rnbyte_cnt.

If the rndpool becomes full before all of the passed-in bytes have been used we continue to add the remaining bytes to the pool/cache but do not increase rnbyte_cnt; we also move the global findex along to match rindex as we do so.
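The circular-buffer behaviour can be illustrated with a short sketch; the names mirror the text (rndpool, rindex, findex, rnbyte_cnt) and the 1024-byte pool size comes from section 2.1, but the code itself is only an approximation of the kernel logic.

    /* Sketch of rndc_addbytes(): XOR new bytes into the pool at rindex.
     * While the pool is not yet full the available-byte count grows; once
     * full we keep mixing but stop counting and drag findex along. */
    #include <stddef.h>
    #include <stdint.h>

    #define RNDPOOLSIZE 1024

    static uint8_t rndpool[RNDPOOLSIZE];
    static size_t  rindex, findex, rnbyte_cnt;

    static void
    rndc_addbytes_sketch(const uint8_t *bytes, size_t len)
    {
        while (len-- > 0) {
            rndpool[rindex] ^= *bytes++;
            rindex = (rindex + 1) % RNDPOOLSIZE;
            if (rnbyte_cnt < RNDPOOLSIZE)
                rnbyte_cnt++;          /* credit the new entropy byte        */
            else
                findex = rindex;       /* pool full: keep mixing, move front */
        }
    }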

2.3 Scheduled mixing

kcf_rnd_schedule_timeout() ensures that we perform mixing of the rndpool.  The timeout is itself randomly generated by reading (but not consuming) the first 32 bits of the rndpool to derive a new timeout of between 2 and 5.544480 seconds.  When the timeout expires the KCF rnd_handler() function [from kcf_random.c] is called.

If we have readers blocked for entropy, or the count of available bytes is less than the pool size, we start an asynchronous task to call rngprov_getbytes() to gather more entropy from the available providers.

If there is at least the minimum (20 bytes) of entropy available we wake up the threads blocked in a poll(2)/select(2) of /dev/random. If there are any threads waiting on entropy we wake those up too.  The waiting and wake up are performed with cv_wait_sig() and cv_broadcast(), which means the random pool lock is held when cv_broadcast() wakes up a thread.

Finally it schedules the next time out.
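As an illustration of how the randomly derived timeout might be computed, the sketch below reads the first 32 bits of the pool and maps them into the 2 to 5.544480 second range stated above; the modulo mapping is an assumption, not the kernel's exact arithmetic.

    /* Sketch: derive the next mixing timeout (in microseconds) from the
     * first 32 bits of the pool, without consuming them. */
    #include <stdint.h>

    #define TIMEOUT_MIN_US  2000000u   /* 2 seconds        */
    #define TIMEOUT_MAX_US  5544480u   /* 5.544480 seconds */

    static uint32_t
    next_mix_timeout_us(const uint8_t pool[4])
    {
        uint32_t raw = (uint32_t)pool[0] | ((uint32_t)pool[1] << 8) |
                       ((uint32_t)pool[2] << 16) | ((uint32_t)pool[3] << 24);

        /* Map the raw 32-bit value into [TIMEOUT_MIN_US, TIMEOUT_MAX_US]. */
        return TIMEOUT_MIN_US + raw % (TIMEOUT_MAX_US - TIMEOUT_MIN_US + 1);
    }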

2.4 External caller Seeding

The random_add_entropy() call is able to provide entropy from a source of randomness external to KCF or its providers.  It takes a buffer and a size as well as an estimated amount of entropy in the buffer. There are no callers in Solaris that provide a non-zero value for the estimate of entropy to random_add_entropy().  The only caller of random_add_entropy() is actually the write(2) entry point for /dev/random.

Seeding is performed by finding the first available software entropy provider plugged into KCF and calling its KCF_SEED_RANDOM entropy function. The term "software" here distinguishes providers driven by CPU instructions or pure software from those driven by a dedicated device driver.  For example the n2rng provider is device driver driven ("hardware"), while the architecture-based Intel RDRAND is regarded as "software".  The terminology is for legacy reasons from the early years of the Solaris cryptographic framework.  This does however mean we never attempt to seed the hardware RNG on SPARC S2 or S3 core based systems (T2 through M6 inclusive), but we will attempt to do so on Intel CPUs with RDRAND.

2.5 Extraction for /dev/random

We treat the rndpool as a circular buffer, with findex and rindex tracking the front and back respectively; both start at position 0 during initialisation.

To extract randomness from the pool we use kcf_rnd_get_bytes(); this is a non-blocking call, and it returns EAGAIN if there is insufficient randomness available (i.e. rnbyte_cnt is less than the request size) and 0 on success.

It calls rnd_get_bytes() with the rndpool_lock held; the lock is released by rnd_get_bytes() in both the success and failure cases.  If the number of bytes requested of rnd_get_bytes() is less than or equal to the number of available bytes (rnbyte_cnt) then we call rndc_getbytes() immediately, i.e. we use the randomness from the pool.  Otherwise we release the rndpool_lock and call rngprov_getbytes() with the number of bytes we want. If that still wasn't enough we loop, picking up as many bytes as we can by successive calls; if at any time the rnbyte_cnt in the pool is less than 20 bytes we wait on the read condition variable (rndpool_read_cv) and try again when we are woken up.
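The overall extraction flow can be summarised in a sketch. The helper prototypes below stand in for the real kernel routines and locking is omitted entirely; only the pool-versus-provider decision and the 20-byte low-water mark from the text are represented.

    /* Sketch of the rnd_get_bytes() flow: use the cached pool when it can
     * satisfy the request, otherwise pull from the providers and, if the
     * pool runs very low, block until the mixing task refills it. */
    #include <stddef.h>
    #include <stdint.h>

    #define MINEXTRACTBYTES 20      /* low-water mark before readers block */

    extern size_t rnbyte_cnt;                                /* cached bytes      */
    extern void   rndc_getbytes(uint8_t *buf, size_t n);     /* take from pool    */
    extern size_t rngprov_getbytes(uint8_t *buf, size_t n);  /* ask KCF providers */
    extern void   wait_for_entropy(void);                    /* cv_wait_sig(...)  */

    static int
    rnd_get_bytes_sketch(uint8_t *buf, size_t len)
    {
        while (len > 0) {
            if (rnbyte_cnt >= len) {          /* enough cached: use the pool */
                rndc_getbytes(buf, len);
                return 0;
            }
            size_t got = rngprov_getbytes(buf, len);   /* ask the providers  */
            buf += got;
            len -= got;
            if (len > 0 && rnbyte_cnt < MINEXTRACTBYTES)
                wait_for_entropy();           /* block until the pool refills */
        }
        return 0;
    }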

rngprov_getbytes() finds the first available provider that is plugged into KCF and calls its KCF_OP_RANDOM_GENERATE function.  This function is also used by the KCF timer for scheduled mixing (see the discussion above).  It cycles through each available provider until either there are no more available or the requested number of bytes has been obtained.  It returns to the caller the number of bytes it retrieved from all of the providers combined.

If no providers were available then rngprov_getbytes() returns an error and logs an error to the system log for the administrator.  A default configuration of Solaris (and the one required by the FIPS 140-2 security target) has at least the 'swrand' provider.  A Solaris instance running on SPARC S2 or S3 cores (T2 through M6 inclusive) will also have the n2rng provider configured and available.

2.6 KCF Random Providers

KCF has the concept of "hardware" and "software" providers.  The terminology is a legacy one from before hardware support for cryptographic algorithms and random number generation was available as unprivileged CPU instructions.

It now really maps to "hardware" being a provider that has a specific device driver, such as n2rng, and "software" meaning CPU instructions or some other pure software mechanism.  It doesn't mean that there is no "hardware" involved: on Intel CPUs the RDRAND instruction calls live in the swrand provider, but swrand is still regarded as a "software" provider.

2.6.1 swrand: Random Number Provider

All Solaris installs have a KCF random provider called "swrand". This provider periodically collects unpredictable input and processes it into a pool of entropy; it implements its own mixing (distinct from that at the KCF level), extraction, and generation algorithms.


It uses a pool called srndpool of 256 bytes and a leftover buffer of 20 bytes.

The swrand provider has two different entropy sources:

1. By reading blocks of physical memory and detecting if changes occurred in the blocks read.

Physical memory is divided into blocks of fixed size.  A block of memory is chosen from the possible blocks and hashed to produce a digest.  This digest is then mixed into the pool.  A single bit from the digest is used as a parity bit or "checksum" and compared against the previous "checksum" computed for the block.  If the single-bit checksum has not changed, no entropy is credited to the pool.  If there is a change, then the assumption is that at least one bit in the block has changed.  The number of possible locations within the memory block where the bit change could have occurred is used as a measure of entropy.

For example, if a block size of 4096 bytes is used, about log_2(4096*8)=15 bits worth of entropy is available.  Because the single-bit checksum will miss half of the changes, the amount of entropy credited to the pool is doubled when a change is detected.  With a 4096 byte block size, a block change will add a total of 30 bits of entropy to the pool.

2. By measuring the time it takes to load and hash a block of memory and computing the differences in the measured time.

This method measures the amount of time it takes to read and hash a physical memory block (as described above).  The time measured can vary depending on system load, scheduling and other factors.  Differences between consecutive measurements are computed to come up with an entropy estimate.  The first, second, and third order deltas are calculated to determine the minimum delta value.  The number of bits present in this minimum delta value is the entropy estimate (a sketch of this calculation appears after the list below).

3. Additionally, on x86 systems that have the RDRAND instruction we take entropy from it, but assume only 10% entropic density.  If the RDRAND instruction is not available, or the call to use it fails (CF=0), then only the above two entropy sources are used.
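The delta-based estimate from source 2 is easier to see in code. The sketch below is an assumption-laden illustration (the kernel's exact arithmetic and data types may differ): it computes the first, second and third order differences of consecutive timing measurements, takes the smallest magnitude, and counts its significant bits.

    /* Sketch of the timing-entropy estimate: the fewer bits in the smallest
     * delta, the less entropy is credited for this measurement. */
    #include <stdint.h>
    #include <stdlib.h>

    static unsigned
    bit_count(uint64_t v)                 /* number of significant bits */
    {
        unsigned n = 0;
        while (v != 0) { n++; v >>= 1; }
        return n;
    }

    static unsigned
    timing_entropy_estimate(const int64_t t[4])  /* 4 consecutive timings */
    {
        int64_t d1 = t[3] - t[2];                          /* 1st order delta */
        int64_t d2 = d1 - (t[2] - t[1]);                   /* 2nd order delta */
        int64_t d3 = d2 - ((t[2] - t[1]) - (t[1] - t[0])); /* 3rd order delta */

        uint64_t m = (uint64_t)llabs(d1);
        if ((uint64_t)llabs(d2) < m) m = (uint64_t)llabs(d2);
        if ((uint64_t)llabs(d3) < m) m = (uint64_t)llabs(d3);

        return bit_count(m);              /* entropy credited, in bits */
    }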

2.6.1.1 Initialisation of swrand

Since physical memory can change size, swrand registers with the Solaris DR (dynamic reconfiguration) subsystem so that it can update its cache of the number of blocks of physical memory when it either grows or shrinks.

On initial attach the fips_rng_post() function is run.

During initialisation the swrand provider adds entropy from the high resolution time since boot and the current time of day (note that due to the module load system and how KCF providers register, these values will always be different from the values that the KCF rndpool is initialised with). It also adds in the initial state of physical memory, using the number of blocks and the sources described above.

The first 20 bytes from this process are used as the XKEY and are also saved as the initial value of previous_bytes for use with the FIPS 186-2 continuous test.

Only after all of the above does the swrand provider register with the cryptographic framework for both random number generation and seeding of the swrand generator.

2.6.1.2 swrand entropy generation

swrand_get_entropy() is where all the real work happens when the KCF random pool calls into swrand.  This function can be called in either blocking or non-blocking mode; the only difference is that the latter returns EAGAIN if there is insufficient entropy to generate the randomness, while the former blocks indefinitely.

A global uint32_t entropy_bits is used to track how much entropy is available.

When a request is made to swrand_get_entropy() we loop until we have built up the requested amount of randomness.  First we check whether the amount of entropy remaining in the srndpool is below 20 bytes; if it is, we block waiting for more entropy (or return EAGAIN in non-blocking mode).

Then we determine how many bytes of entropy to extract: the minimum of the total requested and 20 bytes.  The entropy extracted from the srndpool is then hashed using SHA1 and fed back into the pool starting at the previous extraction point.  We ensure that we don't feed the same entropy back into the srndpool at the same position; if we do, the system will force a panic when in FIPS 140 mode, or log a warning and return EIO when not in FIPS 140 mode.

The FIPS 186-2 Appendix 3 fips_random_inner() function is then run on that same SHA1 digest and the resulting output is checked to ensure that each 20-byte block meets the continuous RNG test; if that fails we panic or warn as above.


We then update the output buffer and continue the loop until we have generated the requested amount.  Before swrand_get_entropy() returns it zeros out the used SHA1 digest and any temporary area, and releases the srndpool mutex lock.
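A much-simplified userspace sketch of this extract/hash/feed-back loop is shown below. OpenSSL's SHA1() stands in for the kernel's SHA-1 implementation, the fips_random_inner() step and the FIPS continuous test are only noted in comments, and all locking and blocking behaviour is omitted.

    /* Sketch of swrand extraction: hash a 20-byte chunk of the pool, XOR the
     * digest back in at the previous extraction point, and emit output. */
    #include <openssl/sha.h>
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define POOLSIZE 256
    #define HASHSIZE SHA_DIGEST_LENGTH     /* 20 bytes */

    static uint8_t srndpool[POOLSIZE];
    static size_t  pool_pos;               /* previous extraction point */

    static void
    swrand_extract_sketch(uint8_t *out, size_t len)
    {
        uint8_t chunk[HASHSIZE], digest[HASHSIZE];

        while (len > 0) {
            size_t n = len < HASHSIZE ? len : HASHSIZE;

            /* Hash 20 bytes of the pool starting at the extraction point. */
            for (size_t i = 0; i < HASHSIZE; i++)
                chunk[i] = srndpool[(pool_pos + i) % POOLSIZE];
            SHA1(chunk, HASHSIZE, digest);

            /* Feed the digest back into the pool at that same point. */
            for (size_t i = 0; i < HASHSIZE; i++)
                srndpool[(pool_pos + i) % POOLSIZE] ^= digest[i];
            pool_pos = (pool_pos + HASHSIZE) % POOLSIZE;

            /* The real code now runs fips_random_inner() on the digest and
             * applies the continuous-output test; here we just copy it out. */
            memcpy(out, digest, n);
            out += n;
            len -= n;
        }
        memset(chunk, 0, sizeof (chunk));     /* scrub temporaries */
        memset(digest, 0, sizeof (digest));
    }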

2.6.1.3 Adding to the swrand pool

The swrand_seed_random() function is used to request the addition of entropy from an external source via the KCF random_add_entropy() call. If it is called from KCF (i.e. something external to swrand itself) synchronously, then the entropy estimate is always 0.  When called asynchronously we delay adding in the entropy until the next mixing time.

The internal swrand_add_entropy() call deals with updating the srndpool; it does so by adding and then mixing the bytes while holding the srndpool mutex lock.  Thus the pool is always mixed before returning.

2.6.1.4 Mixing the swrand pool

The swrand provider uses the same timeout mechanism for mixing that is described above for the KCF rndpool, adding new entropy to the srndpool using the sources described above.

The swrand_mix_pool() function is called as a result of the timeout or an explicit request to add more entropy.

To mix the pool we first add in any deferred bytes, then slide along the pool in 64-bit chunks: at each step we hash, with SHA1, the data from the start of the pool up to and including the current position, XOR the resulting hash back into the current chunk, and move along.
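The sliding mix can be sketched as follows, again with OpenSSL's SHA1() standing in for the kernel hash and with deferred-byte handling and locking omitted; the 8-byte stride corresponds to the 64-bit chunks described above.

    /* Sketch of swrand_mix_pool(): hash from the start of the pool up to and
     * including the current 8-byte chunk, XOR the digest into that chunk,
     * then slide along to the next chunk. */
    #include <openssl/sha.h>
    #include <stdint.h>
    #include <stddef.h>

    #define POOLSIZE 256
    #define STRIDE   8

    static uint8_t srndpool[POOLSIZE];

    static void
    swrand_mix_pool_sketch(void)
    {
        uint8_t digest[SHA_DIGEST_LENGTH];

        for (size_t pos = 0; pos < POOLSIZE; pos += STRIDE) {
            /* Hash everything from the start of the pool through this chunk. */
            SHA1(srndpool, pos + STRIDE, digest);

            /* XOR the first 8 digest bytes back into the current chunk. */
            for (size_t i = 0; i < STRIDE; i++)
                srndpool[pos + i] ^= digest[i];
        }
    }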

2.6.2 n2rng random provider

This applies only to SPARC processors with either an S2 core (T2, T3, T3+) or an S3 core (T4, T5, M5, M6); both CPU families use the same n2rng driver and the same on-chip system for the RNG.

The n2rng driver provides the interface between the hyper-privileged access to the RNG registers on the CPU and KCF.

The driver performs attach-time diagnostics on the hardware to ensure it continues operating as expected.  It determines whether it is operating in FIPS 140-2 mode via its driver.conf(5) file before its attach routine has completed. The full hardware health check is run in conjunction with the hypervisor, and only when running in the control domain. The FIPS 140 checks are always run regardless of the hypervisor domain type.  If the FIPS 140 POST checks fail the driver ensures it is deregistered from KCF.

If the driver is suspended and resumed it reconfigures and re-registers with KCF.  This would happen on a suspend/resume cycle or during live  migration or system reconfiguration.

External seeding of n2rng is not possible from outside of the driver, and it does not provide the seed_random operation to KCF.

The algorithm used by n2rng is very similar to that of swrand: it loops collecting entropy and building up the requested number of bytes, checking that each piece of entropy differs from the previous one, applying the fips_random_inner() function, and then checking that the resulting processed bytes differ from the previous set.

The entropy collection function n2rng_getentropy() is the significant difference between n2rng and swrand in how they service random data requests from KCF callers.

n2rng_getentropy() returns the requested number of bytes of entropy by using hypervisor calls to hv_rng_data_read(), with error checking so that we can retry on certain errors but eventually give up after a period of time or a number of failed attempts at reading from the hypervisor.   The function hv_rng_data_read() is a short fragment of assembler code that reads a 64-bit value from the hypervisor RNG register (HV_RNG_DATA_READ 0x134); it is only called by n2rng_getentropy() and the diagnostic routine invoked at driver attach and resume time.

3.0 FIPS 186-2: fips_random_inner()

We will discuss this function here because it is common to swrand, n2rng, and the /dev/urandom implementation, as well as being used in the userspace function fips_get_random().

It is a function completely internal to Solaris that cannot be used from outside of the cryptographic framework.

    fips_random_inner(uint32_t *key, uint32_t *x_j, uint32_t *XSEED_j)

It computes a new random value, which is stored in x_j, and updates XKEY.  XSEED_j is additional input.  In principle we should protect XKEY, perhaps by placing it in non-paged memory, but we always clobber XKEY with fresh entropy just before we use it, and step 3d irreversibly updates it just after we use it.  The only risk is that if an attacker captured the state while the entropy generator was broken, the attacker could predict future values. There are two cases:

  1. The attacker gets root access to a live system.  But there is no defense we can place here against that, since they already have full control.
  2. The attacker gets access to a crash dump.  But by then no values are being generated.


Note that XSEED_j is overwritten with sensitive stuff, and must be zeroed by the caller.  We use two separate symbols (XVAL and XSEED_j) to make each step match the notation in FIPS 186-2.

All parameters (key, x_j, XSEED_j) are the size of a SHA-1 digest, 20 bytes.

The HASH function used is SHA1.
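A hedged sketch of the update that fips_random_inner() performs is shown below. The parameter types are simplified to 20-byte arrays (the real prototype, shown above, takes uint32_t pointers), and OpenSSL's SHA1() stands in for the FIPS 186-2 one-way function G, which is really the raw SHA-1 compression function; the structure of steps 3b, 3c and 3d follows Appendix 3.1.

    /* Sketch of FIPS 186-2 Appendix 3.1:
     *   XVAL = (XKEY + XSEED_j) mod 2^160
     *   x_j  = G(t, XVAL)
     *   XKEY = (1 + XKEY + x_j) mod 2^160      (step 3d, irreversible) */
    #include <openssl/sha.h>
    #include <stdint.h>
    #include <string.h>

    #define SHA1_BYTES 20

    /* a = (a + b) mod 2^160, operands as big-endian 20-byte integers. */
    static void
    add160(uint8_t a[SHA1_BYTES], const uint8_t b[SHA1_BYTES])
    {
        unsigned carry = 0;
        for (int i = SHA1_BYTES - 1; i >= 0; i--) {
            unsigned s = a[i] + b[i] + carry;
            a[i] = (uint8_t)s;
            carry = s >> 8;
        }
    }

    static void
    fips_random_inner_sketch(uint8_t XKEY[SHA1_BYTES], uint8_t x_j[SHA1_BYTES],
                             const uint8_t XSEED_j[SHA1_BYTES])
    {
        uint8_t XVAL[SHA1_BYTES];
        static const uint8_t one[SHA1_BYTES] = { [SHA1_BYTES - 1] = 1 };

        /* XVAL = (XKEY + XSEED_j) mod 2^160 */
        memcpy(XVAL, XKEY, SHA1_BYTES);
        add160(XVAL, XSEED_j);

        /* x_j = G(t, XVAL); SHA1() is an illustrative stand-in for G. */
        SHA1(XVAL, SHA1_BYTES, x_j);

        /* XKEY = (1 + XKEY + x_j) mod 2^160 */
        add160(XKEY, one);
        add160(XKEY, x_j);

        memset(XVAL, 0, sizeof (XVAL));   /* scrub the sensitive intermediate */
    }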

The implementation of this function is verified during POST by fips_rng_post() calling it with a known seed.  The POST call is performed before the swrand module registers with KCF, or during initialisation of any of the libraries in the FIPS 140 boundary (before their symbols are available to be called by other libraries or applications).

4.0 /dev/urandom

This is a software-based generator algorithm that uses the random bits in the cache as a seed. We create one pseudo-random generator (for /dev/urandom) per possible CPU on the system, and use it, kmem-magazine-style, to avoid cache line contention.

4.1 Initialisation of /dev/urandom

kcf_rnd_init() calls rnd_alloc_magazines(), which sets up the empty magazines for the pseudo-random number pool (/dev/urandom). A separate magazine per CPU is configured, up to the maximum number of possible (not just available) CPUs on the system; this is important because we can add more CPUs after initial boot.

The magazine initialisation discards the first 20 bytes so that the rnd_get_bytes() function can use them for the comparison that ensures the next block always differs from the previous one.  It then places the next 20 bytes into rm_key and the next 20 bytes after that into rm_seed.  It does this for each of the max_ncpus magazines.  Only after this is complete does kcf_rnd_init() return back to kcf_init().  Each of the per-CPU magazines has its own state, which includes an HMAC key, seed and previous value; each also has its own rekey timers and limits.

The magazines are only used for the pseudo-random number pool (i.e. servicing random_get_pseudo_bytes() and /dev/urandom), not for random_get_bytes() or /dev/random.

Note that this usage is preemption-safe; a thread entering a critical section remembers which generator it locked and unlocks the same one; should it be preempted and wind up running on a different CPU, there will be a brief period of increased contention before it exits the critical section but nothing will melt.

4.2 /dev/urandom generator

At a high level this uses the FIPS 186-2 algorithm with a key extracted from the random pool, generating a maximum of 1310720 output blocks before rekeying.  Each CPU (meaning a CPU hardware thread, not a socket or core) has its own magazine.

4.3 Reading from /dev/urandom

The maximum request size that will be serviced for a single read(2) system call on /dev/urandom is 133120 bytes.

Reads all come in via the kcf_rnd_get_pseudo_bytes() function.

If the requested size is considered to be large (greater than 2560 bytes), then instead of reading from the pool we tail-call the generator directly via rnd_generate_pseudo_bytes().

If the CPU's magazine already has sufficient randomness available we use that; otherwise we call the rnd_generate_pseudo_bytes() function directly.

rnd_generate_pseudo_bytes() is always called with the CPU magazine mutex already locked, and the mutex is released when it returns.

We loop through the following until the requested number of bytes has been built up or an unrecoverable error occurs.

rm_seed is reinitialised by XORing the current 64-bit high-resolution time, from gethrtime(), into the prior value of rm_seed.  The fips_random_inner() call is then made using the current value of rm_key and this new seed.

The value returned by fips_random_inner() is then checked against our previous return value to ensure it is a different 160-bit block.  If that check fails the system panics when in FIPS 140-2 mode, or returns EIO if FIPS mode is not enabled.

Before returning from the whole function the local state is zeroed out and the per-magazine lock released.
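Putting the /dev/urandom steps together, here is an illustrative sketch of a single per-magazine generation step; it assumes the fips_random_inner_sketch() routine shown earlier is available, approximates gethrtime() with clock_gettime(), and simply reports failure where the kernel would panic or return EIO.

    /* Sketch of one /dev/urandom block: mix the time into rm_seed, run the
     * FIPS 186-2 update with rm_key, and require the new 160-bit block to
     * differ from the previous one. */
    #include <stdint.h>
    #include <string.h>
    #include <time.h>

    #define BLOCK 20

    extern void fips_random_inner_sketch(uint8_t key[BLOCK], uint8_t out[BLOCK],
                                         const uint8_t seed[BLOCK]);

    struct magazine {
        uint8_t rm_key[BLOCK];
        uint8_t rm_seed[BLOCK];
        uint8_t rm_previous[BLOCK];
    };

    static int
    urandom_block_sketch(struct magazine *m, uint8_t out[BLOCK])
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        uint64_t hrt = (uint64_t)ts.tv_sec * 1000000000u + (uint64_t)ts.tv_nsec;

        /* XOR the 64-bit high resolution time into the prior seed. */
        for (int i = 0; i < 8; i++)
            m->rm_seed[i] ^= (uint8_t)(hrt >> (8 * i));

        fips_random_inner_sketch(m->rm_key, out, m->rm_seed);

        /* Continuous test: the new block must differ from the last one. */
        if (memcmp(out, m->rm_previous, BLOCK) == 0)
            return -1;                   /* EIO / panic in the real code */
        memcpy(m->rm_previous, out, BLOCK);
        return 0;
    }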

5.0 Randomness for key generation

For asymmetric key generation inside the kernel a special random_get_nzero_bytes() API is provided.  It differs from random_get_bytes() in two ways.  First, it calls the random_get_bytes_fips140() function, which only returns once all FIPS 140-2 initialisation has been completed; the random_get_bytes() function needs to be available slightly earlier because some very early kernel functions need it (particularly setup of the VM system, and ZFS if it needs to do any writes as part of mounting the root filesystem).  Secondly, it ensures that no bytes in the output have the value 0: those are replaced with freshly extracted additional random bytes, and it continues until the entire requested length is made up of non-zero bytes.
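A small sketch of the non-zero filtering just described is shown below; the function-pointer indirection is only there to keep the example self-contained, where the real code calls random_get_bytes_fips140() directly.

    /* Sketch of random_get_nzero_bytes(): fill the buffer, then replace any
     * zero bytes with freshly drawn ones until none remain. */
    #include <stddef.h>
    #include <stdint.h>

    typedef int (*getbytes_fn)(uint8_t *buf, size_t len);

    static int
    get_nzero_bytes_sketch(getbytes_fn getbytes, uint8_t *buf, size_t len)
    {
        int err = getbytes(buf, len);
        if (err != 0)
            return err;

        for (size_t i = 0; i < len; i++) {
            while (buf[i] == 0) {            /* replace zeros one at a time */
                err = getbytes(&buf[i], 1);
                if (err != 0)
                    return err;
            }
        }
        return 0;
    }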

A corresponding random_get_nzero_pseudo_bytes() is also available for cases where we don't want 0 bytes in other random sequences, such as session keys, nonces and cookies.

The above two functions ensure that even though most of the random pool is available early in boot, we can't use it for key generation until the full FIPS 140-2 POST and integrity check has completed, e.g. on the swrand provider.

6.0 Userspace random numbers

Applications that need random numbers may read directly from /dev/random and /dev/urandom, or may use a function implementing the FIPS 186-2 RNG requirements.

The cryptographic framework libraries in userspace provide the following
internal functions:

    pkcs11_get_random(), pkcs11_get_urandom()
    pkcs11_get_nzero_random(), pkcs11_get_nzero_urandom()

The above functions are available from the libcryptoutil.so library but are private to Solaris. Similar to the kernel space there are pkcs11_get_nzero_random() and pkcs11_get_nzero_urandom() variants that ensure none of the bytes are zero.  The pkcs11_ prefix is because these are private functions mostly used for the implementation of the PKCS#11 API.  The Solaris private ucrypto API does not provide key generation functions.

The pkcs11_softtoken C_GenerateRandom() function is implemented by calling pkcs11_get_urandom().

When pkcs11_softtoken is performing key generation, C_GenerateKey() or C_GenerateKeyPair(), it uses pkcs11_get_random() for persistent (token) keys and pkcs11_get_urandom() for ephemeral (session) keys.

The above mentioned internal functions generate random numbers in  the following way.

While holding the pre_rnd_mutex (which is per userspace process), pkcs11_get_random() reads 20-byte chunks from /dev/random and calls fips_get_random() on each 20 bytes, continuing in a loop building up the output until the caller's requested number of bytes has been retrieved or an unrecoverable error occurs (in which case it will kill the whole process using abort() when in FIPS 140-2 mode).

fips_get_random() performs a continuous test by comparing the bytes taken from /dev/random. It then performs a SHA1 digest of those bytes and calls fips_random_inner().  It then again performs the byte-by-byte continuous test.

When the caller's requested number of bytes has been read and post-processed, the pre_rnd_mutex is released and the bytes are returned to the caller from pkcs11_get_random().

The initial seed and XKEY for fips_random_inner() are set up during the initialisation of the libcryptoutil library, before the main() of the application is called or any of the functions in libcryptoutil are available. XKEY is set up by feeding the current high resolution time into the seed48() and drand48() functions to create a buffer of 20 bytes, which is then digested through SHA1 and becomes the initial XKEY value.  XKEY is then updated each time fips_random_inner() is called.
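The initialisation just described might look roughly like the sketch below; the exact way the drand48() output is turned into 20 bytes is an assumption made for illustration, and OpenSSL's SHA1() again stands in for the library's SHA-1 code.

    /* Sketch of the libcryptoutil XKEY initialisation: seed drand48() from
     * the high resolution time, draw 20 bytes, and SHA-1 them into XKEY. */
    #include <openssl/sha.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <time.h>

    static void
    init_xkey_sketch(uint8_t xkey[SHA_DIGEST_LENGTH])
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        uint64_t hrt = (uint64_t)ts.tv_sec * 1000000000u + (uint64_t)ts.tv_nsec;

        /* Seed the 48-bit generator from the high resolution time. */
        unsigned short seedv[3] = {
            (unsigned short)hrt,
            (unsigned short)(hrt >> 16),
            (unsigned short)(hrt >> 32),
        };
        seed48(seedv);

        /* Build a 20-byte buffer from drand48() output ... */
        uint8_t buf[SHA_DIGEST_LENGTH];
        for (int i = 0; i < SHA_DIGEST_LENGTH; i++)
            buf[i] = (uint8_t)(drand48() * 256.0);

        /* ... and digest it with SHA-1 to form the initial XKEY. */
        SHA1(buf, sizeof (buf), xkey);
    }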

pkcs11_get_urandom() follows exactly the same algorithm as pkcs11_get_random() except that the reads are from /dev/urandom instead of /dev/random.

When a userspace program forks, pthread_atfork() handlers ensure that requests to retrieve randomness are locked out during the fork.

 


I hope this is useful and/or interesting insight into how Solaris generates randomness.


How to Integrate with the Cloud - Oracle OpenWorld Customer Panel


Hear customer experts describe how they integrated their existing on-premises applications with the cloud.  Here is a peek into a slide to be presented by customer panelist JDSU on using Oracle SOA Suite to integrate with Salesforce.  Andrew Randall, Senior Manager of Information Technology at JDSU, will provide step-by-step guidance, lessons learned, and where they are going next in his portion of the session titled:

Cloud Seeding: Using Oracle SOA to Enrich Salesforce.com with Back Office Data

Additional experts on the panel include Asheesh Srivastava, Manager of IT Delivery at Xerox, and Bharat Raval, a Manager with IEEE.

For more information on this Cloud Panel, please select the session titled "Oracle SOA Suite Customer Panel: Unifying Cloud Applications with On-Premises Applications" in the Oracle SOA Focus On document.  Hope to see you there on Tuesday Sept 24th at 12:00 PT.

New MySQL Certification Exams


Want to prove your expertise with MySQL? Why not get certified?

Oracle has just announced the release of the new MySQL Certification Path including the:

You can register for these certification exams from today. Learn more about these certifications on the Oracle University blog.

Successful candidates will require strong MySQL experience. Although there is no required training for these certifications, the following courses are highly recommended:

For more information on the authentic MySQL curriculum, go to http://oracle.com/education/mysql.

Should You Consolidate Your Servers Onto Oracle SuperCluster?

"Are you planning to consolidate a server running a business-critical application that you want to update with future releases over upcoming years, or are you trying to get rid of an old server running a legacy application that will not be updated anymore?"

This is just one of the questions Thierry asks in his article, which is a great resource for sysadmins, systems architects, and IT managers who are trying to decide whether to consolidate individual servers onto an Oracle SuperCluster. Your answer will determine whether you should put your application in a native or non-native Oracle Solaris zone.

Other questions Thierry and friends ask:

  • Is my server eligible for physical-to-virtual (P2V) migration?
  • Are you planning a long-term or short-term migration?
  • How critical are performance and manageability?

Once he has helped you determine your general direction, he discusses these architectural considerations:

  • SuperCluster domains
  • Network setup
  • VLAN setup
  • Licensing considerations

Finally, he provides thorough step-by-step instructions for the migration itself, which consists of:

  • Performing a sanity check on the source server
  • Creating a FLAR image of the source system
  • Creating a ZFS pool for the zone
  • Creating and booting the zone
  • Performance tuning

And just in case you're still not sure how it's done, he concludes with an example that shows you how to consolidate an Oracle Solaris 8 Server Running Oracle Database 10g. It's all here, give it a good read:

Technical Article: If Virtualization Is Free, It Can't Be Good, Right?

Article by Thierry Manfé, with contributions from Orgad Kimchi, Maria Frendberg, and Mike Gerdts

Best practices and hands-on instructions for using Oracle Solaris Zones to consolidate existing physical servers and their applications onto Oracle SuperCluster using the P2V migration process, including a step-by-step example of how to consolidate an Oracle Solaris 8 server running Oracle Database 10g.

Video Interview: Design and Uses of the Oracle SuperCluster

Interview with Allan Packer

Allan Packer, Lead Engineer of the Oracle SuperCluster architecture team, explains how the design of this engineered system supports consolidation, multi-tenancy, and other objectives popular with customers.

By the way, that's a picture of an 01 Ducati 748 that I took in the Fall of 2012.

- Rick

Follow me on:
Blog | Facebook | Twitter | YouTube | The Great Peruvian Novel

Architecture, Architects and Java!


Architects get plenty of attention at JavaOne! The content for JavaOne 2013 in San Francisco is organized into eight tracks and covers eight different roles, including software architects. Here's just a small sampling of the nearly 220 sessions available for architects:

Session, title, and presenters:

  • CON5657 - Pragmatic Big Data Architectures in the Cloud: A Developer's Perspective - Fernando Babadopulos, CTO, TailTarget; Fabiane Nardon, Chief Scientist, TailTarget
  • CON2020 - Building Modular Cloud Applications in Java: Lessons Learned - Paul Bakker, Software Architect, Luminis Technologies; Bert Ertman, Fellow, Luminis
  • CON1967 - Getting Serious with Versioned APIs in Scala - Derrick Isaacson, Director of Development, Lucidchart
  • CON6494 - Tutorial: Building Modular Enterprise Applications in the Cloud Age - Paul Bakker, Architect, Luminis; Bert Ertman, Fellow, Luminis
  • CON6009 - (Dev && Ops).toPublicCloud() - Cyrille Le Clerc, Solution Architect, CloudBees
  • TUT7861 - Developing Java EE Connector Architecture–Based Resource Adapters Made Easy - Dapeng Hu, Oracle
  • CON7872 - Trust Me, I'm an M2M Device - Noel Poore, Architect, Oracle
  • CON2959 - Modular JavaScript - Paul Bakker, Architect, Luminis Technologies; Sander Mak, Engineer, Luminis Technologies
  • CON3921 - How Lucidchart Scales with Play, Akka, and Scala - Ryan Knight, Typesafe Reactive Consultant, Typesafe; Brian Pugh, VP of Engineering, Lucid Software
  • BOF4253 - Instant Distribution of Updates to Hundreds of Millions of Users - Zbynek Slajchrt, Java Architect, Avast; Lukas Karas, Software Developer, Avast
For more information and a complete listing of sessions by and for architects check out the JavaOne 2013 Content Catalog.

Going to Oracle OpenWorld and/or JavaOne? Attend OTN Kick Off Event Sunday


The first event in the OTN Lounge @ Oracle OpenWorld (Moscone South Lobby) will be the OTN Kick Off Event – Sunday September 23rd -  3 to 5pm.

Many of the leads for the activities taking place in the OTN Lounge will be on hand to give a short 10 to 15 minute overview.  After learning about their plans, then you can follow them to our ‘secret’ location for refreshments and to continue the discussion. Schedule is as follows (subject to change)-
  • 3 pm - Roland Smart, Vice President of Social and Community Marketing – Welcoming Comments
  • 3:15pm - APEX Dev Challenge/David Peake
  • 3:30pm – OTN forums, what's new and what's next / Sonya Barry
  • 4:00 pm  - Java Embedded / James Allen
  • 4:15 - Oracle 12c Multitenant / Pluggable DB --Jenny Tsai
  • 4:30 - Fusion Dev Challenge / Oliver
  • 4:45  - RAC Attack / Yury Velikanov, Team Technical Lead, Pythian
We look forward to seeing you there!  If you don't have your Oracle OpenWorld or JavaOne passes yet maybe a Discover pass for only $75 is your best bet! 


UKOUG Conference - Three presentations


Ok, my two hour presentation at UKOUG is now split into two one hour presentations. So my schedule now looks like:

  • Monday 2nd December: Getting the most out of Oracle Solaris Studio
  • Monday 2nd December: Where code meets the processor - performance tuning C/C++ applications
  • Wednesday 4th December: Multicore, Multiprocess, Multithread

I'm very pleased that I've got three separate hour long sessions. The material better fits this distribution, plus I really don't think that people could sit comfortably for two hours.

I'll be hanging out at the conference for the entire week, so please do take the time to find me for a chat.


Oracle OpenWorld: Context and Risk-Aware Access Control: Any Device Anywhere


Are you attending Oracle OpenWorld 2013?  What are you doing to manage access to information, from any device, anytime...and from anywhere?

Customers expect consistent levels of service across laptop, tablet, and smartphone, but the very nature of mobile computing introduces a whole new range of security concerns, such as device type and configuration, location, type of connection, data to be accessed, and transactions to be performed. All of these factors are to be evaluated at runtime in an authentication and authorization decision. Essentially, a security system must adapt to changes in context and risk level at the time of the request. This session will help you understand how Oracle’s access management technology intelligently reacts to changes in context across a variety of devices to maximize levels of security and control.  REGISTER NOW for this session at this year's Oracle OpenWorld 2013.  For a complete listing of Security focused tracks at this year's OOW2013, please click HERE


Oracle SQL Developer v4EA2 Is Now Available


EA1 dropped in July. Now here we are a little more than a week before Oracle Open World and we are making Early Adopter 2 of Oracle SQL Developer version 4 available for you to download and provide feedback.

There are two big things you need to know about:

  1. About 500 bug fixes
  2. Support for ADDM/AWR/ASH

Bug Fixes

Thank you, thank you, thank you for your feedback on the Forums, Twitter, our blogs, and any other way you might be able to find us. We want to make version 4 our best release yet and we won’t release the product officially until it meets our high standards and expectations.

That being said, we’re getting a lot closer!

I’m sure we’ve created a few more bugs for you to play with, so don’t be shy, and keep up the feedback!

ADDM/AWR/ASH

These are very popular features of Oracle Database Enterprise Edition’s Diagnostic Pack. If you click through the ‘Are you sure you want to use this feature’ dialog, then you’ll see these new items in the DBA panel:

Please make sure you can use this stuff before you actually use it.


I don’t have time in this post to go over everything – that’s what I’ll do next week, but here’s a quick sneak peek:

Active Session History, default is last 5 minutes


Once the HTML report is generated, you can browse it inside of SQL Developer, save it to the raw HTML, or auto-browse it in your default browser. I really like this last bit :)

Click the 'Open in Browser' button to get the report out of SQLDev


Much, much, more

You can also manage your snapshots (create new ones or delete existing), create new baselines and feed those baselines to your AWR reports, etc. And like I said, much much more content around these new features coming soon.

How much is your privacy worth?


I have an offer for you.  You tell me some details about yourself, like marital status, number of kids, where you live, your hobbies, etc., and I'll make sure all the advertising you see online and receive in the mail is relevant to you. No more receiving coupons for diapers when your kids are already driving.  No more online ads for dating services when you're married (unless, of course, your hobbies override this).  Fewer credit card applications in the mail (hey, at least they're not as bad as those AOL CDs we used to get).

Seems like a good deal to me.  You get offers you can actually use, the number of mailings that go straight into the trash is cut down, and advertisers get a bump in effectiveness.  It's all good, right?  Wait, you don't like the deal?  What if I can show you you'll actually receive discounts that exceed $100 each year?  That goes straight into your pocket.  No?  Well, how much is your privacy worth?

That's really the big question.  I'd venture to guess that if you're age 99-50 you won't put a price on your privacy.  49-31 will actually give it some thought and provide a number.  Those 30 and under aren't worried about privacy -- they appreciate the reduction in clutter and time savings.

My mother-in-law thinks the NSA is listening to her phone conversations, so my teen-aged son likes to throw in the occasional "bomb" or "hijack" during conversations just to taunt her.  My mother refused to use ATMs.  My dad carefully shredded every piece of discarded mail.  Contrast that with the constant sharing of personal information by teenagers, even in the face of increased identity theft and socially engineered hacks.  It's amazing to step back and look at the dichotomy between the levels of sharing across age groups.

So who's right?  In the words of Jesse Jackson on SNL, "the question is moot."  Retailers must continually adjust to the dynamic tastes of their customers. Is it proper to show a toilet on TV?  In the early days of TV it most certainly was not.  Thankfully, "Leave it to Beaver" broke down society's hang-ups and sneaked one past the censors.

Nordstrom tried an experiment this summer by tracking mobile phones in their stores.  The fact they were doing this was posted at the entrances, and no personally identifiable information was collected.  They just wanted to see how often anonymous shoppers visited, how long they stayed, and their path through the store.  This is really no different than the cookies in your Web browser.  The same information can be obtained using cameras or simply by following people, but the mobile phone makes it much cheaper to do.  It wasn't until the media proclaimed "big brother is watching you shop" that there was backlash.

AdAge recently reported on the D2 Digital Dialogue conference in which Julie Bernard, senior VP-customer strategy, marketing and advertising of Macy's, spoke on retailers collecting and using customer data. "The media has spun this story so negative, and it's really a shame that people in our positions have not taken a more dominant position on speaking on the macro and micro economic benefits of delivering relevancy by responsibly using customer data."  She went on to say, "There's a funny consumer thing.  They're worried about our use of data, but they're pissed if I don't deliver relevance. … How am I supposed to deliver relevance and magically deliver what they want if I don't look at the data?"

Good question.  My recommendation is to keep trying.  Knowing that consumers' attitudes are changing, it's important to "skate to where the puck is going, not where it's been."  This is a journey in which we'll move slowly, at each step ensuring it's always a win-win for both retailers and consumers, and always acting responsibly.

By the way, if you'd like to take me up on that original offer, Acxiom allows you to access and edit the data they've collected on you at aboutthedata.com.  The screen-shot above is from that site.

Learn, Network, and Unwind at the Oracle OpenWorld Exhibition Halls

At Oracle OpenWorld, the Exhibition Halls are packed with partners eager to share their latest breakthroughs, best practices, and more. Plus many of them go out of their way to make the exhibition hall experience as engaging and enjoyable as possible, so don't miss any of the fun and "stuff" they bring to help keep you energized.

There will be more than 500 Oracle Partners participating in one or more of these Exhibition Halls: 

Oracle OpenWorld: Moscone South, Moscone West
Java: Hilton Grand Ballroom
HCM @ OpenWorld: Palace Hotel Garden Court
MySQL Connect: Hilton Yosemite Room 
CX Exhibition Experience: Moscone West, Lobby Level 3 

There will also be Oracle OpenWorld Pavilions featuring technologies including Cloud Solutions, Mobile Enterprise, Linux and Virtualization, JD Edwards, Hyperion, and more.

In addition to networking and learning, the Oracle OpenWorld Experiences are a great place to unwind and have fun.
  • Grab a coffee in the morning and a well-deserved beer in the afternoon at the Tap and Brew, sponsored by NetApp (Room 101 Moscone South and Booth 3841 Moscone West)
  • Try your hand at the Golf Experience, sponsored by Tech Data (1933 Moscone South and 3209 Moscone West) 
  • Roll away at our brand new Bocce Ball Experience, sponsored by Rapid E-Suite (3909 Moscone West).
Whether you’re looking to learn something new, add to your network, relax, or blow off steam, the Exhibition Halls should be on your must-attend list while attending Oracle OpenWorld.

JCP Events at JavaOne


The Java Community Process is the way Java technology gets discussed, changed and improved. Anyone can participate in reviewing and providing feedback for the Java Specification Requests (JSRs), and anyone can sign up to become a JCP Member and then participate on the Expert Group of a JSR or even submit their own JSR Proposals. JavaOne is a great opportunity to meet and influence JCP members (and perhaps become one yourself!). As Java Champion Bruno Souza puts it: "The JCP has done important steps towards transparency, and is now moving forward to increase the participation of individual developers, making it a lot easier for developers to join even when their companies don't want to sign the long JCP documents. But, for this to really work, there must be pressure from developers, showing their interest that those things actually happen." Martijn Verburg of the London Java Community says "I'd *highly* recommend attendance at the public JCP EC meeting. With JavaEE 7 out the door and Java 8 pretty much done, it's a perfect time for you as engineers/developers/technical leaders to have a say in the future of the Java ecosystem."

The JCP office would like to extend an invitation to all developers at JavaOne to attend and participate in the JCP-related activities happening during the week.  Some highlights of events that you will not want to miss are below and more details and sessions are available on JCP.org:
http://jcp.org/en/press/news/JCP_JavaOne2013 .

JCP Community Meeting aka Public JCP EC Meeting (Sunday):
This public meeting is part of JavaOne User Group Sunday - Session IDs: UGF10364 and UGF10365 at Moscone West, Rooms 3020 and 3022. The first 50 attendees will also receive a signed copy of Antonio Goncalves' new book.
Time: 18:00- 19:00. Let us know if you have suggestions for agenda items -the primary focus will be on JCP.Next progress and feedback.


The JCP Party (Monday):
The JCP will hold the 11th Annual JCP Awards ceremony and party on Monday, 23 September at the Hilton Hotel, Cityscape (at the top of the Hotel building). San Francisco, CA, from 18:00 - 21:00.  Food, friends, drinks, door prizes, raffles, Duke photo booth, Java Band, and Arun Gupta book signing! Reserve your spot today:
http://jcp2013.eventbrite.com/

JCP Meet and Greet (Wednesday): 
Location: Hilton San Francisco - Union Square 13 Room
Date and Time: Wednesday, 25 September, 16:00 - 17:00
Food and drinks will be served. RSVP for the meeting to pmo@jcp.org so we'll know to expect you!  This is a great, informal opportunity for JUG leaders, Java Champions and Adopt-a-JSR participants to meet with JCP experts and discuss opportunities to get more involved with the JCP program.

The JCP will also have Expert Drop-in hours in the OTN Lounge if you want to have a chat with us.  They will be there Monday, Tuesday, and Wednesday from 14:30 to 15:00.

Read OTN's JavaOne Conference blog to know everything that's going at JavaOne. For up-to-the-minute updates and reminders, follow @JavaOneConf.

Oracle Identity Management Leveraging Oracle's Engineered Systems


Enterprises deploy Information Technology (IT) applications in various ways today. They may use on-premise physical servers, virtualization, private clouds, public clouds, or a combination thereof. In all cases, the main goals include improving the ease of application deployment, increasing system performance, providing security across the enterprise, and ensuring contained costs.

This white paper presents the business benefits of leveraging Oracle’s engineered systems for deploying and running Oracle Identity Management. Click to read

SOCIAL IN THE ORACLE HCM CLOUD - PART 2


Contact for post: Mark Bennett

Part 1 introduced the challenges and opportunities customers face in effectively using social in their organizations and in particular, their HCM processes. It also gave a brief overview of what other vendors have done and how Oracle’s approach offers a better way. Part 2 describes Oracle’s Social HCM Cloud solution in more detail.

ORACLE'S SOCIAL HCM CLOUD SOLUTION

Oracle Social HCM Cloud embeds a native social platform, OSN (Oracle Social Network), in a full, socially-enabled enterprise application suite. This delivers two very important capabilities that make social a feature that truly generates superior business performance:

1. Enabling social capabilities in existing applications and processes – takes an already powerful suite of enterprise applications and both enhances it with social capabilities and makes social an integral, value-creating part of getting work done.

    • Social platform features embedded in applications
    • Business context embedded in the social platform

    2. Creating new applications that could not exist before – provides powerful applications that utilize social technologies to create value in a way that could not have been done before.

    • Workforce Reputation Management
    • Social Sourcing

    Both of these are possible because we also provide a very robust foundation in the Unified Socially enabled profile.

    UNIFIED, SOCIAL-AUGMENTED PROFILES

    Profiles for employees, partners, and candidates are the heart of the individual and talent management experience. The Oracle Social HCM Cloud unifies the individual profiles maintained by application platforms including Fusion and Taleo, together with profiles of individuals on external networks and services such as LinkedIn, Facebook, and Craigslist. These aggregated or federated profiles are then augmented by the Oracle Social Network in two areas.

    The first is to present this aggregated individual profile data in the Oracle Social Network UI so that users have a richer picture of the person they are looking at in the network. The second is to enhance the ability of Oracle Social Network to make useful recommendations for people to follow or otherwise collaborate with.

    This is a two-way street. Just as the aggregated profile information assists Oracle Social Network with useful information, so too does OSN provide the  individual profile with useful information about the individual’s social role, behavior, and impact. This individual profile augmented with social data will constitute the more complete backbone of employee, candidate, and partner engagement processes, including: recruiting, learning, performance, goals and development.

    ENABLING SOCIAL IN EXISTING APPLICATIONS

    Oracle already has a powerful, integrated suite of enterprise applications that provide tremendous value. By extending these existing applications to effectively leverage social to the fullest extent as a native feature, Oracle not only improves the productivity of employees using these applications to get work done, but also enables companies to more fully realize the potential of using social technologies to improve engagement and generate strategic insight.

    • Goal Management– Goal management is inherently a collaborative social experience. We are adding conversations from Oracle Social Network to this process. This allows for an ongoing dialog between the employee, the manager and the team that is helping drive the goal. Additionally this will help in sharing and collaborating on work that drives goal results. Performance goals will provide the context for OSN Conversations.
    • Performance Management – Whether done periodically as focal reviews or as ongoing conversations, performance management that is actually about not just improving the “what” but also the “how” in the achievement of goals successfully, is inherently a social experience. The development of competencies in support of performance is in particular a coaching conversation between manager and employee. OSN Conversations will be enabled around key business objects such as performance documents as well as development plans and goals.
    • Talent Review and Succession Planning– These processes inherently are collaborative and have multiple stakeholders across HR and the line of business to drive these meetings. Using Oracle Social Network, we are greatly enriching the experience of these processes. OSN Conversations will be enabled around key business objects such as talent review meetings and succession plans.
    • Career Development– Employees are looking for the tools that can really help them take charge of their career. Embedding social into these tools allows employees to tap into each others’ experience and learning to help better understand their options and chart out a way to identify and achieve their career aspirations. Development plans and goals will provide the context for OSN Conversations.
    • Learning– Learning is an inherently social activity when done to really achieve development and growth. Embedding social into the learning experience takes place in two ways: the first is by having employees discuss and rate formal learning available in catalogs, and the second (and often more effective) is to have employees actually create, curate, and share content to help others learn. OSN Conversations will be enabled around key learning business objects such as courses and curricula.
    • Recruiting– Both the candidate as well as the recruiting team experience with recruiting is social in nature. Social can enhance engagement by leveraging and strengthening the reputational aspects vs. simply parsing resumes and best fit algorithms. Job requisitions will provide the context for OSN Conversations.
    • On-boarding– Whether completely new to the organization or just new to a particular role, employees often face a steep learning curve to get up to speed. Much of it is finding out who it is that one needs to know to help get things done, learn what’s going on, etc. Embedding on-boarding into social helps get the right network connections, community memberships, etc. in place automatically for the employee. Jobs and roles will provide the context for OSN Conversations.
    • Benefits Enrollment– Providing a way for employees with similar family or health circumstances to form communities to discuss and learn from each other greatly increases engagement by helping them navigate through the sometimes bewildering maze of options. Along the way, employees build a sense of community amongst themselves and can start to form long term connections.
    SOCIAL PLATFORM FEATURES EMBEDDED IN APPLICATIONS 

    What makes Oracle HCM applications different is that rather than having either a separate or generic, non-collaboration area, the Oracle social platform displays the conversations in the application platform UI shell that are contextually related to the business process or business object being manipulated in the application.

    This enables easy access by users to the conversations that will help access expertise, answer their questions, provide updates to their social network, and make them more productive. They can do this without having to leave the application context, thus avoiding any breach of focus on the task at hand.

    Embedding social in applications improves the overall experience and performance of people who use the applications. It’s important to note however, that it is unlikely that on its own, embedding social in the application will start causing people to use applications they otherwise would not normally use – this is more about making existing users more productive.

    BUSINESS CONTEXT EMBEDDED IN THE SOCIAL PLATFORM

    More and more employees, both in functional as well as sales and production roles, want to access their work through mobile devices. Mobile devices such as smartphones and tablets offer greater freedom and versatility to users, freeing them from being tied to their desktop computers, and even their laptops as well.

    Given that social network usage has also increased through the use of mobile devices, it’s not surprising to see an overlap where employees frequently wish to access their social network to get updates, find answers, etc. in order to help them get their work done and accomplish goals.

    The Social HCM Cloud improves the experience for users by embedding the business context into OSN conversations. This enables users to focus on the collaboration aspects of getting their work done, for example: finding expertise, getting answers to questions, and updating their colleagues. They don’t need to login to the business application to find out what is going on and help someone out with a problem, or continue a discussion about something we are working on.

    By having the business context embedded in the collaboration experience, multiple benefits occur:

    • The first is that the presence of a context keeps the collaboration focused on the task at hand; it communicates and reminds everyone what the purpose of the collaboration is.
    • The second is that if the business process or object the collaboration is about needs to be updated, it can be updated without leaving the collaborative experience (provided the user has the appropriate security permissions).
    • The third is that the activities of all the participants in the collaboration can be implicitly related to this context. This means the organization has better knowledge of the contributions employees make and can relate them directly to the achievement of individual goals and corporate objectives.

    CREATING ENTERPRISE APPLICATIONS THAT COULD NOT EXIST BEFORE

    The advent of social technologies has enabled applications that could not have existed before. These applications create value for companies in new, or at least previously underserved, ways. While some of these applications could exist standalone, adding them to the Oracle application suite on top of Oracle's embedded social platform means customers benefit not only from integration with the enterprise data and processes in the system of record, but also from the other social tools available in the Oracle social platform. This combination creates value that was previously unavailable to companies.

    WORKFORCE REPUTATION MANAGEMENT

    Workforce Reputation Management (WRM) is a new, innovative application designed to help organizations establish, track, and monitor compliance with employee social media policies, while also giving Human Resources and Recruiting leaders additional insight into the social reputation and influence of employees and candidates.

    WRM monitors public external networks such as Twitter and Facebook as well as internal data sources, enabling compliance officers and administrators to ensure that organizational social media policies are being followed, while also providing insight into company, department, and individual reputation and influence.

    Additionally, WRM allows HR and Recruiting leaders to tap into the reputation, influence, and social network graphs of their employees, both to facilitate internal team and project building and to discover external talent that fits the organization's current and future needs. WRM takes the wide range of disparate data produced across external and internal platforms and transforms it into accessible, relevant, and actionable information.

    SOCIAL SOURCING

    Social Sourcing adds social talent sourcing capabilities to the Social HCM Cloud. It enables organizations to empower recruiters, hiring managers, and employees to leverage their social networks to distribute job opportunities, source higher quality referrals, market their employment brand, and manage corporate alumni relationships.

    The social networks leveraged include LinkedIn and Facebook. These extensive networks give organizations a reach that had previously been limited to the organization's own network as an entity, extending it to the networks of the individuals involved in the recruiting function and, better still, of the managers and employees closest to the role that needs to be filled.

    Social Sourcing extends the power of social networks to identifying, and maintaining relationships with, prospective candidates. Combined with the other social-enabled processes in the Social HCM Cloud, this rounds out the entire employee lifecycle from a social perspective.

    Next up: Part 3 shows how Oracle’s Social HCM Cloud solution delivers improved business outcomes for customers.


    Meet the Winners: Oracle Excellence Awards


    Meet this year's impressive Oracle Fusion Middleware innovators.

    These Oracle Excellence Award winners will be honored during Oracle OpenWorld, so be sure to attend, learn more about their award-winning achievements, and help give them the recognition they deserve.

    What: Oracle Excellence Awards Ceremony for Fusion Middleware Innovation
    When: Monday, September 23
    Time: 4:45 p.m.–5:45 p.m.
    Where: Lam Research Theater at YBCA


    NEW -- FMW Newsletter Sep 2013


    The latest FMW newsletter has been published, with information about:

    • FMW 12c
    • the latest IdM patches
    • FMW trainings
    • Oracle OpenWorld
    • links to IdM Twitter/LinkedIn/blogs
    • and more
    To see the latest issue, go to:
    • Oracle Fusion Middleware Support News : Current Edition - Volume 9 : September 2013 (Doc ID 1347075.1)
    For more information about the newsletter, see our blog post:

    Happy Trails!

    "Internet of Things""モノのインターネット"


    One of the notable new keywords at Oracle OpenWorld, coming up the week after next, looks likely to be the "Internet of Things".

    Intel's keynote session:

    Intel at Oracle OpenWorld

    "... explosion of connected mobile devices and the Internet of Things, each consuming services and generating massive amounts of varied data."

    Oracle's keynote session:

    Oracle OpenWorld 2013 Keynotes

    "The Internet of Things and the rise of a machine-to-machine (M2M) ecosystem"

    The Japanese rendering is "モノのインターネット" (the Internet of things), a term that has reportedly been in use since the early 2000s.

    News - Science &amp; Space - "A World Transformed by the 'Internet of Things'" - National Geographic official Japanese site (NatGeo)

    September 2, 2013

    Recently, the term "Internet of things" ("モノのインターネット") was added to the online edition of the Oxford English Dictionary.

    OOW 2013 Content: Access at Scale for Hundreds of Millions of Users


    Scalability has become a much more important requirement for IDM professionals as we expand to securely accommodate multiple personal networked devices with access to our corporate apps and data.

    Access at Scale for Hundreds of Millions of Users [CON8833] will take a look at this trend and review several business cases. In addition to the Oracle speakers, this session will feature Nirmal Rahi, Solution Architect, College Board; Brendan McGuire, Director, KPMG; and Chirag Andani, Sr. Director, Identity &amp; Access Management, PDIT, Oracle.

    Plan on attending this session on:

    Monday, Sep 23, 12:15 PM - 1:15 PM - @ Moscone West - 2018

    NEW -- FMW Newsletter Sep 2013


    The latest FMW newsletter has been published, with information about:

    • FMW 12c Release
    • WebCenter 11.1.1.8 Release
    • July Critical Patch Update (CPU)
    • WebCenter Portal Bundle Patch 11.1.1.7.1
    • FMW trainings
    • Oracle OpenWorld - Sep 22-26, 2013, San Francisco, CA
    • Links to Portals Twitter / Blogs
    • and more
    To see the latest issue, go to:
    • Oracle Fusion Middleware Support News : Current Edition - Volume 9 : September 2013 (Doc ID 1347075.1)

