Channel: Oracle Bloggers

The Data Brew - Week of March 7


The Data Brew - Your Source for Hot, Fresh Data News!

Welcome to The Data Brew! This weekly series highlights the latest in data-driven news and updates for marketers, all in one place.

Data Dark Roast

In Data-Driven Advertising, 'Reputation Miners' Can Wreck Your Budget
Omar Tawakol, SVP & GM Oracle Data Cloud, discusses reputation mining's impact on the data and advertising world in a new article. Tawakol explains that marketers who are lured in by reputation mining can end up buying substandard data that also squanders their advertising budgets. Learn more from Forbes.

Consumers Remember Stories, Not Products
“Do you remember anything at all when you watch ads -- or is the experience a hazy blur?” asks Cory Treffiletti, VP Marketing, Oracle Data Cloud. Treffiletti explains that consumers remember the stories surrounding a product, which may inspire its use. That storytelling element is what online advertising lacks. Learn more from MediaPost.

Social Espresso Shot

Mobile Dating Apps Spend More to Attract Women
New research shows that the average cost to get women to install, register for and subscribe to a mobile dating app is higher than the cost to acquire men using the app. In addition, mobile operating systems influence app costs and conversion rates. Read more from eMarketer.

Facebook sees big spike in small businesses paying to advertise
About a decade ago, small businesses began asking people to "like" them on Facebook as a free way for them to connect with customers. Today, small businesses on Facebook account for the vast majority of the company's paid advertisers. Read more from CBS News.


Stay up to date on all things data! Follow us on Twitter and like us on Facebook to stay in the loop.

Need data-related answers for your next marketing campaign? Contact thedatahotline@oracle.com today.


Digital Collaboration Webcast - March 10

Find new ways to collaborate more effectively among citizens, employees, partners, and other agencies

Today’s digital workplace requires going beyond simple file sharing in the Cloud to delivering the next wave of productivity, efficiency, and workgroup innovation. Agencies and organizations need services that blend content, people, process and communications--enabling better and faster decisions while accelerating how work gets done. Unlike first-generation content-only Cloud vendors, Oracle provides an integrated productivity suite of Cloud services that helps businesses communicate more effectively by automating business processes involving content.

Adapting existing systems to meet today’s needs presents many challenges, including:

  • Supporting multi-channel requirements
  • Simplifying communications to include content-rich business processes that span multiple applications
  • Enabling mobile applications for field workers who need access to content in context with applications
Oracle’s Digital Collaboration solution addresses these challenges by providing:

 

  • Convenient file sharing and collaboration, anywhere, anytime, via any device
  • Simplified process automation – business friendly composition, configurable rules, auto-generated forms, process health and SLA monitoring
  • Actionable alerts and security controls
  • Integrations with SaaS and On-Premise applications
  • Mobile Web, interactive content, and rich Websites
During the webinar you will learn how Oracle and TekStream help you create a unique digital collaboration environment for your organization. You will see how to use a micro-site to upload documents, route them for approval, and provide an online collaboration experience for your users.

Please join Oracle and TekStream on March 10th to understand how you can take advantage of a transformative, Cloud-based, digital experience for your organization.

We look forward to seeing you!



Register Now
Live Webcast

Mar 10, 2016
10:00 am PST |
1:00 pm EST
Integrated Cloud Applications and Platform Services
Copyright © 2015, Oracle Corporation and/or its affiliates.
All rights reserved.
Contact Us | Legal Notices and Terms of Use | Privacy Statement

Avaya Transforms with Oracle CX Cloud

New Social Customer Success videos

Oracle Social Cloud integrates with Unmetric


We are thrilled to announce an integration with Unmetric, a social media intelligence platform focused on brands. Read Mike Marzano's (Unmetric) full interview about the Unmetric integration with Oracle Social Cloud here!

Extracts from the interview:

Maggie Schneider Huston, Oracle: Would you please describe what Unmetric does? 

Mike Marzano, Unmetric: We describe Unmetric as the only social media intelligence platform focused on brands. Our products help digital marketers, social media analysts and content creators harness social signals to track and analyze competitive content and campaigns, and to create better content and campaigns of their own. Hundreds of global brands and digital agencies use Unmetric for real-time insights from the owned channels of over 40,000 brands across more than 30 sectors on all the major social networks including Twitter, Facebook, Pinterest, YouTube, LinkedIn, and Instagram.

MSH: Can you give me an example of what the integration between SRM and Unmetric will look like in real life? 

MM: The integration was driven by a key Automobile Manufacturer who partnered with both Oracle and Unmetric as they managed their social efforts across the globe. As mature social marketers, they recognized the value of having a complete array of paid, owned and earned insights at their fingertips. With the integration of Unmetric's competitive owned channel insights into Oracle SRM, they can now completely evaluate their strategy's effectiveness in each of their global markets and course correct in real-time.

TBC….

New Commerce Cloud Customer Testimonial Video: Live Comfortably


Learn more about how Live Comfortably uses Oracle Commerce Cloud to implement a direct-to-consumer strategy and create an immersive online experience for their customers.

The vision of the Live Comfortably brand is to bring comfort to the world. They believe everyone deserves to Live Comfortably, and the spirit in the name creates an emotional connection with customers. Live Comfortably launched in 2011 with a luxury bed collection, and now brings its luxurious products direct to consumers on its gorgeous website.

The Live Comfortably site, built on Oracle Commerce Cloud, is highlighted in this new video.

Watch and share this video with prospects and customers, and share the Live Comfortably website as an example of what your customers can do with Oracle Commerce Cloud to grow their business and provide a world-class customer experience.

Commerce: Growing the Beard in 2016!


In 2015 Commerce investment increased by 145%...Brenna Johnson describes why 2016 Commerce investment will be even bigger:

While ecommerce is beyond the chasm, there’s still a lot that’s about to happen. Apparently, the opposite of “Jumping the Shark” is “Growing the Beard”– and this is going to be an explosive Riker’s-Beard-kind-of-year for commerce. It was good before, but things are about to get better.

2015 saw a 145% increase in investor money pumped into ecommerce. I’ll leave the stats and projections to the pros, but my senses tell me 2016 is bigger than your average growth year for commerce. Here’s why…

EMEA Partner Webinar: Oracle Customer Experience for Financial Services, Monday April 18, 2016, 4pm CET


Oracle’s Financial Services Industry Cloud solution is gaining momentum in the marketplace. 

Join us on Monday, April 18th at 4pm CET for an exclusive Oracle EMEA Partner webinar to learn more about Oracle's Financial Services offering and how Oracle is engaging with customers to define their path to digital transformation.

This presentation provides an overview of Oracle’s CX Cloud solution for the Financial Services industry.  Agenda:

  • CX Cloud Industry Overview
  • Introduction to Oracle’s Financial Services Offering (including demo)
  • Customer Momentum and Lighthouse Program
  • Q&A

This webinar targets System Integrators and ISVs looking for innovative ideas to help their Financial Services customers make a successful transition into the era of Digital Customer Experience.

Speakers:

  • Leslie Buhrer - Senior Director, Oracle Product Management, CX Industry Financial Services
  • David Lopes – Director, Oracle EMEA Digital Experience
  • Daryn Mason – Senior Director, Oracle EMEA Digital Experience
  • Richard Lefebvre – Director, Oracle Alliances & Channels

Register now!


Register for Partner Webcast on Oracle Revenue Management Cloud Service!


Join Seamus Moran, Senior Director of Financials Product Strategy, for the Oracle Revenue Management Cloud Service webcast. This product is especially important for those partners who have a strong Financial Cloud practice and want to sell, demo, or implement the Revenue Management Cloud Service product. It is important to set expectations about the functionality effectively and communicate the benefits of the product to customers.

Click here for the web registration and access!

Two Worlds Colliding


By: Krista Lambert, Director, Engineering & Construction Strategy, Oracle Primavera

Bringing together the best of both worlds

Site foremen are formidable people. You don’t want to feel the force of their frustration. When someone else’s mistake plays havoc with their plans or makes them miss deadlines, it can create unbearable situations.

But you can avoid frustrating your foremen with short interval planning. It’s a technique used in Lean Construction, designed to flush inefficiency out of the system. The technique relies on frequent and open collaboration on the job. The idea is that short-term plans are created daily to adapt to changing circumstances, ensuring employees are not left scratching their heads with nothing to do.

But it has its shortcomings.

It could miss important dependencies in the project which, if ignored, could delay completion. The critical path method, developed since the 1940s, seeks out these dependencies, showing project managers where to focus their energies if they want to avoid being late.

And yet the critical path method is often set against short interval planning as if project managers and planners must pick one method or the other to succeed. But this doesn’t have to be the case.

The tools and technologies to bring the two approaches together - and get the best from both - are available today. The reality is that an open approach can help you stay in touch with the project, whatever is thrown at you at any stage, giving you greater control.

To discover more, read our latest business brief.

The 10 Most Recently Created Notes for JDeveloper/ADF/MAF as of 3 March 2016



  • 2112905.1 - In IE9 Unable To Select More Than One File Through the Browse Button Of Af:InputFile Component
  • 2112401.1 - A font-family is ignored in ADF Skin if the character is multi-bytes
  • 2112169.1 - JDeveloper 12c: How to Export data rows to Excel file in 2013 format?
  • 2110526.1 - SEVERE: Cannot Load Extension Oracle.maf
  • 2110404.1 - Error: couldn't find "libocldvk.so" When MAF Is Integrated with Cordova Spen Plugin
  • 2110091.1 - How to create Fusion ADF Web Application that connects to Microsoft SQL Server in JDeveloper 11.1.2.4 and later
  • 2108053.1 - weblogic.common.ResourceException: No credential mapper entry found for password indirection user=hr for data source hrconn
  • 2105240.1 - How To Determine Whether Installed JDeveloper 12.1.2.0.0 Is 32-bit or 64-bit
  • 2105167.1 - How To Remove "Show All" Option In Trinidad Table With Pagination
  • 2104836.1 - No "Unwrap Wrapped Parameters" Option In JDeveloper 12.2.1.0.0 When Generating WS Client


Oracle NoSQL Database Bulk Put Results

Mobile & IoT Webcasts


While people normally worry more about device architecture and interaction in Mobile & IoT implementations, it might be the right time to start taking a closer look at the back-end platforms. Mobile and IoT platforms have come a long way in terms of features and customer adoption. The idea is to help you scale your projects, better integrate and analyze, and address security concerns. Please join the webcasts below to hear Oracle's story around Mobile & IoT platforms:

http://web2k.us.oracle.com/pls/web2k/isd.enewsletter.show_detail?p_id=5667
http://web2k.us.oracle.com/pls/web2k/isd.enewsletter.show_detail?p_id=5668

For a select few attendees, we will also offer onsite workshops to dig deeper into your use cases and help bring your ideas to fruition. We look forward to interacting with you on these game-changing initiatives.


Setting up JDeveloper to Develop and Test Embedded Framework Regions


Overview

The Oracle Workflow Notification system provides the ability to use OA Framework regions to show content within a notification body in the Worklist. This provides the following benefits:

  1. Reuse of application code to display content inside workflow notifications
  2. A consistent look and feel between notification content and OA Framework pages

Refer to Oracle Workflow Developer Guide for more information.

A challenge for application developers has been setting up JDeveloper to develop, embed, and test their OA Framework regions (using the OA Framework JDeveloper Extension) within workflow notifications before deploying them to an EBS environment. Most developers complete development of a standalone OA Framework region, migrate it to EBS, embed it into workflow notifications, and test it from the actual workflow notification pages. Troubleshooting issues with such embedded regions has been extremely difficult.

This blog post provides instructions to develop, embed, test and troubleshoot embedded framework regions within JDeveloper. 

Using OA Extension

Refer to chapter "Setting Up Your Development Environment" in OA Framework Developer's Guide for details on setting up OA Framework JDeveloper project against an E-Business Suite env.

Instructions 

Use the instructions below to set up the OA Framework JDeveloper environment with the required Worklist run-time files.

  1. From the corresponding Apps env's web tier, copy following directories
    1. $JAVA_TOP/oracle/apps/fnd/wf/directory to $JDEV_USER_HOME/myglobalclasses/oracle/apps/fnd/wf/directory
    2. $JAVA_TOP/oracle/apps/fnd/wf/utilities to $JDEV_USER_HOME/myglobalclasses/oracle/apps/fnd/wf/utilities
    3. $JAVA_TOP/oracle/apps/fnd/wf/worklist to $JDEV_USER_HOME/myglobalclasses/oracle/apps/fnd/wf/worklist
    4. PLEASE NOTE:
      1. These are all the seeded Oracle Workflow's run-time Java class files that are executed by worklist pages when rendering
      2. All classes under myglobalclasses directory take precedence over the default libraries loaded by JDeveloper
  2. From the corresponding Apps env's web tier, copy directory
    1. $FND_TOP/mds/wf/worklist/webui to $JDEV_USER_HOME/myprojects/oracle/apps/fnd/wf/worklist/webui
    2. PLEASE NOTE:
      1. These are all the seeded Oracle Workflow's run-time OAF page definitions used to render worklist pages
      2. All JRAD XML files have to be copied under myprojects. These are automatically pulled by JDeveloper into the project when you refresh it.
  3. Update config file application.xml under $JDEV_USER_HOME/system/oracle.j2ee./embedded-oc4j/config to remove the line containing wf.zip. This zip file is very old and contains Workflow UI files that may not be compatible with the target EBS env you are working against.
  4. For 11i, the application.xml is under $JDEV_USER_HOME/system9.0.3.x.x/oc4j-config/
  5. Setup the compile and run directories to use myglobalclasses folder instead of the default myclasses folder
    1. Go to Project Properties -> Project Content -> Output Directory. Change myclasses to myglobalclasses
    2. Go to Project Properties -> Run/Debug -> Select Default -> Edit -> Run Directory. Change myclasses to myglobalclasses
  6. To verify that the worklist is setup correctly to run from JDeveloper, you can now run AdvancWorklistPG.xml from the project. This should display the Advanced Worklist page.
  7. In the same JDeveloper, setup your application specific OAF regions that you want to embed into workflow notifications.
  8. Complete required setup to embed the region into Workflow Notifications as per Workflow Developer Guide.
  9. Now run AdvancWorklistPG.xml again and test your notification.

Conclusion

This JDeveloper setup saves developers a great deal of time, since all testing is done within the JDeveloper environment itself, without round trips to the EBS environment.

Don't Be Afraid of the ZFS Filesystem - Part 1


Important: All posts on this blog reflect my own opinions, views, and technical experiences. Development, availability of new features, or any other product characteristics are the sole and exclusive decision of Oracle(r).

The ZFS filesystem is one of the most important innovations we introduced in Oracle Solaris (back in version 10) - it feels like yesterday, but more than 11 years have passed since its release.

Yet even after all this time, I still see people with doubts about ZFS, its architecture, and its management - especially considering that day-to-day operations, as well as crisis recovery methods, are well known to sysadmins on more conservative filesystems such as UFS (the old Oracle Solaris default before ZFS).

The big question is: "why migrate from another filesystem to ZFS?". The answer is simpler in the Oracle Solaris world - because a UFS root file system is not supported by Oracle as of version 11. But beyond official support, there are many advantages: performance, resilience, high availability, ease of management, failure detection, and a feature exclusive to ZFS in the filesystem world: self-healing (explained further on).

With this question in focus, I am starting a series of posts about the internal characteristics that make ZFS the most advanced file system on the market, and how those characteristics can help in the day-to-day management of critical and complex environments.

Overview

ZFS can be divided into two main parts: the ZFS pool and the ZFS dataset.

The foundation of ZFS is the pool - it is the "groundwork" that supports the datasets. When a dataset needs more space, ZFS allocates it from the blocks available in the pool where the dataset resides.

ZFS pool (zpool) space management can be compared to the virtual memory system - when more memory is added to a system, Solaris does not require that memory be configured and assigned to processes. The same goes for the zpool. Datasets are created on top of the zpool and use no intermediate layer to virtualize volumes. This means you can easily add space with a few ZFS commands.

Furthermore, all datasets share the space of the same zpool. There is no need to divide space among datasets: they grow automatically within the space available in their zpool. When a new disk is added to the zpool, every dataset in that zpool can use the extra space, and when a file is deleted its space returns to the zpool.

There are four possible dataset types in the ZFS architecture:

  • ZFS file systems: a file system mounted for general use
  • Volumes: raw devices that can be used as swap, database dump devices, etc.
  • Clones: a copy of a file system or volume
  • Snapshots: a copy of a file system or volume, like a clone, but read-only

Comparing the traditional file system/volume model against ZFS (based on zpools - known as pooled storage):

Traditional:

  • Each file system resides in its own partition/volume
  • Growing or shrinking a filesystem is a manual activity
  • Bandwidth is limited per file system
  • Storage is fragmented

ZFS Pooled Storage:

  • No partitions to manage
  • File system growth/shrinking is automatic
  • All bandwidth is always available
  • All storage in the pool is shared


Another big advantage, which simplifies ZFS management: it has only two administration commands. The zpool command creates, modifies, and destroys zpools, and the zfs command creates, modifies, and destroys ZFS datasets.

Creating a zpool is quite simple: just specify the pool name, the layout of the disks involved (mirror, raidz, striped, etc.) and, optionally, log devices, cache devices, and spare devices.

The example below creates a zpool named ORACLE, made up of two mirrors of two disks each:

# zpool create ORACLE mirror c0t1d0 c1t1d0 mirror c0t2d0 c1t2d0

To expand the pool, just add new devices:

# zpool add ORACLE mirror c0t3d0 c1t3d0

The current state of the zpool, including its logical and physical objects, can be seen in the output of the status sub-command:

# zpool status ORACLE
  pool: ORACLE
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        ORACLE      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
#

Creating a dataset is as simple as creating a directory in an Oracle Solaris environment. In the example below, we create the filesystem /ORACLE/data01 as part of the ORACLE zpool:

# zfs create ORACLE/data01
# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
ORACLE                92.0K  87.0G   9.5K  /ORACLE
ORACLE/data01      24.0K  87.0G   8K  /ORACLE/data01

A complete lab using Oracle Solaris 11 for x86 and VirtualBox is available at this link.

A closer look



The figure above shows how the internal structure of ZFS is organized.

The main code components live in the Solaris kernel area (kernel land), divided into three layers:

  • Pooled Storage
  • Transactional Object
  • Interface

And in the user area (user land) we find the following components:

  • Consumers (file system and devices)
  • Applications (interacting with ZFS datasets)
  • Commands (zfs and zpool)
  • libzfs (the library that integrates other components with ZFS)

So, to understand how the orchestra plays, let's meet the instruments individually - that is, let's analyze the structure from the bottom up.

Virtual Devices e Physical Devices


The pool is a collection of virtual devices, called vdevs, organized as a tree (the vdev tree). A vdev can be a file (even a sparse file), a disk slice, or a local disk (or a LUN from external storage), in which case the disk's write cache can be enabled automatically.

Access to a vdev (when it is a disk) goes through the LDI (layered driver interface). The LDI, a set of device driver interfaces (ddi) and device kernel interfaces (dki), lets a kernel module access other devices in the system and helps determine which devices are in use by kernel modules.

A vdev describes a single device, or a collection of devices organized according to certain performance and fault-tolerance characteristics. All zpool operations pass through the vdev framework, which provides functions such as:

  • Data replication
  • I/O scheduling
  • Caching

The configurations supported by ZFS are striped (no data availability protection), mirror (mirroring, similar to RAID-1), RAID-Z (similar to RAID-5), and RAID-Z2 with double parity (similar to RAID-6).

It is worth remembering that traditional parity-based algorithms (as implemented in RAID-4, RAID-5, RAID-6, RDP, and EVEN-ODD, for example) suffer from a problem known as the "write hole". It works like this: if only part of a RAID-5 stripe (for example) is written, and power is lost before all its blocks reach the disks, the parity is left out of sync with the data (that is, useless forever) unless a later write overwrites the entire stripe.

In RAID-Z, ZFS uses variable-width stripes, so every write is a full-stripe write. This design is only possible because ZFS integrates file system and device management in such a way that the file system metadata has enough information about the data redundancy model to handle variable-width stripes. RAID-Z is the first software-only solution that addresses and solves the write hole.
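The write hole can be seen in a few lines of code. Below is a toy sketch in Python (not ZFS code; the helper and block sizes are invented for illustration) of XOR parity going stale after a partial RAID-5-style update, versus a RAID-Z-style full-stripe write where data and parity always change together:

```python
# A toy illustration (not ZFS code) of the RAID-5 "write hole" versus
# RAID-Z-style full-stripe writes, using XOR parity over byte blocks.

def xor_parity(blocks):
    """Parity block = XOR of all data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

# A 3-disk stripe: two data blocks plus one parity block.
stripe = [b"\x01\x02", b"\x03\x04"]
parity = xor_parity(stripe)

# RAID-5 partial update: data block 0 is rewritten, but power is lost
# before the parity block is updated -> parity no longer matches.
stripe[0] = b"\xff\xff"
assert xor_parity(stripe) != parity      # the write hole: stale parity

# RAID-Z-style full-stripe write: new data and new parity are written
# together as one full stripe, so they can never disagree.
stripe = [b"\xff\xff", b"\x03\x04"]
parity = xor_parity(stripe)
assert xor_parity(stripe) == parity      # parity stays consistent
```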

The root of the zpool structure is the root vdev, and the vdevs directly connected to it (its children) are called top-level vdevs. On writes, data is distributed across the top-level vdevs, and when new top-level devices are added they automatically join the zpool structure.

There are different types of vdevs:

  • raidz, raidz2, raidz3 (logical)
  • root, mirror, replacing, spare (logical)
  • disk (physical)
  • file (physical)
  • hole, missing (top-level)

A raidz group can have single, double, or triple parity, which means it can guarantee data integrity even after losing 1 (single parity), 2 (double parity), or 3 (triple parity) of the disks in the same zpool.

  • raidz or raidz1 - single parity
  • raidz2 - double parity
  • raidz3 - triple parity

The vdev layout

ZFS reserves 1 megabyte on each device for the vdev label. There are four copies (256 kilobytes each) of this label, distributed at the beginning and end of each disk to improve the odds of recovery after a failure. Each label contains an area reserved to protect the VTOC, another for the boot block header, and two remaining areas: the "vdev configuration" and the "uberblock array".

The "vdev configuration" holds, among other things, the GUID (global unique identifier) of the root vdev, the number of top-level vdevs, and details of the vdev tree, such as type (file, disk, mirror, raidz, spare, replacing), the path to the physical device, and flags (degraded, removed, etc.).

The uberblock is the portion of the label that contains the information needed to access the pool's contents, and only one uberblock is active at a time. To decide which one is active, ZFS picks the uberblock with the highest transaction group number and a valid checksum.

To guarantee constant access to the active uberblock, it is never overwritten. All uberblock updates are made by modifying another element of the uberblock array (each uberblock in the array is 1 kilobyte in size). After the new uberblock is written, its transaction group number and timestamps are incremented, "atomically" making it the new active uberblock. Uberblocks are written using a round-robin algorithm across the vdevs in the pool.
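The update rule can be sketched as follows (toy Python, not ZFS code; the slot count, checksum function, and payloads are invented for illustration):

```python
# A toy sketch of the uberblock array update rule: updates never
# overwrite the active slot; they go round-robin into another slot, and
# the active uberblock is the one with the highest transaction group
# whose checksum is still valid.
import zlib

SLOTS = 128  # toy array: 128 one-kilobyte uberblock slots

def checksum(txg, payload):
    return zlib.crc32(f"{txg}:{payload}".encode())

# Each slot: (txg, payload, checksum); start with an empty array.
array = [(0, "", checksum(0, ""))] * SLOTS

def write_uberblock(array, txg, payload):
    """Write the new uberblock into the next round-robin slot."""
    slot = txg % SLOTS
    array[slot] = (txg, payload, checksum(txg, payload))

def active_uberblock(array):
    """Active = highest txg among slots with a valid checksum."""
    valid = [(txg, p) for (txg, p, c) in array if c == checksum(txg, p)]
    return max(valid)  # tuples sort by txg first

for txg in range(1, 5):
    write_uberblock(array, txg, f"pool state at txg {txg}")

print(active_uberblock(array))  # (4, 'pool state at txg 4')
```

A torn write corrupts at most one non-active slot: its checksum no longer validates, so selection simply falls back to the previous highest valid transaction group.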

ZIO

Broadly speaking, ZIO provides the framework for every I/O transaction in ZFS (synchronous and asynchronous). It translates DVAs (Data Virtual Addresses) into logical positions on the vdevs, handles checksum generation and verification, data compression and encryption, block deduplication (data deduplication), and the retrying of I/O operations that may have failed (I/O retry).

To provide end-to-end integrity checking, checksums are enabled by default for all blocks; they can be disabled only for data, never for metadata (ZFS metadata).

Data compression can be enabled with the zfs command, which sets a special flag in the block pointer; three algorithms are currently available: lzjb, gzip (all levels), and ZLE (zero length encoding). Metadata is compressed automatically.

Data Deduplication (dedup)

Data deduplication is the process of eliminating duplicate copies of data. Dedup can happen at several levels, such as file, block, or byte.

The units of data (files, blocks, or runs of bytes) are passed through a hash function to generate a checksum, which creates a unique identification of that data (the probability that the hash is unique is very high). If a secure hash such as SHA256 is used, the probability of an error (hash collision) is about 2^-256, or 10^-77 - in more familiar notation, 0.00000000000000000000000000000000000000000000000000000000000000000000000000001. For reference, that is 50 times less likely than an undetected, uncorrectable ECC error in the most reliable hardware available today.

Data blocks are "remembered" in a table that cross-references the data's checksum with its location on disk and a reference count (ref count, as with hard links). When you store a new copy of existing data, instead of allocating new disk space, the dedup code simply increments the ref count on the existing data.

In ZFS, dedup happens at the block level, as this is the smallest granularity that makes sense for a general-purpose storage system. Since ZFS block checksums are 256 bits, dedup provides unique signatures for every block in a zpool, as long as the checksum function is cryptographically strong, e.g. SHA256.
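The checksum-to-location table with reference counts can be sketched in a few lines (toy Python, not ZFS code; the class and its layout are invented for illustration):

```python
# A toy sketch of block-level dedup: a table maps each block's hash to
# (location, refcount), and storing a duplicate block just bumps the
# refcount instead of allocating new space.
import hashlib

class DedupStore:
    def __init__(self):
        self.table = {}    # sha256 digest -> [location, refcount]
        self.disk = []     # simulated allocated blocks

    def write(self, block: bytes) -> int:
        key = hashlib.sha256(block).digest()
        if key in self.table:                 # duplicate: no new space
            self.table[key][1] += 1
        else:                                 # new data: allocate a block
            self.disk.append(block)
            self.table[key] = [len(self.disk) - 1, 1]
        return self.table[key][0]             # on-"disk" location

store = DedupStore()
store.write(b"hello")
store.write(b"world")
store.write(b"hello")          # duplicate of the first block
print(len(store.disk))         # 2 blocks allocated for 3 writes
```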

ARC

ARC stands for "Adaptive Replacement Cache", and the ZFS implementation was inspired by the work of Nimrod Megiddo and Dharmendra Modha, "ARC: A Self-Tuning, Low Overhead Replacement Cache", presented at FAST 2003 (the USENIX Conference on File and Storage Technologies). The ARC caches SPA buffers; in other words, ZFS uses the ARC to cache data blocks. It uses a self-tuning algorithm that combines block-access metrics known as MRU (Most Recently Used), MFU (Most Frequently Used), and LRU (Least Recently Used).

The ARC achieves a high cache hit rate by using multiple caching algorithms at the same time: MRU and MFU. Main memory is balanced between these algorithms based on their performance, which justifies keeping extra metadata (in main memory) to see how each algorithm would perform if it ruled all of memory.

  • The ARC is used by the DMU layer to cache data buffers
    - ZFS uses the ARC instead of the system page cache
  • A system-wide hash table is maintained for all cached buffers
  • The DVA is used as the hash key
  • Buffers come from kmem caches (kernel memory caches) created by ZFS
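A drastically simplified sketch can make the MRU/MFU split concrete. The Python below is a toy model, not the real ARC (which also keeps ghost lists and adaptively resizes the two sides): a block is promoted to the MFU list on its second access, and each list evicts its own oldest entry when full.

```python
# Toy MRU/MFU cache (not the real ARC): blocks seen once live in the
# MRU list, blocks seen again are promoted to the MFU list, and each
# list evicts its own oldest entry when it exceeds its capacity.
from collections import OrderedDict

class TinyARC:
    def __init__(self, size):
        self.size = size                 # per-list capacity
        self.mru = OrderedDict()         # seen exactly once (recency)
        self.mfu = OrderedDict()         # seen more than once (frequency)

    def access(self, key, value=None):
        if key in self.mfu:              # frequent hit: refresh recency
            self.mfu.move_to_end(key)
            return self.mfu[key]
        if key in self.mru:              # second hit: promote MRU -> MFU
            self.mfu[key] = self.mru.pop(key)
            if len(self.mfu) > self.size:
                self.mfu.popitem(last=False)   # evict oldest frequent
            return self.mfu[key]
        self.mru[key] = value            # miss: cache as recently used
        if len(self.mru) > self.size:
            self.mru.popitem(last=False)       # evict oldest recent
        return value

cache = TinyARC(size=2)
cache.access("dva1", "block1")
cache.access("dva1")                     # promoted to the MFU list
print(list(cache.mfu))                   # ['dva1']
```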

L2ARC

Para melhorar o desempenho foi adicionado um segundo nível de cache (Level 2 ARC - L2ARC). A memória disponível no sistema é finita, portanto o ARC possui um mecanismo para liberar blocos para novas entradas. É neste ponto que entra o L2ARC - ele é um cache adicional entre o ARC e o disco, criado para impulsionar a performance das leituras aleatórias (random reads). Para isso são usados dispositivos com latência de leitura menor do que discos convencionais (solid state drives - SSD, por exemplo), caso contrário o resultado é o mesmo de ter apenas o ARC.

Its operation is relatively simple: a ZFS thread walks the list of blocks about to be evicted from the MFU/MRU lists and copies those blocks to the L2ARC devices (if they are not already present). There is no cascade link between the two caches, so there is no guarantee that blocks evicted from the ARC will be in the L2ARC.
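A hypothetical sketch of that feed pass (the `l2arc_feed` name and the per-pass write budget are illustrative, not the actual ZFS implementation):

```python
def l2arc_feed(eviction_candidates, l2cache, budget):
    """Copy up to `budget` soon-to-be-evicted blocks into the L2 cache.

    eviction_candidates: iterable of (dva, data) taken from the tails of
    the MRU/MFU lists; l2cache: dict standing in for the SSD device.
    """
    written = 0
    for dva, data in eviction_candidates:
        if written >= budget:
            break                     # per-pass write limit protects the SSD
        if dva not in l2cache:        # skip blocks already cached on L2
            l2cache[dva] = data
            written += 1
    return written

l2 = {}
n = l2arc_feed([(1, b"a"), (2, b"b"), (1, b"a")], l2, budget=8)
assert n == 2 and set(l2) == {1, 2}
```

Because the feed only samples the eviction tails within a write budget, some evicted blocks never make it to L2, which is exactly the "no guarantee" behavior described above.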

SPA


The SPA provides the interfaces to create, destroy, import, export, and modify storage pools.

Zpools are allocated as kernel structures called spa_t and stored in a global AVL tree (a self-balancing binary search tree). The structure keeps, in separate areas, the zpool configuration and information about spares and cache devices (L2ARC).

The zpool history is a ring buffer sized at 1% of the pool size (minimum 128 KB, maximum 32 MB), implemented and maintained to record the actions of the zpool and zfs commands, as well as internal ZFS events. Although it is a ring buffer, the record of the zpool's creation is never overwritten.
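The sizing rule is a simple clamp (a sketch; `history_size` is an illustrative name, not a ZFS function):

```python
def history_size(pool_bytes):
    """History log is ~1% of the pool size, clamped to [128 KB, 32 MB]."""
    return min(max(pool_bytes // 100, 128 * 1024), 32 * 1024 * 1024)

assert history_size(1 << 20) == 128 * 1024          # tiny pool: floor applies
assert history_size(100 << 30) == 32 * 1024 * 1024  # huge pool: ceiling applies
```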

Metaslab/Spacemap


Every file system needs to keep track of two basic things: where the data is, and where the free space is.

In principle, for a structure that manages disks, file systems, and data-storing devices, tracking free space is not strictly necessary. Each block can only be in one of two states, allocated or free, so free space can be computed by assuming everything is free and then subtracting everything that is allocated. Moreover, the space in use can be found by a traversal of the directory tree: any block not reachable from the zpool's root directory is, by definition, free (simple, right? Not quite).

In practice, finding space this way would be unbearable, because the traversal can take far too long on any file system of non-trivial size. To allocate and free blocks quickly, the file system needs an efficient way to keep track of free space.

ZFS uses space maps to track free space. It divides the space on each virtual device into regions called metaslabs. Each metaslab has an associated space map describing that metaslab's free space. The space map is simply a log of block allocations and frees, in chronological order.

When ZFS decides to allocate blocks from a particular metaslab, it first reads that metaslab's space map from disk and then replays the allocations and frees into an in-memory AVL tree.
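Replaying such a log can be sketched as follows (a simplification: a plain dict of allocated ranges stands in for the in-memory AVL tree, and the record format is illustrative):

```python
def replay_spacemap(log):
    """Replay a chronological log of (op, offset, size) records and return
    the resulting picture of allocated ranges."""
    allocated = {}
    for op, offset, size in log:
        if op == "alloc":
            allocated[offset] = size
        else:                          # "free" cancels a prior allocation
            allocated.pop(offset, None)
    return allocated

log = [("alloc", 0, 512), ("alloc", 512, 512), ("free", 0, 512)]
assert replay_spacemap(log) == {512: 512}   # only the second range survives
```

The appeal of the log form is that writing an allocation or free is a cheap append; the cost of reconstructing the full picture is paid only when the metaslab is loaded.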

...to be continued

    Author of this post:

    Diogo Padovani
    Principal Systems Engineer
    Oracle Systems, Revenue Product Engineering (RPE)


    Results in from 2015 Select from SQL Championship


    You will find below the rankings for the Annual SQL Championship for 2015. The number next to the player's name is the number of times that player has participated in a championship. Below the table of results for this championship, you will find another list showing the championship history of each of these players. 

    Congratulations first and foremost to our top-ranked players:

    1st Place: pavelz of Czech Republic

    2nd Place: SteliosVlasopoulos of Belgium

    3rd Place: JustinCave of United States

    Next, congratulations to everyone who played in the championship. We hope you found it entertaining, challenging and educational. And for those who were not able to participate in the championship, you can take the quizzes through the Practice feature. We will also make the championship as a whole available as a Test, so you can take it just like these players did.

    Finally, many thanks to Kim Berg Hansen, the SQL Quizmaster who provided a very challenging set of quizzes, and our deepest gratitude to our reviewers, especially Elic, who has once again performed an invaluable service to our community.

    Steven Feuerstein

    Rank | Name | Country | Total Time | % Correct | Total Score
    1 | pavelz (2) | Czech Republic | 32 mins 04 secs | 85% | 5872
    2 | SteliosVlasopoulos (3) | Belgium | 44 mins 02 secs | 83% | 5674
    3 | JustinCave (3) | United States | 36 mins 40 secs | 81% | 5553
    4 | Maxim Borunov (2) | Russia | 38 mins 14 secs | 81% | 5547
    5 | mentzel.iudith (3) | Israel | 42 mins 29 secs | 79% | 5380
    6 | Christoph Hillinger (3) | Austria | 44 mins 28 secs | 79% | 5372
    7 | NickL (1) | United Kingdom | 34 mins 46 secs | 77% | 5261
    8 | seanm95 (3) | United States | 27 mins 17 secs | 74% | 5141
    9 | krzysioh (3) | Poland | 31 mins 14 secs | 74% | 5125
    10 | JeroenR (1) | Netherlands | 28 mins 55 secs | 72% | 4984
    11 | Pavel_Noga (2) | Czech Republic | 44 mins 50 secs | 70% | 4771
    12 | Andrei Puzanau (1) | Belarus | 24 mins 49 secs | 68% | 4701
    13 | Chase (3) | Canada | 30 mins 00 secs | 68% | 4680
    14 | NielsHecker (3) | Germany | 43 mins 37 secs | 68% | 4626
    15 | Henry_A (2) | Czech Republic | 13 mins 25 secs | 66% | 4596
    16 | Sachi (3) | India | 24 mins 22 secs | 66% | 4553
    17 | Eric Levin (3) | United States | 32 mins 20 secs | 66% | 4521
    18 | Sandra99 (2) | Italy | 39 mins 53 secs | 66% | 4490
    19 | Chad Lee (3) | United States | 29 mins 50 secs | 64% | 4381
    20 | Sartograph (1) | Germany | 41 mins 47 secs | 64% | 4333
    21 | Marek Sobierajski (1) | Poland | 23 mins 13 secs | 62% | 4257
    22 | berkeso (1) | Hungary | 23 mins 38 secs | 62% | 4255
    23 | alexs2011 (1) | Romania | 33 mins 24 secs | 62% | 4216
    24 | Mehrab (2) | United Kingdom | 44 mins 39 secs | 62% | 4171
    25 | katuinbouter (1) | Netherlands | 24 mins 45 secs | 60% | 4101
    26 | Kuvardin Evgeniy (3) | Russia | 30 mins 51 secs | 60% | 4077
    27 | craig.mcfarlane (1) | Norway | 33 mins 06 secs | 60% | 4068
    28 | PZOL (1) | Hungary | 37 mins 12 secs | 60% | 4051
    29 | richdellheim (1) | United States | 44 mins 48 secs | 60% | 4021
    30 | AnnaO (3) | Ireland | 11 mins 48 secs | 57% | 4003
    31 | AndreyBerliner (2) | Ukraine | 29 mins 01 secs | 57% | 3934
    32 | Rytis Budreika (3) | Lithuania | 08 mins 06 secs | 55% | 3868
    33 | Michal P. (1) | Poland | 30 mins 15 secs | 55% | 3779
    34 | tonyC (2) | United Kingdom | 41 mins 44 secs | 55% | 3733
    35 | Talebian (1) | Netherlands | 15 mins 51 secs | 53% | 3687
    36 | TZ (1) | Lithuania | 10 mins 23 secs | 49% | 3408
    37 | Karel_Prech (1) | Czech Republic | 30 mins 26 secs | 49% | 3328
    38 | Enrico Rebecchi (1) | Italy | 44 mins 08 secs | 45% | 2973
    39 | VictorD (2) | Russia | 28 mins 40 secs | 40% | 2735

    Championship Performance History

    After each name, the year in which he or she played, and the ranking in that championship.

    Name | History
    pavelz | 2014:2nd, 2015:1st
    SteliosVlasopoulos | 2013:27th, 2014:7th, 2015:2nd
    JustinCave | 2013:17th, 2014:12th, 2015:3rd
    Maxim Borunov | 2015:4th
    mentzel.iudith | 2013:4th, 2014:20th, 2015:5th
    Christoph Hillinger | 2013:2nd, 2014:5th, 2015:6th
    NickL | 2015:7th
    seanm95 | 2013:30th, 2014:15th, 2015:8th
    krzysioh | 2013:25th, 2014:38th, 2015:9th
    JeroenR | 2015:10th
    Pavel_Noga | 2014:30th, 2015:11th
    Andrei Puzanau | 2015:12th
    Chase | 2013:11th, 2014:32nd, 2015:13th
    NielsHecker | 2013:7th, 2014:4th, 2015:14th
    Henry_A | 2015:15th
    Sachi | 2013:9th, 2014:40th, 2015:16th
    Eric Levin | 2013:19th, 2014:21st, 2015:17th
    Sandra99 | 2014:24th, 2015:18th
    Chad Lee | 2013:31st, 2014:3rd, 2015:19th
    Sartograph | 2015:20th
    Marek Sobierajski | 2015:21st
    berkeso | 2015:22nd
    alexs2011 | 2015:23rd
    Mehrab | 2014:35th, 2015:24th
    katuinbouter | 2015:25th
    Kuvardin Evgeniy | 2014:33rd, 2015:26th
    craig.mcfarlane | 2015:27th
    PZOL | 2015:28th
    richdellheim | 2015:29th
    AnnaO | 2013:28th, 2014:13th, 2015:30th
    AndreyBerliner | 2014:23rd, 2015:31st
    Rytis Budreika | 2013:22nd, 2014:8th, 2015:32nd
    Michal P. | 2015:33rd
    tonyC | 2014:26th, 2015:34th
    Talebian | 2015:35th
    TZ | 2015:37th
    Karel_Prech | 2015:38th
    Enrico Rebecchi | 2015:39th
    VictorD | 2014:37th, 2015:40th

    Results in from 2015 Database Design Championship


    You will find below the rankings for the Database Design Annual Championship for 2015. The number next to the player's name is the number of times that player has participated in a championship. Below the table of results for this championship, you will find another list showing the championship history of each of these players. 

    Congratulations first and foremost to our top-ranked players:

    1st Place: Andrey Zaytsev of Russia

    2nd Place: SteliosVlasopoulos of Belgium

    3rd Place: pavelz of Czech Republic

    Next, congratulations to everyone who played in the championship. I hope you found it entertaining, challenging and educational. And for those who were not able to participate in the championship, you can take the quizzes through the Practice feature. We will also make the championship as a whole available as a Test, so you can take it just like these players did.

    Finally, our deepest thanks to Chris Saxon, the Database Design Quizmaster, for coming up with another great set of championship quizzes, as well as our reviewers, in particular Elic, who has once again performed a service to the community "above and beyond."

    Steven Feuerstein

    Rank | Name | Country | Total Time | % Correct | Total Score
    1 | Andrey Zaytsev (2) | Russia | 33 mins 47 secs | 87% | 4815
    2 | SteliosVlasopoulos (2) | Belgium | 33 mins 55 secs | 84% | 4664
    3 | pavelz (2) | Czech Republic | 38 mins 28 secs | 82% | 4496
    4 | Maxim Borunov (2) | Russia | 35 mins 07 secs | 79% | 4360
    5 | mentzel.iudith (2) | Israel | 38 mins 36 secs | 79% | 4346
    6 | whab@tele2.at (1) | Austria | 16 mins 19 secs | 76% | 4285
    7 | JustinCave (2) | United States | 31 mins 53 secs | 76% | 4222
    8 | Karel_Prech (1) | Czech Republic | 19 mins 01 secs | 74% | 4124
    9 | katuinbouter (1) | Netherlands | 20 mins 42 secs | 74% | 4117
    10 | JeroenR (1) | Netherlands | 21 mins 26 secs | 74% | 4114
    11 | Eric Levin (2) | United States | 20 mins 47 secs | 71% | 3967
    12 | siimkask (2) | Estonia | 26 mins 12 secs | 71% | 3945
    13 | Sachi (2) | India | 20 mins 26 secs | 68% | 3818
    14 | tonyC (1) | United Kingdom | 26 mins 06 secs | 68% | 3796
    15 | João Borges Barreto (2) | Portugal | 30 mins 57 secs | 68% | 3776
    16 | msonkoly (1) | Hungary | 37 mins 47 secs | 68% | 3749
    17 | coba (2) | Netherlands | 15 mins 17 secs | 66% | 3689
    18 | Chase (2) | Canada | 17 mins 45 secs | 66% | 3679
    19 | Sherry (2) | Czech Republic | 21 mins 32 secs | 66% | 3664
    20 | kbentley1 (1) | United States | 28 mins 27 secs | 66% | 3636
    21 | krzysioh (2) | Poland | 32 mins 45 secs | 66% | 3619
    22 | Rytis Budreika (2) | Lithuania | 10 mins 07 secs | 63% | 3560
    23 | Joaquin_Gonzalez (2) | Spain | 15 mins 07 secs | 63% | 3540
    24 | seanm95 (2) | United States | 18 mins 42 secs | 63% | 3525
    25 | Marek Sobierajski (1) | Poland | 22 mins 15 secs | 63% | 3511
    26 | Michal P. (2) | Poland | 18 mins 33 secs | 61% | 3376
    27 | ted (1) | United Kingdom | 28 mins 11 secs | 61% | 3337
    28 | Kuvardin Evgeniy (2) | Russia | 32 mins 23 secs | 61% | 3320
    29 | MarcusM (2) | Germany | 35 mins 24 secs | 61% | 3308
    30 | NielsHecker (2) | Germany | 39 mins 32 secs | 61% | 3292
    31 | Pavel_Noga (2) | Czech Republic | 39 mins 48 secs | 61% | 3291
    32 | JasonC (2) | United Kingdom | 21 mins 05 secs | 58% | 3216
    33 | AnnaO (2) | Ireland | 20 mins 06 secs | 55% | 3070
    34 | PZOL (2) | Hungary | 33 mins 55 secs | 53% | 2864
    35 | manfred.kleander (2) | Austria | 36 mins 29 secs | 53% | 2854
    36 | NickL (1) | United Kingdom | 38 mins 37 secs | 53% | 2846
    37 | VictorD (2) | Russia | 14 mins 01 secs | 47% | 2644
    38 | umir (1) | Italy | 14 mins 12 secs | 16% | 843

    Championship Performance History

    After each name, the year in which he or she played, and the ranking in that championship.

    Name | History
    Andrey Zaytsev | 2014:9th, 2015:1st
    SteliosVlasopoulos | 2014:7th, 2015:2nd
    pavelz | 2014:2nd, 2015:3rd
    Maxim Borunov | 2014:23rd, 2015:4th
    mentzel.iudith | 2014:5th, 2015:5th
    whab@tele2.at | 2015:6th
    JustinCave | 2014:3rd, 2015:7th
    Karel_Prech | 2015:8th
    katuinbouter | 2015:9th
    JeroenR | 2015:10th
    Eric Levin | 2014:10th, 2015:11th
    siimkask | 2014:13th, 2015:12th
    Sachi | 2014:33rd, 2015:13th
    tonyC | 2015:14th
    João Borges Barreto | 2014:41st, 2015:15th
    msonkoly | 2015:16th
    coba | 2014:14th, 2015:17th
    Chase | 2014:20th, 2015:18th
    Sherry | 2014:26th, 2015:19th
    kbentley1 | 2015:20th
    krzysioh | 2014:37th, 2015:21st
    Rytis Budreika | 2014:29th, 2015:22nd
    Joaquin_Gonzalez | 2014:21st, 2015:23rd
    seanm95 | 2014:1st, 2015:24th
    Marek Sobierajski | 2015:25th
    Michal P. | 2014:30th, 2015:26th
    ted | 2015:27th
    Kuvardin Evgeniy | 2014:19th, 2015:28th
    MarcusM | 2014:32nd, 2015:29th
    NielsHecker | 2014:15th, 2015:30th
    Pavel_Noga | 2014:24th, 2015:31st
    JasonC | 2015:32nd
    AnnaO | 2014:31st, 2015:33rd
    PZOL | 2014:38th, 2015:34th
    manfred.kleander | 2014:28th, 2015:35th
    NickL | 2015:36th
    VictorD | 2015:37th
    umir | 2015:38th

    EBS Financials February 2016 Recommended Patch Collections (RPCs) Released!

    Oracle E-Business Suite (EBS) Financials Development has released February 2016 Recommended Patch Collections (RPCs) for the following products:
    • Assets
    • Cash Management
    • Collections (coming soon)
    • E-Business Tax
    • Calculation
    • Reporting Ledger
    • iReceivables
    • Loans
    • Payables
    • Receivables
    • Sub-Ledger Accounting
    • Payments
    • Internet Expenses
    For details and the complete list of available RPCs for Oracle EBS Financials, please see Doc ID 954704.1, EBS: R12.1 Oracle Financials Recommended Patch Collection (RPC).

    Bridging the Gap Between Mobile and Customer Service


    By Daniel Foppen, Senior Principal Product Manager, Oracle Service Cloud

    Twenty years ago, mobile devices were just getting started.  In fact, back in 1995 only one percent of the population had access to a mobile device. Today, there are over 5.2 billion mobile phone users comprising 73% of the global population.  Mobile devices now have an impact on just about every part of our daily lives – from communication and social interaction to mobile commerce.  To say that mobile is a trend is an understatement.  The rise of mobile is fundamentally changing the way we interact – and is spawning a whole new generation of technology, applications and businesses.  Particularly within the service space, mobile is not only pushing how organizations should assess evolving customer engagement, but how best to tackle mobilizing the modern customer service organization. 

    We see a trend in business software that is focused around the mobile experience, in which employees across the enterprise use software for a wide variety of functions including customer service, sales force automation, collaboration and communication, all while on the move, using their phones and tablets. There is great value in terms of agility, productivity and employee experience to increase your organization’s mobility.  Yet, we would encourage you to not translate this into, "we need a mobile app or responsive user interface for all of our software."

    There are use cases in which it makes sense, and there are use cases in which it clearly doesn’t. A customer service representative (CSR) working for a large B2C contact center, handling complex cases from many different channels, has a need for a highly productive work environment.  It just doesn't make sense to try to make that CSR handle these cases on a mobile phone or tablet.  A sales representative on the road, or a field service representative however, is on the move every day. In both scenarios, a mobile experience makes perfect sense.

    To Mobile, Or Not To Mobile (That’s The Question)

    Before jumping into relevant use cases, it is helpful to clarify a common misconception about mobile: Mobile isn't just about mobile phones.

    A lot of investments have gone into making specific applications for specific types of devices, e.g. a desktop application, a mobile application or a tablet application. Yet, it becomes less and less important to talk about device-specific software, as the lines between these categories are blurring.  Mobile is about understanding specific tasks and use-cases, providing the tools that make the greatest impact, and making sure these different tools are consistent and connected. Let’s review some use cases within different areas of customer service…

    Mobile Scenarios in Customer Service

    Agents working in multi-channel contact centers spend the majority of their day solving cases coming in from a range of different channels. They need an interface in which productivity is key. They need all the context and data available to solve the customer issue as efficiently as possible. They need a unified desktop, integrated with sensitive data from back-end systems through behind-the-firewall integration. Also, they are likely using two or three big monitors (flanked by yellow post-it notes and cute pictures of kids and dogs). Clearly this is not a great use case for mobile.

    However, when you think about supervisors and managers that walk around the contact center, mobile access could be of great value.  Still, mobile access doesn't necessarily mean this persona would access the system through a mobile phone. Supervisors and managers may want to monitor their operations, yet get deeper into cases when needed. Access through a tablet would probably make most sense.

    Similarly, when customer service is decentralized and service is delivered via face-to-face support in stores, at airports, front-desks, branches, etc. users will occasionally need to review cases, update contact information and access customer product information. They will need easy access to this information on a computer, laptop or tablet outside the contact center in order to deliver a connected customer experience.

    Uberization Of Field Service

    When determining where to apply a mobile experience, it might be easy to overlook some of the most obvious use cases. Let’s explore the ultimate mobile use case: field service.  Advancements in mobile technology have not just changed how field service representatives engage with a device, but also the type of work they perform, as well as how they manage their day.

    Today, customers expect every service agent they engage with to solve all of their problems. For field service, this means that the customer expects a field representative to understand everything that has occurred in the service journey before arriving onsite for a job.  In addition, the customer expects the field representative to have the same abilities and tools as every other person in the customer service organization. The result is that all of these new tasks need a mobile interface that can quickly be accessed by a field service representative. 

    Furthermore, advancements in mobile technologies are allowing a complete shift in how field organizations are structured and managed. Mobile technology and the sharing economy are now allowing for non-centralized field service organizations. This is a trend we refer to as the “Uberization” of field service, which means that through mobile access and automation, the field can dispatch their own work, create their own schedules, and make adjustments as the day changes, all while operating at an optimal level.

    Complex Service On The Move

    Another area where we see great mobility use cases is complex rule or policy processes, for example immigration cases. You would typically associate officials assessing such cases with office desks, lengthy forms, rubber stamps, and long queues of applicants waiting outside. Now, with greater numbers of refugees entering Europe, we have seen solutions that equip officials to go outside their offices, right where the refugees are arriving, and conduct the assessment on the spot with a tablet app and simple interview screens to determine the appropriate asylum status.  Mobile decisioning is providing better agility by enabling consistent service regardless of device or channel.

    Don’t Forget Your Customers

    25% of our customers’ customers already use a mobile device to navigate to your support portal. Is your website prepared for that? Using responsive design you can ensure the support section on your website is presented in the optimal way for each type of screen. Also make sure your knowledge articles are structured in a way the content can be easily consumed on a smaller screen.  In addition to self-service and knowledge we would also recommend looking at mobile use cases for assisted service experiences. For instance, with in-app mobile co-browse, live chat over mobile phones, as well as video chat.

    Mobile is undoubtedly changing both our personal and professional lives. Customer service organizations should decide on a strategy to bridge the gap between mobile and customer service. This requires a strategic review of value drivers, combined with a tactical search for relevant use cases.

    Don’t fall in the “we need an app for everything” trap – some users need big screens, some users don’t.  Investigate how to use mobile technologies to change your field technicians into versatile brand ambassadors, and explore opportunities to increase agility and mobility by bringing complex policy and rule processes to a mobile environment.  Finally, consumers will ever more use their mobile devices to contact you, so your website and contact centers need to be ready for this new reality.

    An Interview with Scott Lynn, Senior Manager, Solaris Product Management, Oracle Corporation


    Guest Blogger:
    Torrey Martin
    Fujitsu M10 Product Specialist
    Fujitsu-Oracle Center of Excellence  




    The Oracle Solaris operating system has been powering servers around the world for decades and continues to set the bar in performance, reliability and security. The Fujitsu M10 SPARC server family runs Oracle Solaris exclusively, and the qualities of the OS and server combine to provide applications running around the world with mission-critical, high-performance peace of mind. Recently the Fujitsu Center of Excellence for Oracle team jumped at the opportunity to interview one of Oracle Solaris’ key evangelists. 

    How long has Oracle Solaris been around and what changes have been the most important or had the largest impact on customers?

    Solaris was first launched in 1992, and in 2005 we released Oracle Solaris 10, which had a major impact for our customers; it featured Solaris ZFS, Zones or Solaris Containers, as well as DTrace, which brought big, new capabilities to our customers. In fact, a large number of our customers are still running Oracle Solaris 10 today.

    Oracle Solaris 11 is even more impressive. For example, our patching mechanism in Oracle Solaris 11 is revolutionary and has provided our customers with a 16X reduction in patching time over Red Hat Enterprise Linux. Updates are easy, we’ve dramatically shrunk the time and effort needed to patch. You don’t have to build a custom patch set, which is one of the things that used to take so long. Oracle Solaris updates come as one complete, pre-tested patch set. Plus, with Solaris Boot Environments you can patch systems while they’re running so the only downtime you experience is a fast reboot. With Oracle Solaris, it now takes just minutes to patch a server; even a very large system with 10,000 disk drives and 20 different network interfaces. On a simple Windows PC, patching can take the system offline for 30 minutes or more! Think of the security and time savings that Oracle Solaris 11 provides. When a vulnerability is discovered we release the patch, you type “pkg update,” you reboot, and you are back up and running in minutes.

    And that’s just the initial release of Oracle Solaris 11. Since then, we’ve created a technology called Unified Archives, making it easier to manage cloud environments. It’s a flexible way of taking a snapshot of a system, and redeploying on any other system using Oracle Solaris virtualization technologies– regardless of which virtualization technology they were created in, or the size.

    In terms of security and from my experience, rather than a full install that requires an iterative process of disabling functions, Oracle Solaris can be installed with a minimal package and then only the required functions added while maintaining PCI-DSS compliance. The audit process is much easier. This is especially important for customers in the e-commerce and financial fields, and it makes Oracle Solaris less costly to secure than Linux.

    How is Oracle Solaris evolving to meet the needs of cloud, mobile, scale-out, IoT, etc?

    We integrated OpenStack, the fastest growing open source project in history, into Oracle Solaris 11.2, giving customers a full cloud management infrastructure and a set of APIs. I want to point out that we didn’t just add OpenStack to Oracle Solaris; we actually integrated the two. For example, OpenStack works with our system management software, so if a VM service running in an OpenStack cloud cluster of 1,000 machines goes down, we automatically restart services, so it never “goes down.”

    For scale-out environments, we have integrated Puppet into Oracle Solaris 11 and continue to work with other open source technologies.

    In terms of mobile and the Internet of Things, with everything being browser-based these days, you can use Solaris technologies to make your back-end server infrastructure secure, giving you the assurance that when you connect a device to your network, your database servers are secure, and those machines can’t be used by cyber criminals to infiltrate your datacenter.

    Is Solaris still relevant in the datacenter space?

    As much as some people like to say we’re not, we absolutely are. Security is a top priority and Oracle Solaris gives you so many built in security capabilities, which when used together, can protect you from attacks. Another big technology in Oracle Solaris 11.3 is the virtual memory system. We are able to demonstrate one of the advantages of Oracle Solaris over Linux in terms of database start-up time. We took two identical x86 boxes with two disk drives side-by-side: one with Solaris installed, one with Red Hat Enterprise Linux installed. When we started the Oracle Database (6TB memory, 5TB SGA) running on Red Hat Linux, it took 51 minutes for database start-up. With the identical Oracle Database running on Oracle Solaris, it took only 166 seconds (~2.8 minutes) to start the same configuration. The Solaris team has and continues to work with the Oracle Database team to provide additional benefits for using Oracle Solaris.

    We’re also building out a program to make Oracle Solaris and SPARC readily available to the open source community so anyone can develop and test on top of Solaris, making it easy for SPARC and Oracle Solaris to be the default platform.

    Is Oracle Solaris still relevant in the independent software vendor space (ISV)?

    For the ISV community, take a look at our software investments. We doubled the size of our Solaris development team since the Sun acquisition (2010), and we’re investing heavily to make Solaris even more secure and easier to use. We give you a set of developer tools called Oracle Solaris Studio that ISVs, and even customers, are using. It supports multiple platforms (SPARC/Solaris, x86/Solaris, other OSes), so you can find cross-platform performance bugs.

    Customers have told us that our tools are easier to use and so much better for diagnosis. The Solaris tools give them a 50 percent increase in developer productivity. By the way, only the Solaris tools can go from stack trace to Java down to C.

    We’re working hard with ISVs to give them everything they need. ISVs see the value from deploying on Oracle Solaris and their customers are asking for it.

    Security is increasingly important in a world with billions of end-point devices and cloud-based apps – how secure is Oracle Solaris?

    In my opinion, Oracle Solaris is the most secure operating system out there. In addition to Solaris packaging, the inherent way it works, and PCI-DSS compliance testing, we offer Immutable Zones.

    Immutable Zones let you set the hypervisor, guest, and host OS to read-only - not even writeable by root. Also, by default, Oracle Solaris doesn’t have a root user; it’s all role-based access control (RBAC) to carefully regulate who sees what. This is increasingly important, because almost every major attack today involves someone getting escalated root privileges, allowing them to run malicious code. And these aren’t “smash and grab” attacks. They want to be in there for months; scanning systems, looking for vulnerabilities like LDAP/Active Directories to attack, to gain access to user names, passwords, and other data.

    Because Oracle Solaris provides read-only systems, cyber criminals simply cannot land. Even if they somehow get into your network, they can’t gain a foothold. So you can use Immutable Zones and Kernel Zones to isolate or “DMZ off” your web-tier.

    Why use Oracle Solaris over Linux?

    This is really simple. In my opinion, Solaris is more secure than Linux. Oracle Solaris is simpler to manage than Linux. Paired with today’s high-performing hardware, Oracle Solaris is more efficient and much easier to manage and maintain than Linux.

    Earlier, we talked about the 16X advantage over Red Hat, allowing Oracle Solaris administrators to spend less time on patching and updating. Add to that our unified archive capability, which allows you to take a snapshot of a machine running multiple VMs, encrypt it, and then deploy all or part of it – and that’s with any VM, any virtualization technology, any size.

    We have one customer who runs our compliance tool over his entire datacenter to get a weekly report to know everything is fine. In this case, we’re taking 30 to 60 percent of compliance spend and reducing it by as much as 10X, which frees up all that extra money for the customer to innovate in the datacenter.

    What advantages do features like ZFS, Zones, and DTrace offer customers?

    Let’s start with DTrace. DTrace allows customers to analyze how their systems and software are running; giving them an in-depth view of the system and what the software running on that system is doing at any time and in real-time. The only other way to do this is to build this capability into the application software itself, but the nice thing about DTrace is that it’s built into the operating system, and it’s safe to run in production.

    Various versions of Linux have tried to implement something like DTrace unsuccessfully. A blogger friend of mine, who uses both Linux and Oracle Solaris, was trying to diagnose a problem in Red Hat Linux using a tool with similar functionality and it crashed the production server!

    Oracle Solaris Zones are basically zero-overhead virtualization. Truly, there’s no additional overhead, plus it’s built into the operating system so you not only get better performance when you’re virtualized, but you also need to buy fewer machines. As a comparison, a traditional type 2 hypervisor can use up to 40 percent of the processor just to manage the environment. You don’t incur that overhead with Oracle Solaris - even when the number of virtual environments on the system gets large. With Solaris Kernel Zones, you get all the flexibility of a hypervisor, without the performance penalty normally associated with virtualized environments or the dollar penalty required to license that virtualized environment. Bare-metal performance without spending money!

    Next, we have our advanced file system, Solaris ZFS. Besides being ultra-reliable and having the ability to detect and fix corruption at the disk block level before it happens, ZFS builds in compression, de-duplication and encryption. Why is this important? Compression and de-dupe result in big savings for our customers. Customers use ZFS to compress data in their datacenters and get from 3X to 22X improved compression rates, depending on the data sets. Compressing data sets for your database or your application at this rate is phenomenal – it means one-third fewer disks needed, one-third cost on disks, not to mention the floor space saved, much lower power and cooling needs, and lower administrative costs.

    In terms of security, cryptographic engines are built into the processors today and Solaris automatically uses these cryptographic engines to achieve lightning-fast cryptography. It’s so fast that you don’t even question what to encrypt, you just encrypt everything.

    Are there advantages to running Oracle software products on Solaris?

    Yes! Our Solaris kernel engineers work side-by-side with Oracle Database developers in order to make the database run better on Oracle Solaris. This shows up in many ways, one being the dynamic resizing of database shared memory. We can resize memory up and down, which means you can actually resize VMs running the database. For example, say there are certain peak times of year when your database needs maximum resources. Oracle Solaris allows you to easily allocate more memory or CPU power, or shrink them back down so resources can be used by another VM without having to take them offline.

    If you could dispel one rumor or misunderstanding about Oracle Solaris, what would it be?

The biggest misunderstanding about Oracle Solaris is that we're not innovating, but I hear this from customers who are running Solaris 8, 9 and 10! Once these customers move to Oracle Solaris 11, they will see an amazing amount of innovation going on. As I mentioned, we've doubled the size of the Oracle Solaris development team, and there are a large number of people, all over the world, working on Oracle Solaris today.

Another thing I hear is that Oracle Solaris is not open. While the kernel is not open, 90% of the software shipped with Oracle Solaris is open source. With Oracle Solaris 11.3, we announced that all of our open source software is freely available for anyone to update. With an entire community available to detect and fix bugs, the overwhelming majority of bug fixes in Oracle Solaris happen in free and open source software and are available as soon as the fix hits our release repository.

    What do you think of the Fujitsu M10 servers and the Fujitsu SPARC64 X/X+ processors?

The Fujitsu SPARC64 X and X+ processors have a unique feature that has always intrigued me: they provide hardware acceleration for Oracle NUMBER, so sequences are faster. You can accelerate calculations since the work is offloaded from the software and done in hardware in a nanosecond or two. So anytime there is math involved in the database, the database is going to run much, much faster. I was an Oracle Database engineer for 8 1/2 years, responsible for sequences and Oracle NUMBER, and always thought, “Why don’t we have that in our servers?”

    We have a very strong relationship with Fujitsu and expect that to continue. Fujitsu is one of the few companies that has access to Oracle Solaris source code, and that has to do with the strong relationship. Our customers win because they get to pick the best hardware that meets the specific needs of their deployments.
