Channel: Oracle Bloggers

Forget loyalty. Attachment is what really brings customers back!



Philip Graves, a psychologist specializing in consumer behavior, has examined the difference between customer loyalty and customer attachment.

He offers food for thought on questions and challenges frequently encountered in customer service:

  • why you are wasting your time trying to build customer loyalty
  • the feelings that really drive your customers
  • the most relevant metrics
  • how to design more effective service initiatives in the digital age.

Want to learn more about how to address these challenges? Download your copy of the study now: bit.ly/DocDogcampaign


Coherence 12.1.3 New Features by Craig Blitz



Oracle's Coherence product management director, Craig Blitz, introduces the 12.1.3 release and its two key themes of developer productivity and developer agility. New features such as JCache and the recently added support for the MemCached API are also mentioned. Watch the video here.

For more information visit the Coherence tag (WebLogic Community membership required)

WebLogic Partner Community

For regular information, become a member of the WebLogic Partner Community. To register, please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.



Test-driven development using the Oracle SOA Suite by The Cattle Crew


As in all software projects, quality assurance with thorough testing is a key success factor in integration projects. Test-driven development focuses exactly on this aspect: unit and integration testing of integration elements must be done as early as possible. The tests used in these phases must also be reproducible so that they can be run automatically whenever the integration logic changes, thus guaranteeing that no unintended changes are made.

Oracle SOA/BPM Suite is a powerful tool suite for integration. This article shows how test-driven development can be done with the Oracle tooling. Integration elements built with Oracle SOA/BPM Suite are SCA composites made up of several components. Since this is mainly the integration layer, there is quite often heavy usage of external web services, database adapters, etc. The composites also usually have an inbound interface whose invocation is the starting point of the integration logic.

As can be seen from the above, the key to testing SCA composites is to define an inbound message and assert the data found in the various other messages produced by the integration logic. Some easy scenarios can be identified instantly. XSLT transformations within the composites, for example, can be unit-tested without the complete logic itself. Likewise, end2end testing is also easily done – at least for a web service interface – thus assuring at least that the result is what we expected. The problem is that for end2end testing, the correct functionality of the back-end systems is a prerequisite. If the back-end functionality is developed in parallel with the integration logic, end2end testing is far away from test-driven development. Read the complete article here.

SOA & BPM Partner Community

For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. To register, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.


Java: 20 Years at the Pace of Innovation


On May 23, twenty years ago, the first public version of Java was released (see the timeline of the platform's twenty-year history). It was a very fortunate moment, coinciding with a new phase in the spread of the internet and the growing role of technology in improving business efficiency, refining business processes, and creating new ways for companies and their customers to interact.

One way to gauge the importance of a programming language (especially one as popular as Java) is by how it enables new applications of technology. And Java is doing just fine on that score. The big data revolution, for example, is by and large Java's doing.

Today a significant share of server-side software is written in Java. The Internet of Things is also growing largely on the strength of devices running Java.

Two decades ago, however, a programming language faced different demands. It had to be a good general-purpose language for the desktop.

Java arrived at an extremely important moment in the history of programming. Until then, software development had been dominated by three languages: Fortran in scientific computing, COBOL in business, and C (C++ was only beginning to spread) in virtually every other form of commercial programming.

Less popular languages filled narrow niches: Ada (the military), Pascal (hobbyist programming and small-business software), Smalltalk and Lisp (academia), Perl (system administrators), and so on. But the foundation was, of course, the big three.

Dissatisfaction with C

Be that as it may, dissatisfaction with C was gradually growing. At the time it had two major drawbacks: first, it was too low-level, requiring far too many statements to accomplish even the simplest tasks; second, it was not portable - code written for the PC platform could not simply be run on a minicomputer or a mainframe.

Writing software in a low-level language feels like mowing a lawn with scissors. Working on such projects is tedious and exhausting.

Although by 1995 many vendors had adopted the 1989 standard codifying the C language specification, each of them had unique extensions that made porting code to a new platform noticeably harder.

So it is no coincidence that a whole constellation of new languages was born at exactly that time. In 1995 alone, Ruby, PHP, Java and JavaScript appeared.

Java became popular with mainstream developers almost immediately, thanks to its portability and its large set of built-in libraries. Java's slogan back then was "Write once, run anywhere." And it really worked. That is exactly why Java became an excellent choice for business applications that needed to run on different platforms.

The support Java subsequently received from IBM (particularly through Project San Francisco) established Java as the primary programming language for business.

When a language achieves that kind of popularity, a long life is practically guaranteed. The fact that all of the languages listed two paragraphs above successfully cross the twenty-year mark this year fully confirms this. Java stands out in that list, however, for how much the language and the platform have evolved over that time.

One of the most obvious examples of positive change is the improvement of the Java Virtual Machine (JVM). The JVM is what provided code portability from the very beginning, but that portability used to come at the cost of performance. As the JVM has matured, the need for that trade-off has all but disappeared.

Continuous improvement

Today Java is one of the fastest programming languages; it scales beautifully and can handle enormous workloads. The big data phenomenon, made possible precisely by Java's capabilities, fully confirms this.

Naturally, early Java had its sharp edges and assorted growing pains, but continuous improvement has turned it into a tool capable of handling practically any task.

Java 8, for example, brought with it some capabilities from functional programming that make code more compact, more reliable, and more expressive.

The details of Java's history are so widely known that it is easy to forget how rare all of this really is. Far from every language receives such broad and sustained backing. The only real comparison is C# (together with the .NET runtime) from Microsoft.

At one point there was hope that large developer communities could drive this kind of change on their own. And indeed, the pace at which development tools were evolving at the time gave programmers grounds for such confidence. It soon turned out, however, that those tools were pleasant exceptions rather than harbingers of what was to come.

In celebrating Java's twentieth anniversary, the main achievement worth highlighting is not the longevity of the language itself, but the unflagging pace of innovation made possible by continuous investment.

Demantra Worksheet Performance - A summary guide at Customer Request


Worksheet performance.  There are dozens of notes.  It can be challenging to find the best approach. 

  • If you are on 7.3.1.4 or greater, see the following three notes.  Upgrade to the latest version of TABLE_REORG.  Run TABLE_REORG with the 'T' option and review the suggestions in the LOG_TABLE_REORG table.
  • Demantra TABLE_REORG procedure. Did you know that TABLE_REORG has replaced rebuild_schema and rebuild_tables? (Doc ID 2005086.1)
    - Demantra TABLE_REORG Tool New Release with Multiple Updates! Partitions, DROP_TEMPS and More! 7.3.1.3 to 12.2.x. (Doc ID 1980408.1)
  • If you hit an error, see: Demantra table_reorg Procedure Failed ORA- on sales_data mdp_matrix promotion_data How do I Restart? rupd$_ mlog$_ I have Table cannot be redefinitioned in the LOG_TABLE_REORG table (Doc ID 2006779.1)

I would consider these notes to be the best regarding worksheet performance:

  • Oracle Demantra Worksheet Performance - A White Paper (Doc ID 470852.1)
  • Oracle Demantra Worksheet Performance FAQ/TIPS 7.3+! (Doc ID 1110517.1)
  • Demantra 12.2.4 Worksheet Performance Enhancements Parameter dynamic_hint_enabled, Enable Dynamic Degree of Parallelism Hint for Worksheets. Development Recommended Proper Setup and Use (Doc ID 1923933.1)
  • Demantra Development Suggested Performance Advice Plus Reference Docs (Doc ID 1157173.1)
  • Oracle Demantra Worksheets Caching, Details how the Caching Functionality can be Leveraged to Potentially Increase Performance (Doc ID 1627652.1)
  • The Column Prediction_Status, MDP_Matrix and Engine. How are they Related? Understand Prediction_status Values (Doc ID 1509754.1)

Also, see:
  • Demantra Gathering Statistics on Partitioned Objects Oracle RDBMS 11gR2 (Doc ID 1601596.1)
  • Demantra 11g Statistics new Features and Best Practices Gather Schema Stats (Doc ID 1458911.1)

I would review all parameters mentioned in the docs above and:

1. Monitor the workstation memory consumption and CPU utilization as the worksheet is being loaded.
   * You may have to adjust the memory ceiling for Java
2. Manage MDP_MATRIX.  Are there dead/unused combinations?  When running the engine, you can manage the footprint of the input.  If MDP_MATRIX
   is carrying sizeable dead combinations and/or entries without a matching entry in SALES_DATA, you are increasing processing load.  Check out
   note 1509754.1.  The attachment explains the principle.
3. Using the notes above, can you cache?  Can you use filters?  Can you use open with? 
   A series can be cached, aggregated by item and cached in the branch_data_items table.  This improves performance of worksheets that are aggregated
   across locations and that do not have any location or matrix filtering.
4. Run the index advisor.  Does it suggest additional indexes? 
5. If you do not have the index advisor, produce an AWR report.  The AWR snapshots should bracket the worksheet open (see the sketch after this list).
   For example, take a snapshot.  Wait 10-15 minutes.  Tell the user to open the worksheet.  After the open succeeds, wait 10 minutes.  Take a second
   snapshot and generate the report for that interval.  What are the top SQLs?  What are the contentions?
6. Do you have your large tables on their own tablespaces?  This means each large table has a tablespace to itself, and each large index has a
   tablespace to itself.
7. The worksheet is retrieving rows to display.  Is row chaining causing multiple block reads?  That should be revealed in the AWR, or run the
   appropriate SQL.
8. Worksheet design is important.  The worksheet designers set up what they need.  However, that does not mean that the worksheet design blends well
   with the available processing capabilities.  Know the forecast branch health.  I think this is discussed in note 1509754.1.  The following SQL
   reveals the tree:

   select level_id,count(*) from mdp_matrix
   where prediction_status = 1
   group by level_id
   order by level_id

   If you have one branch at 100,000 combinations and the remaining branches at 5,000 and 10,000, that is a problem and points to a setup/design issue.
   That is, if branch is a level and one branch indeed has 100,000 combinations while the other two branches account for a smaller volume, 5,000 and
   10,000, the chosen levels of the worksheet need to be revisited.  Perhaps a level lower than branch is better suited to processing the data.  While
   this and #2 above are probably out of your control, it will help explain the worksheet loading and engine processing time.
9. Reduce the amount of memory that your worksheet selects:
   - Remove series if possible
   - Reduce the span of time
   - Apply filters
10. Review all server and client expressions.  Are they affecting performance?
11. Run DROP_TEMPS
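
A minimal sketch of the snapshot bracketing mentioned in step 5, assuming the Diagnostics Pack is licensed and DBMS_WORKLOAD_REPOSITORY is accessible (the FETCH FIRST syntax is 12c; use ROWNUM on 11g):

    -- take a snapshot just before the user opens the worksheet
    exec dbms_workload_repository.create_snapshot;

    -- ... user opens the worksheet; wait ~10 minutes after the open completes ...

    -- take the closing snapshot
    exec dbms_workload_repository.create_snapshot;

    -- identify the two snapshot IDs just taken
    select snap_id, begin_interval_time
      from dba_hist_snapshot
     order by snap_id desc
     fetch first 2 rows only;

    -- generate the report for that begin/end pair:
    -- @?/rdbms/admin/awrrpt.sql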

Save-the-date: Oracle Commerce workshop for EMEA partners: Amsterdam/Utrecht June 23-24-25th


Start Building Your Commerce Cloud Practice! We’re bringing our exclusive three-day workshop to our top partners in EMEA: the EMEA Oracle Commerce Cloud Service Workshop takes place in Amsterdam/Utrecht, Netherlands on June 23, 24, and 25.

Join Oracle Commerce Cloud Product Management and Product Development over the course of 3 days as we dive into Oracle Commerce Cloud Service.  

Oracle Commerce Cloud Partners are key players in the success of this new solution. Oracle Commerce Cloud is focused on simplicity and rapid deployment.  With a prebuilt Responsive Design storefront, reusable widgets and intuitive tools to manage the catalog and experience, you can get your customers live in weeks, not months.

Day 1 – Amsterdam (airport hotel, allowing same day fly-in & fly-out), June 23

Day 1 is the chance for business and technical users to learn the basics about Oracle Commerce Cloud and our go-to-market strategy so you can identify new opportunities for your businesses.

Day 2 & 3 – Utrecht (NL), June 24 & 25

Days 2 & 3 are for technical users to get a deep dive into Oracle Commerce Cloud with topics such as designing sites, managing product catalogs, leveraging integration points, and working with the API-first architecture.

Save the date, and start planning. Registration will open soon and seats will be extremely limited, so please register early.

Day 1 will be open to all OPN members; participation in Days 2 & 3 will be more limited due to space and training-format considerations. Only a limited number of registered partners will be confirmed for Days 2 & 3.

If you have any questions, please contact richard.lefebvre@oracle.com.

Facebook Friday: Top 10 ArchBeat Posts for the Week - May 10-16, 2015


Finish your coffee, then check out the Top 10 most popular updates from the OTN ArchBeat Facebook Page from the last seven days, May 15-21, 2015.

  1. Happy Birthday, Java!

  2. Help Wanted: OTN Systems Community Manager

    Want to work for OTN? The Oracle Technology Network is looking for a Systems Community Manager. For more detail, click the link to access the job listing on LinkedIn.

  3. Stream Explorer and JMS for inbound and outbound interaction | Lucas Jellema

    Oracle ACE Director Lucas Jellema examines the interaction between Stream Explorer and JMS, with a focus on a use case involving small temperature sensors distributed throughout a building to measure the local room temperature every few seconds and then report it over JMS.

  4. Podcast: API Management Roundtable

    The topic of API Management is getting hotter by the minute. In this four-part OTN ArchBeat Podcast four experts discuss what’s behind the increased interest, and offer some suggestions on how you can make API Management… well, manageable.

  5. Video: Getting started with Arquillian on WebLogic Server | Phil Zampino

    In this video Phil Zampino of the Oracle WebLogic Server Development Team shows you how to get started with Arquillian to test applications running in WebLogic Server.

  6. It was my great pleasure to announce software architect and NEOOUG seminar chairman Rumpi Gravenstein's new status as an Oracle ACE Associate during the ACE Dinner this week at the Great Lakes Oracle Conference in Cleveland, OH.

  7. Get Where You're Going: Training and certification decisions are key junctures on your career path

    Somewhere along the timeline that connects the latest version of you with your various prior releases are points at which you made certain decisions—among them choices about training and certification to enhance your skills and your marketability. In this Oracle Magazine article community members discuss their choices.

  8. Packing them in for Fabrix Analytix founder Jeffrey Needham's "Possibilities in Disruption" keynote at the Great Lakes Oracle Conference, May 20, 2015 in Cleveland, OH.

  9. Oracle ACE and BI expert Christian Screen leading a Great Lakes Oracle Conference session on Essbase and OBIEE.

  10. Video: Real Time Analytics with GoldenGate and Oracle BI Applications | Christian Screen

    Oracle ACE Christian Screen sets the bar high with this excellent Business Intelligence tech tip delivered in record-setting time.

Friday Spotlight: Oracle VM Server for SPARC Best Practices White Paper Updated

The white paper Oracle VM Server for SPARC Best Practices has been updated to reflect enhancements introduced with Oracle VM Server for SPARC 3.2.

This paper shows how to configure Oracle VM Server for SPARC to meet demanding performance and availability requirements. Topics include:

  • Oracle VM Server for SPARC definitions, concepts and deployment options.
  • Software, hardware, and firmware requirements.
  • Best Practices for optimal performance.
  • Best Practices for resiliency and availability.

The paper includes specific recommendations, describes the reasons behind them, and illustrates them with examples taken from actual systems.

The update for Oracle VM Server for SPARC 3.2 describes enhanced multipath disk group performance, and I/O resiliency for SR-IOV devices. For further information, see the Oracle VM Server for SPARC document library page.


FRIDAY SPOTLIGHT: Getting Started with Docker on Oracle Linux


Happy Friday everyone! 

We have a great technical article for you this Friday.  In this article, you learn how to customize a Docker container image and use it to instantiate application instances across different Linux servers. This article describes how to create a Dockerfile, how to allocate runtime resources to containers, and how to establish a communication channel between two containers (for example, between web server and database containers). 

Ginny Henningsen writes: "Docker is exciting because it can easily capture a full application environment into a virtual container that can be deployed across different Linux servers. System administrators and software developers are learning that Docker can help them deploy application images on Linux quickly, reliably, and consistently—without dependency and portability problems that can inject delays into planned deployment schedules. Docker containers can define an application and its dependencies using a small text file (a Dockerfile) that can be moved to different Linux releases and quickly rebuilt, simplifying application portability. In this way, "Dockerized" applications are easily migrated to different Linux servers where they can execute on bare metal, in a virtual machine, or on Linux instances in the cloud."

As shown in Figure 1 of the article, Docker containers consume fewer resources than "heavyweight" hypervisor-based solutions. Hypervisor-based solutions host a full-blown operating system instance in each virtual machine guest, but this also allows them to support different operating systems. (Oracle VM, for example, can host Oracle Linux, Oracle Solaris, and Microsoft Windows in virtual machines.) 

Read more 

Lab tours put spotlight on 'simple' in Oracle's simplified UI

Zero Based Budgeting (ZBB) Considerations within Hyperion Planning


Zero based budgeting (ZBB) applications are becoming increasingly popular as a way to develop a budget, especially for lower growth organizations that are interested in cutting costs. The most significant difference between ZBB applications and traditional budget applications is the level of detail captured. Where traditional budgeting applications plan an expense item directly or using a price x quantity formula, ZBB applications will plan every line item related to that expense. For example, when budgeting supply expenses, a ZBB application will include values for each detailed line item, including pencils, pens, paper clips, etc. Given the level of detail required for ZBB applications, careful consideration needs to be taken in order to have optimal performance within Hyperion Planning.

The following questions need to be considered before designing a ZBB application within Hyperion Planning:

  • Does the additional ZBB detail require control over how the data is entered?

    • If yes, then add a separate line item dimension.

    • If no, then supporting detail functionality within Planning may be sufficient.

  • Does the detail have a direct relationship to an account?

    • If yes, then smart lists within the account dimension could be leveraged.

  • Is the ZBB detail relevant for the current budget? The application should include only those line items needed for the current budget and remaining line items should be purged.

  • Do all accounts require additional ZBB detail?

    • If no, consider having a separate ZBB plan type to store line item information for the account subset.

As indicated above, ZBB applications tend to require a separate line item dimension for storing the additional detail required. In addition, ZBB applications use smart lists extensively to capture non-numeric driver values used to evaluate the proposed budget items. The following lists the design characteristics of a ZBB application within Hyperion Planning:

  • ZBB applications require planning at a line item detail.  For non-ZBB applications, supporting detail is usually sufficient.  ZBB applications, however, need more control over how the line item detail is entered, making a sparse Line Item dimension necessary.

  • The number of line items for each account, which are known as packages in ZBB applications, will vary.  The supplies account or package might have 5 line items while the travel account or package might have 10 line items.  As a result, the Account dimension, which is typically called the Package dimension in ZBB, will need to be sparse to account for the differences in the amount of line item detail for a particular sub-account or sub-package.

  • ZBB applications typically store multiple drivers, both numeric and non-numeric, for a particular sub-package to provide sufficient justification for each budget line item.  Since the bulk of the ZBB values are calculated using driver values, the drivers are separated from the sparse Package dimension and placed in a dense Driver dimension to ensure the driver calculations are dense calculations.  The Driver dimension is also typically marked with the Accounts property member tag.

  • The design of the ZBB application assumes that all sub-package calculations are self-contained and have no dependencies on other sub-packages.  If the design does show sub-package dependencies, then the customer’s definition of a sub-package does not follow ZBB best practices and should be reviewed.

As described above, ZBB applications, unlike traditional budgeting applications, tend to place accounts (known as packages in ZBB applications) and account drivers in separate dimensions where the account members are stored in a sparse dimension and the account drivers are stored in a dense dimension marked with the Accounts property member tag. This design characteristic and the extensive use of smart lists to store non-numeric account driver values have the following performance impacts:

  • The main impact of the ZBB design from a PBCS and Planning perspective is that the Driver dimension stores the drivers across all sub-packages, and the drivers used for a particular sub-package is typically unique.  Therefore, the number of drivers populated for a particular sub-package will be a small subset of the total number of drivers.  As a result, a given block of cells will always be sparsely populated for a ZBB application, and performance could suffer by having PBCS and Essbase process more data cells than is necessary.

  • Another impact is on reporting ZBB data.  The non-numeric drivers are stored in the Driver dimension as smart lists.  A ZBB application can potentially have more than 50 smart lists, and each of these smart list items is typically queried in a standard or ad hoc report.  If each smart list is mapped to a dimension in an ASO reporting database, the ASO database could potentially have more than 60 dimensions.  However, the smart list reporting is typically mutually exclusive where a particular report will typically contain members of only one smart list dimension.  Users do not typically perform analyses across multiple smart list dimensions.

As described above, ZBB applications within Hyperion Planning will tend to have sparsely populated blocks as well as large numbers of smart lists that will be mapped to a dimension in an ASO reporting plan type. Careful consideration will need to be given when designing ZBB applications to minimize the impact of these potential performance items.

Open Source Comes To Boston


Guest Blogger Markus Eisele

DevNation, the open source, polyglot conference, is co-located with Red Hat Summit again this year and will take place June 21-25, 2015 in Boston. As usual, there is a lot of community and open source involved. You’ll find a mix of:


  • Technical sessions.

  • Hands-on labs.

  • Birds-of-a-feather panels specifically for developers.

  • Late-night hacking events.

Well-known keynote speakers include Venkat Subramaniam (Agile Developer, INC.), Brianna Wu (CEO of GIANT SPACEKAT) and Felix Ehm (CERN). "One thing that ties us together is the passion we share for programming," explains Venkat in a supporting interview published on the DevNation Blog. This is an overall theme for the second edition of the conference this year. Besides even broader coverage of different technologies and JVM-based languages, the newly formed external program committee was a big help in putting together a compelling agenda. Some of the well-known speakers in the Java community:

  • Simon Maple (@sjmaple)

  • Rabea Gransberger (@rgransberger)

  • Christian Kaltepoth (@chkal)

  • David Blevins (@dblevins)

  • Tonya Rae Moore (@TonyaRaeMoore)

  • Joel Tosi (@joeltosi)

The location will be the Hynes Convention Center in Boston, with plenty of space for all the amazing sessions. We have a lot of cool things planned: hacking events, Birds-of-a-Feather sessions, an evening event, keynotes, and plenty of room for networking and discussions. If you want to get a first impression of what all this awesomeness looks like, feel free to look at some of the recorded sessions from last year.

Registration is open, and if you use the code RKXGQS you will get a $150 discount as a frequent Java Source Blog reader.

Tivoli Storage Manager Supports Solaris 11.2



IBM recently announced support for Tivoli Storage Manager on Solaris 11.2. TSM Client is supported on both SPARC and X86, while TSM Server is supported only on SPARC.

This support is available in v7.1.1.200 of the product.

Sales Crediting and Why it Matters - Written by Sarah Wright


One of the most impactful ways of automating an incentive compensation process is something people often forget, or assume cannot be automated. Some customers refer to this as pre-processing or territory management or payment rules or book of business or simply my biggest HEADACHE. Most customers manage the process manually or with highly custom, hard-coded jobs maintained by an IT team. At Oracle, we call this process sales crediting and we have found a way to solve this problem.

The concept of sales crediting refers to the process of determining who gets credit for a certain event. In the context of a sales organization, an example would be: after an order is booked, the sales rep who sold the order gets credit for the sale. In some cases, this is a very straightforward process. However, in the reality of complex organizations today, the answer to “who gets credit?” is often not so straightforward. As an example, a current customer compensates, on average, nine individuals on a single transaction. They have inside sales reps, field reps, sales managers, account executives, product specialists, technical specialists, industry experts, project leaders, etc. that often work together to close a single deal. How do we know each of them should receive credit? And how much credit? Customers like this utilize Oracle’s unique capabilities with the Incentive Compensation solution to manage this process in an automated fashion.

Within Oracle’s Incentive Compensation solution, we have a robust crediting engine that automates the process of managing crediting rules. This engine is unique to our solution and a concept our customers are leveraging more and more. Our solution also includes an intuitively designed user interface that gives customers the ability to manage all of their crediting rules in a simple, easy to manage hierarchy. There is great flexibility in how customers can define their rules in order to accommodate extensive complexity.

The incentive compensation process often starts with an anonymous transaction, meaning we cannot identify credit receivers (individuals who earn credit) from the data contained on the transaction. As an example, we get an order transaction that contains the following information: Order Number: 123, Revenue: $50,000, Customer: ABC, Inc., Date of Sale: 4/1/2015, Product Sold: Red Widgets. We do not know who sold the order or who should ultimately receive credit. However, there are business rules in place for this. Let's say John Smith is the account owner of ABC, Inc., and every time an order is booked for that account, he gets credit. Additionally, so do his boss and his inside sales counterpart. Our solution will take in this transaction, process it through the defined business credit rules, and determine that all three individuals should receive credit.




In modern, complex organizations, these rules and definitions get exponentially complicated. The process of managing them manually is arduous and costly. One of our most complex customers compensates an average of 42 resources per transaction and worked hand in hand with Oracle to implement our sales crediting technology. This customer had a $15M spend on processes related to variable compensation and identified managing sales crediting as two-thirds of that spend. Once they implemented Oracle’s solution, they were able to cut their administrative costs in half.

Oracle’s unique solution to this problem is helping customers across industries cut costs and better align their sales crediting rules and incentive programs with their organizational goals and objectives.

Fluid UI Development Training Available

PeopleSoft's new Fluid user interface rocks.  The response from customers has been enthusiastic, and many would like to learn more about developing with Fluid.  Training is available from Oracle University both in the classroom and as live virtual training.  Note that Oracle University has scheduled classroom training in Belmont, California to coincide with our OpenWorld conference at the end of October.  If you are planning to attend OpenWorld, you have a great opportunity to attend this valuable training as part of your trip.

Oracle ZFS Storage Appliance Software Update OS 8.4


The new OS 8.4 software update for the Oracle ZFS Storage Appliance was released last week. 

My personal favorite change in this update is that the RESTful API service is now enabled by default.

REST Service

Before this update, users had to log in to the ZFS Storage Appliance and enable the REST service before issuing REST commands.   If they forgot to do this, REST commands would return the following error. :-(

503 Service Temporarily Unavailable

Now the REST API will always be available as long as the management web server is active.   This makes managing the ZFS Storage Appliance with the REST API just that much easier.

Software updates can be found at the following location:   https://wikis.oracle.com/display/FishWorks/Software+Updates

Here you will also find the list of all the other new features, Release Notes, the download procedure, and the minimum and recommended minimum software versions for updating to software version 2013.1.4.0. Prior to updating, please carefully review all release notes.


现代最佳实践,成长,及数字化转型


Why Best Practice May Not Be Good Enough

Steve Cox, VP, Oracle Applications Business Unit

Every software vendor claims to offer best practices. But a best practice that is not a modern best practice - one that does not harness enabling technologies such as cloud, mobile, and social - is not a true best practice, says Steve Cox, Vice President, Oracle Cloud Go-To-Market. He recently wrote the e-book Modern Best Practice Explained to make the case for why only modern best practice is good enough.

Q: In the new e-book you say the focus is on standardization. Why is standardization so important?

Steve: Standardization drives demonstrable results. It is the foundation of Henry Ford's assembly line and of Deming's Total Quality Management. More than a decade ago, standardizing common business processes was a key factor in Oracle realizing more than a billion dollars in profit.

The same applies in today's era of digital transformation. Any organization that wants to thrive must look at every process and every transaction and ask whether it can be handled digitally. These organizations share the same goal, because they all face consumer behavior that technology keeps reshaping, and they therefore need to keep reducing costs while making the customer experience more consistent.

Q: Within modern best practice, how does standardization help organizations with digital transformation?

Steve: With digital transformation as the goal, you have to ask: what do we need to standardize to get there? Modern best practice leverages the enabling technologies that are essential to every digital transformation - social, analytics, big data, mobile, and the Internet of Things. It can serve as a road map for the journey to digital transformation. For example, it can act as an illustrated guide to executing a social relationship management strategy: listening to customers comprehensively and improving engagement on the social networks they prefer, and carrying those practices across the whole organization rather than confining them to marketing.

Q: Why publish modern best practice on Oracle.com?

Steve: Oracle is committed to open standards. That means not only providing customers with complete end-to-end business process solutions, but also a commitment to sharing knowledge with them. Our modern best practice web pages and practical guides are exactly that kind of knowledge sharing: how to run a business the modern way.

Q: Which emerging technologies will drive the next generation of modern best practice?

Steve: There are many. Widespread adoption of devices such as indoor positioning systems will give rise to new forms of proximity marketing. The shift to in-memory analytics will let organizations of every size run the analyses they have always wanted to but could not because of system performance. Cheaper and more powerful hardware will let businesses grow peak-period revenue by processing more transactions. IT systems that adapt to the way people actually run their business will extend support to new devices, so that, for example, someone like me can approve an expense report on my watch during a cross-country run. And imagine what advances in 3D printing will mean for food producers' and manufacturers' production and supply chains.

Q: What is next for modern best practice?

Steve: Oracle has a wealth of knowledge resources, and we have only just started sharing them in our modern best practice format. Next we will publish more content covering e-commerce, sales performance, transportation, product innovation, projects, and more. We keep adding more solution demonstrations, related white papers, and executive-level insights to help our customers understand how modern best practice can guide the transformation of their business.


Jim Lein, Director, Oracle Cloud GTM

Modern best practice exploits new capabilities such as mobile, social, analytics, big data, and the Internet of Things, making it possible for your organization to achieve more, faster, and with fewer resources. It is flexible, supports growth and innovation, and uses new approaches to deliver consistently superior performance.

The opinions expressed here are my own and do not necessarily reflect the views of Oracle.


Alter Table Shrink Space Cascade and SmartScan


Over the years, updates can cause rows to become highly fragmented, sapping performance of Exadata table scans.

The offload server, and hence SmartScan, gets data to process 1 MB at a time. Because of this, SmartScan can only process row pieces that are available in the current 1 MB chunk it is working on. Unlike RDBMS table scans, SmartScan is not able to initiate disk I/Os to retrieve further row pieces and, even if it could, it is unlikely that they would be present on the same cell.  When SmartScan finds it needs a row piece for a projected column that is not present in the blocks available to it, it will apply predicates to the row pieces it does have, but if those predicates pass, it has to return the row unprocessed for the RDBMS to fetch the missing row pieces from the buffer cache.

There are three main tools available for cleaning up a segment (Alter Table Shrink, Alter Table Move, and export/import), but one of them isn't as helpful as you might have thought.  

Consider the following sequence of events where we update the 256th column to cause widespread fragmentation:

SQL> update t set c256 = 'abcdefghijklmnopqrstuvwxyz'; 
2000000 rows updated.
SQL> commit;
Commit complete.

SQL> analyze table t compute statistics;
Table analyzed.

SQL> select chain_cnt from dba_tables where table_name = 'T' and owner =
'FUSION';
CHAIN_CNT
----------
   2000000

SQL> select sum(blocks) from user_segments where segment_name = 'T';

SUM(BLOCKS)
-----------
     139264
SQL> alter table t enable row movement;
Table altered.
SQL> alter table t shrink space cascade;
Table altered.
SQL> analyze table t compute statistics;
Table analyzed.

SQL> select chain_cnt from dba_tables where table_name = 'T' and owner =
'FUSION';

CHAIN_CNT
----------
   1970068

1 row selected.

Note: 'chain_cnt' does not count chained rows; rather, it counts rows whose row pieces are chained across more than one block.  A row that is in three pieces, where all three pieces are in the same block, has a chain_cnt of zero.

In this particular artificial scenario Shrink has not gained us much reduction in space used, and more importantly it hasn't reduced the kind of fragmentation that affects SmartScan performance.

This is because Shrink works in two phases. In Phase 1, the segment is scanned down from the end towards the beginning. Rows with their head piece in the currently scanned block are moved together with all their row pieces, with the segment scanned from the beginning upwards to find space for the entire row. When no more entire rows can be moved, Phase 2 starts scanning down again from the end of the segment, trying to move individual row pieces to blocks with free space. This means that while Phase 1 can potentially reduce chaining for relocated rows, Phase 2 is very unlikely to reduce the chain count and can in fact increase it. The moral is that Shrink really is for freeing up blocks close to the High Water Mark, not for cleaning up fragmented rows.

Now let's try Alter Table move with the same segment:

SQL> alter table t move; 

Table altered.

SQL> analyze table t compute statistics;
Table analyzed.

SQL> select chain_cnt from dba_tables where table_name = 'T' and owner =
'FUSION';
CHAIN_CNT
----------
     45976
1 row selected.

SQL> select sum(blocks) from user_segments where segment_name = 'T';
SUM(BLOCKS)
-----------
      92160

1 row selected.

OK, that did what we hoped: more space has been reclaimed but more importantly for SmartScan, the number of fragmented rows has been reduced considerably.

With the fix for bug 19433200, the mechanics of Shrink have been reworked and it is now better at reducing the chain_cnt. However, even with the improvements made, when faced with heavily fragmented rows, Alter Table Move or export/import are likely to provide significantly better table scan performance with SmartScan.

Browse Content for Oracle OpenWorld 2015


Oracle OpenWorld is getting closer! Proof? We just launched the Content Catalog for the 2015 event.

No matter your business, Oracle has you covered. We've curated content into targeted programs to help you get the most from your event experience.

The programs for 2015 include:

If you haven't already, take some time to explore the new interface for the content and programs. Although you can't customize your schedule with the tool just yet, we hope you enjoy the wide variety of content, tailored to your role, industry or line of business. 

Happy content shopping!

PS: If you have yet to register for Oracle OpenWorld 2015, you still have time to take advantage of the Super Saver rate. The $800 discount is our best price for the event--it doesn't get better than this!  

Memory Usage with Oracle Database In-Memory


We often get asked the same two questions at conferences and customer presentations: can't I just cache my tables in the buffer cache or put them in the keep pool, and how much memory do I need to allocate to the IM column store? While not directly related, I'm going to try to answer both questions in this post. First let's tackle the issue of "in-memory".

If all of my data is "in-memory" in the buffer cache then why do I need Database In-Memory?

The problem with this type of reasoning is that it ignores the real key to Database In-Memory and that is the columnar format, along with other optimizations that are part of the columnar format of Database In-Memory like compression, storage indexes and SIMD vector processing. The fact that the column store is in-memory is essentially just an enabler for the columnar format.

Let's take a look at an example to see what I mean. The following query makes use of our SSB schema, and specifically the LINEORDER table. The LINEORDER table has 23,996,604 rows and has been fully populated into the IM column store and fully populated into the buffer cache as well. First let's see the query run with a NO_INMEMORY hint against the buffer cache:
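
The post illustrates this with SQL*Plus output; as a rough sketch of the kind of statements involved (the column list and predicate below are hypothetical stand-ins, not the post's actual query):

    -- Assumed setup: mark the table for the IM column store and trigger population
    alter table lineorder inmemory memcompress for query low;
    select /*+ full(lineorder) */ count(*) from lineorder;

    -- Buffer cache (row format) run: force the row path with the NO_INMEMORY hint
    select /*+ NO_INMEMORY */ lo_orderkey, lo_custkey, lo_revenue
      from lineorder
     where lo_orderkey = 12345678;

    -- In-memory columnar run: the identical statement with the hint removed
    select lo_orderkey, lo_custkey, lo_revenue
      from lineorder
     where lo_orderkey = 12345678;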


We can see that we performed a full table scan of the LINEORDER table and that the query took 2.45 seconds on a database running in VirtualBox on my laptop. In the statistics section, note that we did no physical I/O, and also note the amount of CPU used by the query. Next let's look at the same query run against the IM column store. The only change that I've made to the query is to remove the NO_INMEMORY hint.


Now we see that the execution plan tells us that we did a full in-memory scan of the LINEORDER table and that it took no time (of course this isn't really true, but it took so little time that SQL*Plus didn't record any time for the execution). We can also see that we only had to look at one IMCU (IM scan CUs memcompress for query low - IM scan CUs pruned) and that we only had to return one row (IM scan rows projected). Now obviously this is a best-case scenario, but it highlights the difference that the columnar format makes with Database In-Memory. One other thing to note is to look at the CPU usage. Look at how much less CPU was consumed for the same query.

What if we actually had to scan all of the values for the columns in the query? In other words, what if we had a query that couldn't take advantage of storage indexes and had to scan all of data for the columns in the query in all of the IMCUs for the LINEORDER table in the IM column store? Will that still be faster than scanning the data in the buffer cache? Recognizing that this is a contrived example, I think it is still worth exploring in light of our initial question.

I've taken the first query and removed the where clause predicates and surrounded each column returned with a COUNT function. This will generate a full table scan of the LINEORDER table for the three column values and return the total count for each column. Here's an example for the buffer cache:
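
A sketch of the reshaped query (same hypothetical columns as above); the buffer cache run keeps the NO_INMEMORY hint and the in-memory run drops it:

    -- No WHERE clause, so storage indexes cannot prune: every IMCU must be scanned
    select /*+ NO_INMEMORY */ count(lo_orderkey), count(lo_custkey), count(lo_revenue)
      from lineorder;

    select count(lo_orderkey), count(lo_custkey), count(lo_revenue)
      from lineorder;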


Now here's an example of the query against the IM column store:


In this example we had to look at all of the data in the IMCUs for the columns in the query. Note that now the IM scan rows projected statistic shows all of the "rows" being returned. The in-memory query runs in 1.15 seconds versus 3.42 seconds for the "row" version. We still did no physical I/O so both formats were truly "in-memory".

However, we still see that the IM column store is significantly faster and more efficient than accessing the data in row format from the buffer cache. Hopefully this helps answer the question of why just placing your row-based data "in-memory" is not the same thing as using Database In-Memory.

And what about our second question?

How much memory do I need to allocate to the IM column store?

The simple answer is that you need enough memory allocated to hold the objects you want to run analytic queries against. Normally this generates two more questions: how do I know how much memory each object will consume in the IM column store, and how do I tell which objects should be placed into the IM column store?

The answer to the first question is the Compression Advisor. The Compression Advisor can be run to determine how much space an object will consume in the IM column store based on the compression level chosen. But be warned: only the 12c Compression Advisor is aware of the new In-Memory compression techniques.
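
As an illustration of how such an estimate might be obtained, a minimal PL/SQL sketch using DBMS_COMPRESSION.GET_COMPRESSION_RATIO with the 12.1.0.2 In-Memory compression constants (the schema, table, and scratch tablespace names here are placeholders):

    set serveroutput on
    declare
      l_blkcnt_cmp    pls_integer;
      l_blkcnt_uncmp  pls_integer;
      l_row_cmp       pls_integer;
      l_row_uncmp     pls_integer;
      l_cmp_ratio     number;
      l_comptype_str  varchar2(100);
    begin
      -- estimate LINEORDER's footprint at MEMCOMPRESS FOR QUERY LOW
      dbms_compression.get_compression_ratio(
        scratchtbsname => 'USERS',      -- placeholder scratch tablespace
        ownname        => 'SSB',        -- placeholder schema
        objname        => 'LINEORDER',
        subobjname     => null,
        comptype       => dbms_compression.comp_inmemory_query_low,
        blkcnt_cmp     => l_blkcnt_cmp,
        blkcnt_uncmp   => l_blkcnt_uncmp,
        row_cmp        => l_row_cmp,
        row_uncmp      => l_row_uncmp,
        cmp_ratio      => l_cmp_ratio,
        comptype_str   => l_comptype_str);
      dbms_output.put_line('Estimated ratio: ' || l_cmp_ratio || ' (' || l_comptype_str || ')');
    end;
    /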

The answer to the second question is to start with the In-Memory Advisor. Between the output of the In-Memory Advisor and the knowledge of your application you should be able to determine which objects are the best candidates for the IM column store.

Let's dig a little deeper though. Now that you know how much memory you will need for the IM column store, you need to figure out where you're going to get it. Unfortunately, the tendency that we have seen is to take the memory from existing usage.

In general this is a bad approach. Very few existing systems have unallocated memory on their database servers. If you are in this category then by all means use that memory, but more often than not the tendency is to "steal" the memory from something else. For instance the buffer cache, the SGA in general, or the memory allocated for the PGA. The problem with these approaches is that it is very likely to impact the performance of your existing workload. We recommend that the memory required for the IM column store should be in addition to your existing database memory allocations. Here is a formula that reflects this goal:

SGA_TARGET = SGA_TARGET(original) + INMEMORY_SIZE
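
For example, if the original SGA_TARGET was 20G and you plan an 8G column store, the settings might look like this (a sketch; in 12.1 INMEMORY_SIZE is static and takes effect only after a restart):

    alter system set inmemory_size = 8G  scope=spfile;
    alter system set sga_target    = 28G scope=spfile;   -- 20G original + 8G column store
    -- restart the instance for INMEMORY_SIZE to take effect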

In addition, sufficient PGA must be allocated to handle the memory requirements of parallel processing, sorting and aggregation that can occur due to the potentially large amounts of data being queried by in-memory analytical queries without spilling to disk. This is in addition to the PGA requirements of your existing workload. A good way to view your current PGA usage is to use your AWR reports from your running systems. In the Advisory Statistics section there is a PGA Memory Advisory section. The data from this section can provide good information about the current PGA usage. Alternatively you can provision enough memory for the maximum number of parallel server processes allowed on the system to allocate their full amount of memory using the following formula.

PGA_TARGET = MAX_PARALLEL_SERVERS * 2GB
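
The advisory data behind the AWR PGA Memory Advisory section can also be sampled directly from V$PGA_TARGET_ADVICE; a minimal sketch:

    select round(pga_target_for_estimate / 1024 / 1024) as pga_target_mb,
           pga_target_factor,
           estd_pga_cache_hit_percentage,
           estd_overalloc_count
      from v$pga_target_advice
     order by pga_target_factor;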

Hopefully this will help in understanding and planning for memory usage when implementing Database In-Memory.
