Welcome to Oracle Blogs

Welcome to the Oracle blogging community!



    https://apexapps.oracle.com/pls/apex/f?p=44785:149:0::::P149_EVENT_ID:5448

    We are pleased to announce that enrollment is open for our newest Oracle Massive Open Online Course (MOOC): Java Coding and Concepts.

    The course starts April 27th! Enrollment is free!

    Enroll

    Do you work with code periodically, but program by hacking other people's code? Are you new to programming and want to develop a deep understanding of key Java programming concepts without falling asleep in class? Would you like to see how Oracle cloud technology can accommodate the development needs of a project? If so, you may find this MOOC very helpful!

    This is our first MOOC designed for a foundations-level audience. It employs a game-based learning methodology to build your understanding and prepares you to think through coding problems far better than traditional lecturing. You'll learn key Java programming concepts, go behind the scenes to understand development practices, and apply your knowledge in coding labs. This course starts April 27th, and is short - just 4 weeks. But you will learn a lot, including:

    • Object Oriented Thinking and Class Design
    • Static vs Instance Variables
    • Inheritance
    • Lambda Expressions

    So click the Enroll button - read the full description of the course and watch the video - and we'll see you on April 27th!
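    As a quick taste of two of the topics listed above, here is a tiny, self-contained Java sketch (purely illustrative, not actual course material): a small class hierarchy with an overridden method, plus a lambda expression mapped over a stream.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch only (not course material): inheritance plus a lambda.
class Vehicle {
    String describe() { return "vehicle"; }
}

class Sedan extends Vehicle {
    @Override
    String describe() { return "sedan"; }   // inheritance: Sedan overrides Vehicle
}

class ConceptsTaste {
    // A lambda expression (v -> v.describe()) applied to each element of a list.
    static List<String> describeAll(List<Vehicle> vehicles) {
        return vehicles.stream()
                .map(v -> v.describe())
                .collect(Collectors.toList());
    }
}
```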

     



    In this post I am going to create a simple SpringBoot Application and deploy it to Oracle's Application Container Cloud Service. Oracle ACCS is a polyglot application hosting environment for a number of platforms including Java SE and Node.

    In this first part of the blog, I am going to create the SpringBoot application and test it locally.

    Requirements

    • Oracle ACCS account. You can get a 30-day trial for Oracle Cloud services including ACCS from here
    • A favorite text editor
    • JDK 1.8+
    • Maven 3.0+
    • cURL 7+

    Create the Directory Structure

    Here I create the directory structure. Choose a name for app_home as you like.

    > mkdir -p <app_home>/src/main/java/demo

    pom.xml

    I use Maven for building the project and installing the dependencies. Under app_home create a pom.xml file and insert the code below.


    <project>
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>sample-accs</artifactId>
      <version>1.0</version>
      <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.3.6.RELEASE</version>
      </parent>
      <dependencies>
        <dependency>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-data-rest</artifactId>
        </dependency>
        <dependency>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
          <groupId>com.h2database</groupId>
          <artifactId>h2</artifactId>
        </dependency>
      </dependencies>
      <build>
        <plugins>
          <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
          </plugin>
        </plugins>
      </build>
    </project>

    Set up the application

    Now I create the following Java artifacts under <app_home>/src/main/java/demo. For simplicity I use an H2 in-memory database for the CRUD operations, which is referenced as a dependency in the pom.xml file.

    • Car.java: A class that represents a car entity with brand, model, and year.
    • CarRepository.java: An interface that enables CRUD operations in the application.
    • Application.java: The main class that runs the SpringBoot application.

    Insert the following code into the Car.java file.


    package demo;

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    @Entity
    public class Car {
        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        private long id;

        private String brand;
        private String model;
        private String year;

        public String getBrand() {
            return brand;
        }

        public void setBrand(String brand) {
            this.brand = brand;
        }

        public String getModel() {
            return model;
        }

        public void setModel(String model) {
            this.model = model;
        }

        public String getYear() {
            return year;
        }

        public void setYear(String year) {
            this.year = year;
        }
    }
    
    

    The following goes into the CarRepository.java file. SpringBoot takes care of implementing this interface, which allows us to perform CRUD operations on our entity.


    package demo;

    import org.springframework.data.repository.CrudRepository;
    import org.springframework.data.rest.core.annotation.RepositoryRestResource;

    @RepositoryRestResource
    public interface CarRepository extends CrudRepository<Car, Long> {}
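    The artifact list above also names Application.java, but its listing did not survive here. A minimal version, sketched as standard SpringBoot boilerplate consistent with the surrounding code (the class and package names come from the post; the body is my assumption), would look like:

```java
package demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Assumed minimal entry point: @SpringBootApplication enables
// auto-configuration and component scanning for the demo package.
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
```

    With something like this in place, the application can typically be started locally with `mvn spring-boot:run` from <app_home>.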

    Let’s POST two cars:

    > curl -H "Content-Type: application/json" -X POST -d '{"brand":"Volvo","model":"C80","year":"2015"}' localhost:8080/cars
    > curl -H "Content-Type: application/json" -X POST -d '{"brand":"Peugeot","model":"308","year":"2016"}' localhost:8080/cars

    And GET them:

    > curl localhost:8080/cars
    {
      "_embedded" : {
        "cars" : [ {
          "brand" : "Volvo",
          "model" : "C80",
          "year" : "2015",
          "_links" : {
            "self" : { "href" : "http://localhost:8080/cars/1" },
            "car" : { "href" : "http://localhost:8080/cars/1" }
          }
        }, {
          "brand" : "Peugeot",
          "model" : "308",
          "year" : "2016",
          "_links" : {
            "self" : { "href" : "http://localhost:8080/cars/2" },
            "car" : { "href" : "http://localhost:8080/cars/2" }
          }
        } ]
      }
    ......

    The application is running fine locally. You can now stop the server with Ctrl+C in the command line. In the next part I will package the application to make it ready for deployment to Oracle Cloud.





    Today I’m focusing my attention on ADF naming conventions.

    Besides this post, I will write two more in order to cover all areas of this subject as thoroughly as possible. In the last post I will provide a PDF with all the information covered in this series of posts.

    Motivation

    During ADF application development we may encounter many development challenges. One of these challenges is implementing a naming convention to be used by all project developers during implementation.

    Each developer has their own background and their own ideas on how things should be implemented. We want them to have freedom of thought in order to find the best approaches to reach the goal, but what we really don’t want is multiple ways of doing the same thing; otherwise we might face really difficult challenges in the future, namely around software maintenance and bug tracing.

    Also, the developer roster may change during project development. For those who join, we need to provide proper training. If we follow conventions, training periods will be shorter and new developers will be brought up to speed more quickly as they familiarize themselves with the application.

    After the application is deployed to the production environment, we face a new challenge: maintenance and support. Big headaches usually appear right there, and they can be even bigger if we don’t follow these important naming conventions in our applications’ code.

    I have found some information here about this topic, but we needed more, and we needed to adapt it to our projects, so we decided to define our own ADF Naming Conventions, to be used organization-wide on our ADF projects.

    In this post I will share my experience and our ADF Naming convention rules regarding the following topics:

    • Application & Project Naming

    • Package Naming

    • Business Components Naming

    Read the complete article here.

    WebLogic Partner Community

    For regular information and to become a member of the WebLogic Partner Community, please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.


    Technorati Tags: WebLogic Community,Oracle,OPN,Jürgen Kress




    Oracle Applications User Experience Senior Director Ultan O’Broin (@ultan) keeps his finger on the pulse of the startup scene in EMEA with an eye to enabling that community with the OAUX outreach machine. Here he talks about user experience (UX) enablement that's on offer to startups to accelerate their SaaS and PaaS opportunities.

    When Life Gives You Lemons, Pivot

    Pivoting: That realization that a permanent income is preferable to remaining just a fascination.

    After over two decades of experience in the tech industry, half of it in the Valley, I still find it hard to predict what’s going to go down on any given day. Even if things go slightly pear-shaped, I rarely don’t have a #lovemyjob day. Having a sense of humor always helps . . . .

    Dogpatch Labs tech co-working space, Dublin. As good a community of happening startups as anywhere in Silicon Valley (Image: Ultan O'Broin).

    This is probably why HBO's Silicon Valley is favorite viewing of mine. I can relate to it: Not only does it resonate with my experience, it goes past the tech jargon and cuts close to the bone with those #ouch moments.

    In the first season of Silicon Valley, there's the famous TechCrunch Disrupt scene where Pied Piper's business development head Jared Dunn responds to a "life giving you lemons" moment by advising the crew that their startup needs to “pivot” and pitch their middle-out compression solution in a different direction. Pivoting is about being adaptable and finding a good fit with the market. Read the complete article here.

    SOA & BPM Partner Community

    For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.


    Technorati Tags: SOA Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress



    General Ledger Summary Accounts sum up balances for multiple detail accounts of a single ledger.
    They can be used for online summary inquiries, for higher level budgetary control options or to speed up the processing of financial reports, Mass Allocations, Recurring Journals and many other GL processes.

    A new Diagnostic is available to collect data related to Summary Templates and associated Summary Accounts, detail accounts, and potential overlapping hierarchy issues.

    Review details on how to install and use this Diagnostic in Note 2190617.1, R12: General Ledger Summary Accounts Diagnostic.




    Across the globe, companies have embraced cloud computing and the agility it provides. By constantly recombining technology and cloud solutions, enterprises are able to provide an ongoing stream of updated services to engage their users. These new cloud-based environments are great for end users and developers, but very challenging for IT management, since things are constantly changing.

    It’s easy to see why IT organizations can’t keep up. They are drowning in monitoring tools and data, but have no insight. Previous generations of management tools expect that human intelligence will draw conclusions out of the data, but human operators are already overwhelmed with the velocity and volume of alerts and data coming their way, and so they get tired and miss things.

    Other tools have attempted to apply more advanced analytical techniques, but have only done so to subsets of the data because it’s too compute-intensive to do it across the board, so human operators are still required to stitch together information out of the data silos.

    It’s time for a new generation of systems management. There is a better way, and it is predicated on two very simple ideas:

    • Put all the data in one place
    • Use Machine Learning to interpret all the data, all the time.

    Oracle Management Cloud (OMC) is a suite of next-generation integrated monitoring, management, and analytics cloud services built on a scalable big data platform that provides real-time analysis and deep technical and business insights. With OMC, we can eliminate multiple information silos, resolve application issues faster, and deliver better applications.

     

    With Oracle Management Cloud, you get a complete view into all applications and systems across all environments, on premises and in different clouds, enabling smarter insight and swifter action. That means less risk, lower costs, and less complexity:

    • Use OMC’s machine learning to baseline performance of your existing on-premises environments
    • Use OMC to monitor post-migration and ensure you exceed the baseline and continue to improve
    • Going forward, use OMC to monitor the end-to-end services which may traverse your (new) IaaS environments as well as other heterogeneous clouds and on-premises environments.

    Join this webcast to learn how services built on top of the Oracle Management Cloud unified platform work both separately and together to cover the complete range of systems management activities, from discovery and monitoring to managing change, configuration and compliance, to automated remediation and orchestration, to IT operations analytics and capacity planning.

    Agenda:

    • Modern IT Operational Challenges
    • Next Generation Systems Management – Opportunity & Momentum
    • Oracle Management Cloud Platform Services
    • Use Cases & Personas
    • Partner Opportunities
    • Summary – Q&A

    Presenter:  

    Victor Ameh – FMW Consultant – Oracle Partner Hub Migration Center Nigeria

    Demo: Application Performance Diagnostics from APM Alert to Application Logs

    This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend.


    Date: Thursday, April 27th, 10am CEST (9am BST/11am EEST)


    REGISTER HERE:

    For any questions please contact us at partner.imc@beehiveonline.oracle.com



    On Thursday, April 20th at 10:00 am PT, @oraclewebcenter and the Content and Experience Cloud team will be hosting a live Twitter Chat on “Is headless CMS signaling the end of WCM?” to explore how relevant web content management is with the introduction of decoupled/headless content management systems.

    Much like a live panel discussion, just a bit more free-form, Twitter Chats are concise live conversations on Twitter on pre-determined topics at a set date and time. While the panelists discuss, comment and tweet about the topic, other ‘tweeters’ interested in this topic (or those who are following us on Twitter) can follow along, raise a question, provide feedback or commentary or just chime in as they feel. We will be hosting the Twitter Chat @oraclewebcenter that has over 26K followers, so it should be an exciting discussion with good visibility. And we will then archive the conversation feed on our Digital Experience Platform blog for anyone to see and reference.

    As a valued customer, we would like to invite you to participate in the live discussion. If you/your peers and/or colleagues have an active Twitter handle and are interested in participating, will you please confirm by sending an RSVP to kellsey.ruppel@oracle.com?

    Additional details for how the Twitter Chat will work can be found below.

    If you aren’t familiar with what a Twitter Chat is, a Twitter Chat is where a group of Twitter users meet at a pre-determined time to discuss a certain topic, using a designated hashtag (#) for each tweet contributed. A host or moderator will pose questions (designated with Q1, Q2…) to prompt responses from participants (using A1, A2…) and encourage interaction among the group. Chats typically last an hour. At the most basic level, you can participate in a Twitter chat simply by entering the hashtag into a Twitter search and interacting with people there. But we would encourage you to use Tweetchat or another tool to organize and filter tweets into a stream for easier conversing. One of the major benefits of using Tweetchat is that it automatically adds the hashtag to your tweet, which can save you lots of time—Twitter Chats move fast! It also refreshes in real time.

    Here are some basic instructions to participate in the Twitter Chat:

    1. At the scheduled chat time, log onto www.tweetchat.com, or whatever other tweet chat tool you like, using your Twitter name and password. Enter the hashtag for the chat at the top of the screen (e.g. we are using #contentdgtl for this Twitter Chat).

    2. Let your followers know you will be participating in a chat, then introduce yourself or wait for the chat to begin. The moderator (@oraclewebcenter) will start the conversation by asking a question. Here’s an example: Q1 Why is headless CMS gaining popularity? #contentdgtl

    3. When you answer questions, make sure you indicate which question you are answering. For example: A1 This model allows breakthrough user experiences & gives developers great flexibility to innovate #contentdgtl

    4. Remember to add the chat hashtag (e.g. #contentdgtl) to the end of your tweets.

    5. Don’t be afraid to add your thoughts to the topic, ask follow-up questions, observe or just “retweet” what others are saying. As long as your questions are on topic, they are welcome during the Twitter Chat.

    We are so excited to have you participate in this Twitter Chat!

    • Topic: Is headless CMS signaling the end of WCM?

    • Date: Thursday, April 20

    • Time: 10:00am PT / 1:00pm ET

    • Hashtag: #contentdgtl

    • Host: @oraclewebcenter


    Dear friends!

    Thank you so much for reading us!

    We are happy to announce that our blog will soon become even more convenient, as we are moving to a new platform in the near future.

    Updating the content may take a few weeks, so we apologize in advance if you cannot open some of the posts. All posts will be available to read again after the update.

    Stay in touch and follow our news!

    Best regards,

    the Oracle blog team in Russia and the CIS



    Chandan Ghosh describes McGraw Hill's journey to the cloud and why Oracle's Integrated Cloud platform, with high performance, massive scalability, and reduced costs, was the best option for moving their entire IT footprint to the Oracle Cloud.

    McGraw Hill has been experiencing rapid growth in their digital educational subscriptions and needed a way to speed up their delivery model. Find out how they were able to set up environments in only a few hours versus the 30 days it used to take. With the simplicity of Oracle Cloud, they are also able to keep senior IT staff focused on value-added work like creating new products instead of maintenance and administrative tasks. All this translates to a better bottom line and an improved experience for their customer: the modern digitally connected student.



    The Oracle Data Integration Team is pleased to announce the availability of

    Oracle Data Integrator Cloud Service (ODICS) 17.2.1.

    Release Highlights


    • ODICS 17.2.1 now includes ODI 12.2.1.2.6 (PS2+)

    • RESTful Services and Business Intelligence Cloud Service are supported out of the box

    • Salesforce.com connectivity is available after applying patch 24622481 (available on MOS)

    • For a complete list of all the new features included in ODI 12.2.1.2.6, please refer to the following doc: New Features (PDF)

    Available Collateral


    • ODICS homepage on cloud.oracle.com

    • ODICS Data Sheet

    • Blog Post on Oracle Data Integrator Cloud Service

    • Webcast: Introducing Oracle Data Integrator Cloud Service

    • PM Webcast: Oracle Data Integrator Cloud Service

    • Press Release: Oracle Launches Cloud Service to Help Organizations Integrate Disparate Data and Drive Real-Time Analytics

    • Press Article: Oracle Adds Data Integrator Cloud Service to Cloud Platform Portfolio

    Additional Information


    For additional information about Oracle Data Integrator Cloud Service please visit the following resource:

    ODICS Documentation



    The Cellmemory Stats in RDBMS 

    The RDBMS stats for Cellmemory are designed to closely follow the pattern used by the Inmemory stats.

    Query Stats 

    Each column in each 1 MB of disk blocks is rewritten into one IMC-format column CU in flash, and a set of column CUs comprises an overall Compression Unit, so these stats reflect the number of 1 MB rewrites that were processed (not the number of column CUs).

    1. "cellmemory IM scan CUs processed for query"
      - #1 MB chunks scanned in MEMCOMPRESS FOR QUERY format
    2. "cellmemory IM scan CUs processed for capacity"
      - #1 MB chunks scanned in MEMCOMPRESS FOR CAPACITY format
    3. "cellmemory IM scan CUs processed no memcompress"
      - #1 MB chunks scanned in NO CELLMEMORY format (12.1.0.2 format)

    Load Stats

    1. "cellmemory IM load CUs for query"
      - #1 MB chunks successfully rewritten from 12.1.0.2 to MEMCOMPRESS FOR QUERY format  
    2. "cellmemory IM load CUs for capacity"
      - #1 MB chunks successfully rewritten from 12.1.0.2 to MEMCOMPRESS FOR CAPACITY format
    3. "cellmemory IM load CUs no memcompress"
      - #1 MB chunks successfully rewritten into 12.1.0.2 format

    Before a rewrite happens, a routine is called that looks through the blocks in the 1 MB chunk and determines whether it is eligible for rewrite. Reasons it may not be include transactional metadata from the commit cache, the presence of block formats that can't be rewritten (although this list is getting smaller with each rpm), and the amount of space the rewrite will take up.

    A rewrite into 12.1.0.2 format must fit in the original 1 MB of flash cache. An IMC format rewrite is not permitted to exceed 8 MB. This limit is highly unlikely to be reached by MEMCOMPRESS FOR CAPACITY, but could be reached when trying to rewrite HCC blocks with much greater than 8X original compression into MEMCOMPRESS FOR QUERY format. This is one reason that the default is FOR CAPACITY.

    1. "cellmemory IM scan CUs rejected for query"
      - #1 MB chunks that could not be rewritten into MEMCOMPRESS FOR QUERY for whatever reason
    2. "cellmemory IM scan CUs rejected for capacity"
      - #1 MB chunks that could not be rewritten into MEMCOMPRESS FOR CAPACITY for whatever reason
    3. "cellmemory IM scan CUs rejected no memcompress"
      - #1 MB chunks that could not even be rewritten into 12.1.0.2 format for whatever reason

     



    Consider the scenario where you execute a query. You expect it to be fast, typically subsecond, but now it does not return until after 50 seconds (innodb_lock_wait_timeout seconds), and then it returns with an error:

    mysql> UPDATE world.City SET Population = Population + 999 WHERE ID = 130;
    ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
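    The server's own advice in that error message is to retry the transaction, and applications commonly do so a bounded number of times. A minimal Java sketch of that idea (a hypothetical helper, not from this post; it only inspects MySQL's vendor error code 1205):

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

// Hypothetical helper: retries a unit of work when MySQL reports error 1205
// (lock wait timeout), up to a fixed number of attempts. Any other SQL error
// is rethrown immediately.
class LockWaitRetry {
    static final int ER_LOCK_WAIT_TIMEOUT = 1205;

    static <T> T withRetry(Callable<T> work, int maxAttempts) throws Exception {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return work.call();
            } catch (SQLException e) {
                if (e.getErrorCode() != ER_LOCK_WAIT_TIMEOUT) {
                    throw e; // only retry lock wait timeouts
                }
                last = e;
            }
        }
        throw last; // all attempts timed out
    }
}
```

    In a real application the retried unit of work should re-run the whole transaction, not just the failing statement, as the error message suggests.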

    You continue to investigate the issue using the sys.innodb_lock_waits view or the underlying Information Schema tables (INNODB_TRX, INNODB_LOCKS and INNODB_LOCK_WAITS).

    Note: The above Information Schema tables with lock and lock wait information have been moved to the Performance Schema in MySQL 8.0 as the data_locks and data_lock_waits tables. The sys schema view, however, works the same.

    However, when you query the lock information, the blocking query is returned as NULL. What does that mean, and how do you proceed from there to get information about the blocking transaction?

    Setting Up an Example

    Before proceeding, let's set up an example which will be investigated later in the blog. The example can be set up as follows (do not disconnect Connection 1 when the queries have been executed):

    1. Connection 1:
      Connection 1> START TRANSACTION;
      Query OK, 0 rows affected (0.00 sec)
      
      Connection 1> UPDATE world.City SET Population = Population + 1 WHERE ID = 130;
      Query OK, 1 row affected (0.00 sec)
      Rows matched: 1  Changed: 1  Warnings: 0
      
      Connection 1> UPDATE world.City SET Population = Population + 1 WHERE ID = 131;
      Query OK, 1 row affected (0.00 sec)
      Rows matched: 1  Changed: 1  Warnings: 0
      
      Connection 1> UPDATE world.City SET Population = Population + 1 WHERE ID = 132;
      Query OK, 1 row affected (0.00 sec)
      Rows matched: 1  Changed: 1  Warnings: 0
      
      Connection 1> UPDATE world.City SET Population = Population + 1 WHERE ID = 133;
      Query OK, 1 row affected (0.00 sec)
      Rows matched: 1  Changed: 1  Warnings: 0
    2. Connection 2 (blocks for innodb_lock_wait_timeout seconds):
      Connection 2> UPDATE world.City SET Population = Population + 999 WHERE ID = 130;
    3. While Connection 2 is still blocking, the following output from sys.innodb_lock_waits shows that the blocking query is NULL:
      Connection 3> SELECT * FROM sys.innodb_lock_waits\G
      *************************** 1. row ***************************                wait_started: 2017-04-15 09:54:53                    wait_age: 00:00:03               wait_age_secs: 3                locked_table: `world`.`City`                locked_index: PRIMARY                 locked_type: RECORD              waiting_trx_id: 2827         waiting_trx_started: 2017-04-15 09:54:53             waiting_trx_age: 00:00:03     waiting_trx_rows_locked: 1   waiting_trx_rows_modified: 0                 waiting_pid: 5               waiting_query: UPDATE world.City SET Populati ... opulation + 999 WHERE ID = 130             waiting_lock_id: 2827:24:6:41           waiting_lock_mode: X             blocking_trx_id: 2826                blocking_pid: 3              blocking_query: NULL            blocking_lock_id: 2826:24:6:41          blocking_lock_mode: X        blocking_trx_started: 2017-04-15 09:54:46            blocking_trx_age: 00:00:10    blocking_trx_rows_locked: 4  blocking_trx_rows_modified: 4     sql_kill_blocking_query: KILL QUERY 3
      sql_kill_blocking_connection: KILL 3
      1 row in set, 3 warnings (0.00 sec)
      
      Connection 3> SHOW WARNINGS;
      +---------+------+-----------------------------------------------------------------------------------------------+
      | Level   | Code | Message                                                                                       |
      +---------+------+-----------------------------------------------------------------------------------------------+
      | Warning | 1681 | 'INFORMATION_SCHEMA.INNODB_LOCK_WAITS' is deprecated and will be removed in a future release. |
      | Warning | 1681 | 'INFORMATION_SCHEMA.INNODB_LOCKS' is deprecated and will be removed in a future release.      |
      | Warning | 1681 | 'INFORMATION_SCHEMA.INNODB_LOCKS' is deprecated and will be removed in a future release.      |
      +---------+------+-----------------------------------------------------------------------------------------------+
      3 rows in set (0.00 sec)
      The warnings only occur in 5.7.14 and later, as the InnoDB lock tables are being moved to the Performance Schema in MySQL 8.0. It is recommended to use the sys.innodb_lock_waits view, as that is updated accordingly in MySQL 8.0.

    Investigating Idle Transactions

    To investigate idle transactions, you need to use the Performance Schema to get this information. First determine the Performance Schema thread id for the blocking transaction. For this you need the blocking_pid, in the above example:

                    blocking_pid: 3

    and use this with the performance_schema.threads table like:

    Connection 3> SELECT THREAD_ID FROM performance_schema.threads WHERE PROCESSLIST_ID = 3;
    +-----------+
    | THREAD_ID |
    +-----------+
    |        28 |
    +-----------+
    1 row in set (0.00 sec)

    For the following queries, substitute the thread id found above in the THREAD_ID = ... WHERE clauses.

    To get the latest query executed, use the events_statements_current table or the sys.session view:

    Connection 3> SELECT THREAD_ID, SQL_TEXT FROM performance_schema.events_statements_current WHERE THREAD_ID = 28;
    +-----------+------------------------------------------------------------------+
    | THREAD_ID | SQL_TEXT                                                         |
    +-----------+------------------------------------------------------------------+
    |        28 | UPDATE world.City SET Population = Population + 1 WHERE ID = 133 |
    +-----------+------------------------------------------------------------------+
    1 row in set (0.00 sec)

    or:

    Connection 3> SELECT * FROM sys.session WHERE thd_id = 28\G
    *************************** 1. row ***************************
                    thd_id: 28
                   conn_id: 3
                      user: root@localhost
                        db: NULL
                   command: Sleep
                     state: NULL
                      time: 447
         current_statement: NULL
         statement_latency: NULL
                  progress: NULL
              lock_latency: 117.00 us
             rows_examined: 1
                 rows_sent: 0
             rows_affected: 1
                tmp_tables: 0
           tmp_disk_tables: 0
                 full_scan: NO
            last_statement: UPDATE world.City SET Population = Population + 1 WHERE ID = 133
    last_statement_latency: 321.06 us
            current_memory: 0 bytes
                 last_wait: NULL
         last_wait_latency: NULL
                    source: NULL
               trx_latency: NULL
                 trx_state: ACTIVE
            trx_autocommit: NO
                       pid: 7717
              program_name: mysql
    1 row in set (0.08 sec)

    In this case this does not explain why the lock is held, as the last query updated a different row than the one where the lock issue occurs. However, if the events_statements_history consumer is enabled (it is by default in MySQL 5.7 and later), the events_statements_history table will include the last 10 statements (by default) executed for the connection:

    Connection 3> SELECT THREAD_ID, SQL_TEXT FROM performance_schema.events_statements_history WHERE THREAD_ID = 28 ORDER BY EVENT_ID;
    +-----------+------------------------------------------------------------------+
    | THREAD_ID | SQL_TEXT                                                         |
    +-----------+------------------------------------------------------------------+
    |        28 | SELECT DATABASE()                                                |
    |        28 | NULL                                                             |
    |        28 | show databases                                                   |
    |        28 | show tables                                                      |
    |        28 | START TRANSACTION                                                |
    |        28 | UPDATE world.City SET Population = Population + 1 WHERE ID = 130 |
    |        28 | UPDATE world.City SET Population = Population + 1 WHERE ID = 131 |
    |        28 | UPDATE world.City SET Population = Population + 1 WHERE ID = 132 |
    |        28 | UPDATE world.City SET Population = Population + 1 WHERE ID = 133 |
    +-----------+------------------------------------------------------------------+
    9 rows in set (0.00 sec)

    So now the history of the blocking transaction can be seen, and it is possible to determine why the locking issue occurs.

    Note: The history also includes some queries executed before the transaction started. These are not related to the locking issue.

    If transaction monitoring is also enabled (only available in MySQL 5.7 and later), it is possible to get more information about the transaction and automatically limit the query of the history to the current transaction. Transaction monitoring is not enabled by default. To enable it, use:

    mysql> UPDATE performance_schema.setup_consumers SET ENABLED = 'YES' WHERE NAME = 'events_transactions_current';
    Query OK, 1 row affected (0.00 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
    
    mysql> UPDATE performance_schema.setup_instruments SET ENABLED = 'YES' WHERE NAME = 'transaction';
    Query OK, 1 row affected (0.00 sec)
    Rows matched: 1  Changed: 1  Warnings: 0

    Note: This must be done before either of the transactions is started. Only transactions started after transaction monitoring is enabled will be instrumented.
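    The two UPDATE statements above only enable the consumer and instrument until the next server restart. To make the setting persistent, the equivalent options can also be set in the MySQL configuration file (a sketch; option names as used in MySQL 5.7 - verify against your server version):

```ini
# my.cnf - enable transaction monitoring at server startup
[mysqld]
performance-schema-consumer-events-transactions-current = ON
performance-schema-instrument = 'transaction=ON'
```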

    If the above was enabled before the blocking transaction started, you can get more details about the blocking transaction:

    Connection 3> SELECT * FROM performance_schema.events_transactions_current WHERE THREAD_ID = 28\G
    *************************** 1. row ***************************
                          THREAD_ID: 28
                           EVENT_ID: 12
                       END_EVENT_ID: NULL
                         EVENT_NAME: transaction
                              STATE: ACTIVE
                             TRX_ID: NULL
                               GTID: AUTOMATIC
                      XID_FORMAT_ID: NULL
                          XID_GTRID: NULL
                          XID_BQUAL: NULL
                           XA_STATE: NULL
                             SOURCE: transaction.cc:209
                        TIMER_START: NULL
                          TIMER_END: NULL
                         TIMER_WAIT: NULL
                        ACCESS_MODE: READ WRITE
                    ISOLATION_LEVEL: REPEATABLE READ
                         AUTOCOMMIT: NO
               NUMBER_OF_SAVEPOINTS: 0
    NUMBER_OF_ROLLBACK_TO_SAVEPOINT: 0
        NUMBER_OF_RELEASE_SAVEPOINT: 0
              OBJECT_INSTANCE_BEGIN: NULL
                   NESTING_EVENT_ID: 11
                 NESTING_EVENT_TYPE: STATEMENT
    1 row in set (0.00 sec)

    And to get the statement history of the transaction:

    Connection 3> SELECT t.THREAD_ID, s.SQL_TEXT
                    FROM performance_schema.events_transactions_current t
                         INNER JOIN performance_schema.events_statements_history s ON s.THREAD_ID = t.THREAD_ID AND s.NESTING_EVENT_ID = t.EVENT_ID
                   WHERE t.THREAD_ID = 28 ORDER BY s.EVENT_ID;
    +-----------+------------------------------------------------------------------+
    | THREAD_ID | SQL_TEXT                                                         |
    +-----------+------------------------------------------------------------------+
    |        28 | UPDATE world.City SET Population = Population + 1 WHERE ID = 130 |
    |        28 | UPDATE world.City SET Population = Population + 1 WHERE ID = 131 |
    |        28 | UPDATE world.City SET Population = Population + 1 WHERE ID = 132 |
    |        28 | UPDATE world.City SET Population = Population + 1 WHERE ID = 133 |
    +-----------+------------------------------------------------------------------+
    4 rows in set (0.00 sec)
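    The walkthrough above assumes the Performance Schema thread ID (28) of the blocking connection is already known. One way to find it in the first place (a sketch using the sys schema bundled with MySQL 5.7 and later) is to start from the sys.innodb_lock_waits view:

```sql
-- Show which connection blocks which (MySQL 5.7+ with the sys schema).
-- blocking_pid is the processlist ID; map it to a Performance Schema
-- thread ID via sys.session (conn_id/thd_id) or performance_schema.threads.
SELECT waiting_pid, waiting_query,
       blocking_pid, blocking_query,
       wait_age, sql_kill_blocking_query
  FROM sys.innodb_lock_waits;
```

    The sql_kill_blocking_query column even provides a ready-made KILL QUERY statement, should terminating the blocking query be the chosen resolution.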




    Five projects show the many use cases for cloud platforms.

    For Debra Lilley, the time feels right to stop thinking about building something with a cloud development platform, and start doing it. So what else to do but sign up for an event challenging teams to build a cloud-based project in a single day?

    “The best way to learn something is to dig in and have a go,” says Lilley, an Oracle ACE Director and vice president at systems integration firm Certus Solutions.

    That attitude explains why Lilley joined not one but two teams taking part in the OTN Developer Cloud Challenge, held outside of Amsterdam, the Netherlands, in early June 2016, as part of the AMIS Oracle Conference in the Netherlands. One of Lilley’s teams used Oracle Cloud Platform tools to craft an app for sharing flight and lodging details with your travel companions when attending an event such as this cloud challenge. The other worked on refining an extension to Oracle Human Capital Management Cloud for managing the HR policy for bringing complaints against senior staffers.

    Events such as the Amsterdam cloud challenge help systems integrators get hands-on with the tools and projects they’re increasingly going to use in their work with customers. Here are five projects from the challenge, with insights from the Oracle ACE Directors leading the projects on what they’re learning about cloud platforms.

    Continuous Innovation

    Project: A mobile app that lets you share travel plans with colleagues, so you can book at the same hotel or share a cab if you’re attending the same conference. Read the complete article here.

    SOA & BPM Partner Community

    For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Blog Twitter LinkedIn Facebook Wiki

    Technorati Tags: SOA Community,Oracle SOA,Oracle BPM,OPN,Jürgen Kress



    As part of our communities we offer free PaaS accounts (only for partners in Europe, the Middle East and Africa; if you are not part of EMEA, please contact your local partner manager):

    · Java Cloud Service & Mobile Cloud & Application Container Cloud Service PaaS Demo Accounts (WebLogic Community membership required)

    · Integration Cloud Service & Process Cloud Service and PaaS for SaaS PaaS Demo Accounts (SOA Community membership required)

    Questions? Feel free to contact our Facebook chatbot - send us a message here. Watch the GSE Overview Video! Get an overview of what GSE is and how you can use GSE to help you sell. You can also get long-running dedicated PaaS instances; please send us details about your use cases. For instant access, please request a sandbox demo.


    WebLogic Partner Community

    For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Blog Twitter LinkedIn Forum Wiki

    Technorati Tags: WebLogic Community,Oracle,OPN,Jürgen Kress



    Analyzers are designed by Support Experts to proactively assist with better diagnosis of application environments. They may also be used as data collection tools by Support Engineers.

    The EBS Advanced Global Intercompany System (AGIS) Analyzer is a self-service health-check script that reviews Financials Common Modules (FUN) related data, analyzes the current configuration and settings of the environment, and provides feedback and recommendations on best practices.
    It is a non-invasive script: no INSERTs, UPDATEs, or DELETEs are performed; the data is only reported on.

    DBAs, system administrators, product specialists, business analysts, and end users of the AGIS module will all notice the benefits of this diagnostic tool:

    • Provides output of the following information:
        • Instance overview
        • FUN Product and Workflow Licenses
        • Setup details
        • Batches and Transaction details, Open Transactions
        • FUN Workflow Details, Interfaces
        • Files and Versions
    • Provides recommended actions and best practices
    • Runs as a standalone script or as a Concurrent Request


    Download the latest version (200.2) from
    EBS Advanced Global Intercompany System (AGIS) Analyzer (Doc ID 2208479.1)



    The Oracle Business Solution Lead (BSL) Summit was held at Oracle's Thames Valley Park offices in the UK in February 2017.

    Oracle Applications User Experience (OAUX) Group Vice President Jeremy Ashley (@jrwashley) and OAUX HCM Senior Director Aylin Uysal (@aylinuysal) delivered a UX Futurist Keynote presentation. Topics covered included a strategic overview of the Oracle Applications Cloud UX approach, the visual evolution of the SaaS UX, wearables, conversational computing interaction, and leading insights into how machine learning will move things faster towards realizing that vision of a more human way of working.

    The event brought together the Oracle applications solution consultants from across EMEA and APAC who lead our large-scale, strategic, multi-pillar opportunities; all keen to hear about and leverage the OAUX message and share their know-how for winning more, bigger, and faster big-bet deals. Attendees learned about impactful approaches and skills to engage opportunities, how those big-bet deals and adoption can be accelerated, and participated in fun, interactive activities along the way.

    Jeremy and Aylin at BSL Summit

    Jeremy Ashley and Aylin Uysal deliver the OAUX message

    Jeremy and Aylin were supported onsite at TVP by Ana Tomescu (@annatomescu), OAUX Cloud UX PM based in Bucharest.




    Oracle recently released its long-awaited Container Cloud Service (OCCS). It uses Docker as the container technology and provides an easy-to-use browser interface to manage container-based applications. It includes the following features:

    • Host management, with clustering and scaling capabilities across hosts
    • Ability to define stacks (think of a composition of two or more services)
    • Connections to private Docker registries and to registries like Docker Hub or Quay.io
    • Service registry
    • Many samples on GitHub specifically targeting OCCS
    • Dashboard and monitoring of the container environment

    Architecture of Oracle Container Cloud Service

    Container Cloud Instance / Hosts

    An instance of OCCS consists of a minimum of one MANAGER host and one WORKER host. The MANAGER host contains the OCCS Console (UI), which you can access through the browser, or via SSH (with limited access) for viewing the log files of the OCCS Manager. From the Oracle Cloud Services view, the host configuration looks like the following (after being provisioned; I am not going into the details here, as it is just a matter of going through a wizard). Read the complete article here.




    As part of our communities we offer free PaaS accounts (only for partners in Europe, the Middle East and Africa; if you are not part of EMEA, please contact your local partner manager):

    · Integration Cloud Service & Process Cloud Service & SOA Cloud & IoT & PaaS for SaaS Service PaaS Demo Accounts (Community membership required)

    · Java Cloud Service & Application Container Cloud Service & Mobile Cloud Service PaaS Demo Accounts (Community membership required)

    Questions? Feel free to contact our Facebook chatbot - send us a message here.

    Watch the GSE Overview Video! Get an overview of what GSE is and how you can use GSE to help you sell. You can also get long-running dedicated PaaS instances; please send us details about your use cases. For instant access, please request a sandbox demo.




    The Usable Apps blog will be transitioning to a new Oracle blogging platform and URL soon. 

    Change changes nothing about our passion to deliver

    (Source

    This current location will remain with content published up to the point of transition, so you can continue to reference the goodness! 

    Stay tuned. Thank you for your support.

    Ultan (@ultan)



    In the first part of this blog, I created a simple SpringBoot application and tested it locally. In this second part, I am going to deploy that application to Oracle Application Container Cloud Service.

    Package the App as a JAR file

    Type mvn package under <app_home>. This creates the sample-accs-1.0.jar package under <app_home>/target.

    Manifest.json

    This file tells Oracle ACCS how to run our application. Insert the following into a manifest.json file and place it under <app_home>/target.

    {
        "runtime": {
            "majorVersion": "8"
        },
        "command": "java -jar sample-accs-1.0.jar",
        "notes": "Sample Application"
    }

    Under <app_home>/target, type the command below to zip the sample-accs-1.0.jar file and the manifest.json file together.

    > zip accs.zip manifest.json sample-accs-1.0.jar

    Deploy to Oracle ACCS

    Go to https://apaas.europe.oraclecloud.com/apaas and sign in. The URL might be slightly different for you based on your data center. Once logged in, click Create Application and select Java SE. Give the app a name like SampleApp and choose accs.zip as the Archive file.

    You can also tell Oracle ACCS how many instances you would like to use to host your application. I will go with the defaults here. Click Create.

    After a few minutes, the application is ready at the specified URL.

    The same application can now be accessed at the URL https://sampleapp-gse00003349.apaas.em2.oraclecloud.com/cars

    You can of course add or delete cars using a REST client like Postman, just as I did with the curl utility.
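    For reference, such requests could look like the following with curl (a sketch; the JSON body is a made-up example - the actual fields depend on the Car entity defined in part one of this blog):

```shell
# Base URL of the deployed application (from the deployment above).
BASE=https://sampleapp-gse00003349.apaas.em2.oraclecloud.com

# List all cars
curl "$BASE/cars"

# Add a car (hypothetical JSON body - match your Car entity's fields)
curl -X POST -H "Content-Type: application/json" \
     -d '{"name": "Tesla Model S"}' "$BASE/cars"

# Delete the car with id 1
curl -X DELETE "$BASE/cars/1"
```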

    Summary

    Oracle ACCS provides a simple and efficient way to run your workloads in the cloud. A SpringBoot application is one example of a workload that you can easily configure and host in the lightweight container that Oracle ACCS creates for you. The complete application can be downloaded from GitHub.

