Feed aggregator

River Island Creates Single View of Inventory with Oracle Retail Planning

Oracle Press Releases - Tue, 2018-01-16 08:00
Press Release
River Island Creates Single View of Inventory with Oracle Retail Planning Fashion Retailer Aligns Planning Practices Across its Business Models

NATIONAL RETAIL FEDERATION ANNUAL CONFERENCE – New York—Jan 16, 2018

Today, Oracle announced that River Island has deployed Oracle Retail Merchandise Financial Planning to support its Omnichannel growth and digital transformation. River Island operates a global portfolio of over 320 stores across the UK and Ireland and internationally throughout Asia, the Middle East and Europe. With a global footprint and multiple web, franchise and wholesale operations, River Island required new tools to make more accurate and impactful inventory decisions for continued growth.

River Island had the vision and courage to become an Omnichannel retailer before it was in vogue. River Island operates an award-winning online fashion retail site, employs one of the biggest in-house design teams on the British high street and maintains a deep commitment to nurturing new talent, all of which have enabled the brand to become one of the most successful fashion retailers in the UK. As River Island experienced continued growth in the UK and internationally, they understood the value and necessity of a single view of order and inventory to scale their business.

“We knew that Omnichannel was the future and had to make a strategic move. We partnered with Oracle to get there. A single view of inventory is the key to fulfilling demand and operating more effectively,” said Doug Gardner, Chief Information Officer, River Island. “Through this transformation, we needed to change the way our business worked. People had to come together and properly go through the design of the implementation.”

As an early adopter of Omnichannel planning, River Island partnered closely with the Oracle Retail Consulting team to align correctly on business objectives. Together, they established more consistent and accurate planning processes to better understand how merchandise was performing across channels.

“Sometimes you have to have the courage to level set in the middle of an implementation. You want to get it right because you need a foundation to operate with going forward,” said Gardner. “By implementing Oracle Retail Merchandise Financial Planning correctly, we are now able to evaluate profitability, reduce markdowns and follow a single version of the truth for the whole business.”

“In our 2017 global consumer research ‘Retail in 4 Dimensions,’ we found that 43% of consumers are now shopping both online and in-store every week. The multichannel shopper spent nearly twice as much as a single channel shopper this fall. River Island took notice of these trends early and shifted their strategies to be more nimble and agile in the face of shifting consumer demands,” said Ray Carlin, Senior Vice President and General Manager, Oracle Retail. “Retail has advanced at an unprecedented pace with an evolution of strictly brick and mortar retail to a complex Omnichannel world where purchasing online and collecting orders how, where and when one wants has become the standard.”

Contact Info
Matt Torres
Oracle PR
+1.415.595.1584
matt.torres@oracle.com
Oracle Retail at NRF 2018

Oracle Retail will be showcasing the full suite of Oracle Retail solutions and cloud services at the National Retail Federation Big Show Jan. 14-16, 2018, in New York City at the Jacob K. Javits Convention Center. Oracle Retail will be located at booth #3521. For more information check out: www.oracle.com/retail

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • +1.415.595.1584

Which approach should I apply to fetch 6 million records from a select statement across various tables

Tom Kyte - Mon, 2018-01-15 23:46
I have one requirement where I need to fetch all the Install Base records along with other details. Currently there are 6 million records in our system, so please help us choose a better approach for this.
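For what it's worth, one common pattern for a job like this (a sketch only — the table and column names below are invented, not taken from the question) is to fetch with BULK COLLECT and a LIMIT, so the 6 million rows are processed in manageable batches rather than row by row:

declare
    cursor c_ib is
        select ib.instance_id, ib.serial_number, d.description
        from   install_base ib                                   -- hypothetical tables
        join   install_base_details d on d.instance_id = ib.instance_id;

    type t_ib_tab is table of c_ib%rowtype;
    l_rows t_ib_tab;
begin
    open c_ib;
    loop
        fetch c_ib bulk collect into l_rows limit 10000;         -- 10k rows per round trip
        exit when l_rows.count = 0;

        for i in 1 .. l_rows.count loop
            null;                                                -- process each row here
        end loop;
    end loop;
    close c_ib;
end;
/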
Categories: DBA Blogs

duplicate rows

Tom Kyte - Mon, 2018-01-15 23:46
How to find duplicate rows in a table ?
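A classic starting point (a sketch, not the AskTom answer — table and column names are placeholders) is to group on the columns that define a duplicate:

-- list the duplicated key values and how often they occur
select col1, col2, count(*)
from   my_table
group  by col1, col2
having count(*) > 1;

-- or list every duplicate row except the one with the lowest rowid
select *
from   my_table t
where  t.rowid not in (
         select min(rowid)
         from   my_table
         group  by col1, col2
       );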
Categories: DBA Blogs

Observing a "create table as select statement"

Tom Kyte - Mon, 2018-01-15 23:46
Hi Tom, In a customer project we are using quite a lot of "create table as select ...." (cannot change them), that are taking quite a while (some hours). I would like to observe the growth of these objects, while they are created, but I'm bumpin...
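One simple way to watch the growth from a second session (a sketch; owner and segment names are placeholders for whichever table the CTAS is creating) is to poll the dictionary while the statement runs:

-- run repeatedly from another session while the CTAS executes
select segment_name,
       round(bytes / 1024 / 1024) as mb,
       extents
from   dba_segments
where  owner        = 'MY_SCHEMA'        -- hypothetical owner
and    segment_name = 'MY_NEW_TABLE';    -- table being created by the CTAS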
Categories: DBA Blogs

big file tablespace

Tom Kyte - Mon, 2018-01-15 23:46
I have a big file tablespace which has 100G size. I wonder, big file tablespace vs normal small file tablespace, which is better? Someone said it is very difficult to recover if a bad block occurs in a big file, and what's more, performance is worse th...
Categories: DBA Blogs

convert from char to varchar2 - retrieve space

Tom Kyte - Mon, 2018-01-15 23:46
Hi Tom; We are working on Oracle 11g standard edition. By mistake, several CHAR fields have been created in several large tables, which generated an increase in the space occupied by them. We convert the fields to VARCHAR2, but we can not recove...
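The usual follow-up after such a conversion (a sketch with placeholder names, and only if a maintenance window for the move is acceptable) is to trim the blank padding left over from the CHAR values and then reorganise the segment so the freed space is actually released:

update my_table set col1 = rtrim(col1);   -- remove trailing blanks left by CHAR
commit;

alter table my_table move;                -- rebuild the segment compactly

-- indexes are marked UNUSABLE by the move and must be rebuilt
alter index my_table_pk rebuild;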
Categories: DBA Blogs

JOIN with select first item

Tom Kyte - Mon, 2018-01-15 23:46
Hello, my first query "select Barcode as sample_barcodes" returns the following entries: <code> ----------------- SAMPLES_BARCODES 50027 50028 50029 ----------------- </code> my second query "select CampaignItemId, Barcode from MyView...
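One way to join only the "first" matching row from the second query (a sketch reusing the identifiers from the question; the ordering column used to decide which row is "first" is an assumption) is to rank the rows per barcode and keep rank 1:

select b.barcode, v.campaignitemid
from   samples_barcodes b
left join (
    select campaignitemid,
           barcode,
           row_number() over (partition by barcode
                              order by campaignitemid) as rn
    from   myview
) v
on  v.barcode = b.barcode
and v.rn = 1;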
Categories: DBA Blogs

How to get MIN and MAX of Consecutive Numbers

Tom Kyte - Mon, 2018-01-15 23:46
Thanks for your awesome help on "Trying to split serial number ranges" question today. I have another opportunity to ask a question and it's somewhat related to my earlier question. I hope this would be an easy one. I have a table with the foll...
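The usual trick for this kind of "consecutive ranges" problem (a sketch with a made-up table of numbers) is to subtract a row_number from the value, so that consecutive numbers share the same group key:

select min(num) as range_start,
       max(num) as range_end
from (
    select num,
           num - row_number() over (order by num) as grp
    from   serials                   -- hypothetical table holding the numbers
)
group  by grp
order  by range_start;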
Categories: DBA Blogs

Coding Parallel Processing on 12c

Tom Kyte - Mon, 2018-01-15 23:46
Hello Tom, I gave the below link a try and applied the method on 12c. https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4248554900346593542 But it takes same time as serial processing. Could you please light me up what I'...
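For DIY parallelism on 12c, DBMS_PARALLEL_EXECUTE is often the simplest route; below is a minimal sketch (the table name and the update are placeholders, not from the question) — whether it actually beats serial execution still depends on how much work each chunk does:

begin
    dbms_parallel_execute.create_task('upd_t1');

    dbms_parallel_execute.create_chunks_by_rowid(
        task_name   => 'upd_t1',
        table_owner => user,
        table_name  => 'T1',               -- hypothetical table
        by_row      => true,
        chunk_size  => 10000);

    dbms_parallel_execute.run_task(
        task_name      => 'upd_t1',
        sql_stmt       => 'update t1 set flag = ''Y''
                           where rowid between :start_id and :end_id',
        language_flag  => dbms_sql.native,
        parallel_level => 4);              -- four jobs working concurrently

    dbms_parallel_execute.drop_task('upd_t1');
end;
/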
Categories: DBA Blogs

The top 5 reasons why you should submit an abstract for APEX at the Great Lakes Oracle Conference (GLOC)

Joel Kallman - Mon, 2018-01-15 18:50
APEX Developer Day at Great Lakes Oracle Conference 2017
The Northeast Ohio Oracle User's Group (NEOOUG) is easily one of my favorite user groups on the planet.  They've been graciously hosting me at their user group events since 2004 (when I first gave a demonstration on Oracle HTML DB 1.5!).  They are a large, active and passionate user group.  In the past 14 years, I've seen them grow from simple user group events to "Training Days" at the Cleveland State University campus to a nicely sized regional conference named Great Lakes Oracle Conference.

If you're into Oracle APEX, either on-premises, or in the Oracle Cloud, I encourage you to submit an abstract to speak at the Great Lakes Oracle Conference.  Here are my top 5 reasons why you should strongly consider this:
  1. There is a real hunger for Oracle APEX content at this conference.  There are countless customers in the immediate region who use Oracle APEX.  Last year, they had the first ever Oracle APEX Developer Day in advance of the conference, and it was sold out (100 attendees)!
  2. It's the largest Oracle user's conference in the Midwest US.  It draws people from all over Ohio, Michigan, Indiana, Kentucky and Pennsylvania.  There will be over 500 attendees at the conference in 2018.
  3. The Great Lakes Oracle Conference routinely gets world-class speakers from all over the world, both Oracle employees and Oracle ACEs.  As a speaker, you would be able to attend any session in any track.
  4. There are numerous tracks at the Great Lakes Oracle Conference, including APEX, Oracle Applications, Business Intelligence, DBA, Database Developer and Data Warehousing.
  5. Cleveland, Ohio is on the North Coast of the US.  There, you can visit Great Lakes Brewing Company, Market Garden Brewery, Platform Beer Company,  and the Rock & Roll Hall of Fame.

I come across so many people who say "why would anyone want to hear me talk about that?"  From case studies to lessons learned to best practices in your environment, it's all interesting and valuable.  Not everyone who attends the APEX sessions at GLOC is an expert, so entry-level sessions are also welcome!

I encourage you to submit an abstract today.  The deadline for abstract submission is February 2, 2018.

Getting Started Provisioning Database Cloud Service Deployments and Java Cloud Service ...

Oracle Cloud Infrastructure has tailored solutions built from bottom to top with integration and elastic capability delivering a solid infrastructure base that can integrate and inter-operate with...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Luxury Retailer Chalhoub Modernizes the Customer Experience with Oracle Point of Service

Oracle Press Releases - Mon, 2018-01-15 11:50
Press Release
Luxury Retailer Chalhoub Modernizes the Customer Experience with Oracle Point of Service Xstore Point-of-Service Delivers a Highly Personalized, Mobile Customer Experience In-Store

NATIONAL RETAIL FEDERATION ANNUAL CONFERENCE – New York—Jan 15, 2018

Today, Oracle announced that luxury retailer Chalhoub has successfully deployed Oracle Retail Xstore Point-of-Service to modernize its customer experience. Chalhoub’s Oracle Retail Xstore implementation is the first in the Middle East and the result of a six-month installation process in partnership with Logic Information Systems. The project for the Dubai storefront included deployment of more than 100 registers, 60 of which are mobile, bringing mobile checkout, integrated payment systems and improved store operations to the brand.

Chalhoub started as a family business licensing foreign brands in Damascus, Syria in 1955. It now runs a network of 650 retail stores with fashion and cosmetic lines including Chanel, Louis Vuitton and Christian Louboutin across the Middle East. Today the Chalhoub Group employs more than 12,000 people in 14 countries.

“With the help of Logic and Oracle, we migrated from Oracle Retail Point of Service to the latest version of Oracle Retail Xstore Point-of-Service. We can now deliver a modern mobile experience to our customers. By implementing Xstore, we are also empowering our store associates. The goal is to provide a highly personalized and engaged customer experience at the world's finest shoe metropolis, Level Shoes,” said Olivier Leblan, Group Chief Information Officer, Chalhoub.

“The Point of Service system must allow retailers to transact and interact with consumers as they choose. Whether using a traditional register, portable solution, tablet or handheld, it's point of service,” said Ray Carlin, Senior Vice President and General Manager, Oracle Retail. “As our Retail in 4D research shows, more than half (52 percent) of retailers said they are arming their store employees with mobile technology. Congratulations to Chalhoub for deploying a modern customer experience.”

“We are thrilled to partner with Chalhoub and Oracle. Together we delivered on the vision to drive a better customer experience for Level and to establish a foundation to support Chalhoub,” said Saad Khan, General Manager of the Middle East, Logic Information Systems. “We look forward to the continued momentum for the Oracle Retail Xstore platform across the Middle East and Asia. We found the solution to be a great fit for the region.”

To learn more about Chalhoub’s implementation of Oracle Retail technology register here for a webinar on Tuesday February 20.

Contact Info
Matt Torres
Oracle PR
+1.415.595.1584
matt.torres@oracle.com
Oracle Retail at NRF 2018

Oracle Retail will be showcasing the full suite of Oracle Retail solutions and cloud services at the National Retail Federation Big Show Jan. 14-16, 2018, in New York City at the Jacob K. Javits Convention Center. Oracle Retail will be located at booth #3521. For more information check out: www.oracle.com/retail

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • +1.415.595.1584

Oracle and FreedomPay Deliver Future of Consumer Payments at New York’s Jacob K. Javits Center

Oracle Press Releases - Mon, 2018-01-15 10:50
Press Release
Oracle and FreedomPay Deliver Future of Consumer Payments at New York’s Jacob K. Javits Center Collaboration will Showcase New EMV Contact and Contactless Payment Capabilities at One of the Nation’s Busiest Convention Centers

NATIONAL RETAIL FEDERATION ANNUAL CONFERENCE – New York—Jan 15, 2018

Today at NRF 2018, Oracle Hospitality, a trusted provider of hardware, software and services to hospitality operators, and FreedomPay, a leader in secure commerce technology for lodging, gaming, retail, restaurants, stadiums and other hospitality merchants, announced their collaboration to provide secure payment processing for conference attendees at the Jacob K. Javits Center in New York City. The companies have integrated FreedomPay’s Advanced Commerce Platform with Oracle Hospitality’s MICROS point-of-sale devices to provide full EMV support to over 175 events annually, supporting over 35,000 global companies.

The customer-facing devices, as well as the handheld pay-at-table devices, support EMV contact and contactless payments—enabling patrons from around the world to easily leverage their local bank-issued payment cards. Furthermore, the contactless EMV support enables customers to save valuable time by simply tapping their payment card on the reader instead of inserting their card into the device. The combined payment solution enables consumers to utilize the payment options they desire.

The integration between FreedomPay and Oracle Hospitality is another example of Oracle delivering additional value for hospitality customers through integrations that extend the value of POS investments. In addition to providing durable and compact point-of-service terminals, Oracle Hospitality offers a fully integrated portfolio of hardware and software solutions that enable food and beverage operations to streamline managerial tasks, increase speed of service and elevate the guest experience.

“The NRF conference is about showcasing the technology that will build future customer experiences and with FreedomPay we’ll be highlighting the potential of contactless EMV payments in the hospitality industry at food vendors across the Jacob K. Javits Center,” said Laura Calin, vice president of strategy, Oracle Hospitality. “Contactless EMV payments represent an opportunity for retailers and food and beverage operators to improve the guest experience by accelerating the checkout process during peak retail seasons and high-volume hospitality applications.”

“Contactless payments deliver a fast, simple and secure experience at the point of sale, while also helping merchants increase speed of service and grow sales volume,” said Dan Sanford, vice president, consumer products, Visa. “We’re pleased that all Javits Center attendees will now be able to tap to pay quickly and easily with their contactless cards, mobile phones, or connected devices.”

The FreedomPay Advanced Commerce Platform is one of the first PCI-validated point-to-point encryption (P2PE) solutions with EMV, NFC, Dynamic Currency Conversion and real-time data capabilities that delivers on a global scale. With P2PE, valuable consumer payment data is protected from the moment the card is inserted into the MICROS point-of-sale device, in transit, and at rest in the merchant’s environment. FreedomPay is a Platinum level member of Oracle PartnerNetwork (OPN).

“Over the past 15 years, FreedomPay developed a strong relationship with the team at Oracle—combining our point-to-point encryption expertise and secure payment processing capabilities with their MICROS terminals,” said Chris Kronenthal, president and chief technology officer at FreedomPay. “This installation at the Javits Center emphasizes the payment protection capabilities that FreedomPay and Oracle are delivering coast-to-coast.”

FreedomPay and Oracle help secure payments for consumers across multiple verticals throughout the United States, including food and beverage, hospitality, retail and travel. The “Secured by FreedomPay” image appearing on the Oracle MICROS point-of-sale devices assures customers that their transactions are protected to exceed some of the most stringent payment transaction security requirements currently available.

NRF 2018 attendees who spot the “Secured by FreedomPay” image on payment terminals within the Javits Center are encouraged to participate in FreedomPay’s #SecuredbyFreedomPay social media prize giveaway. Additional details are available at http://corporate.freedompay.com/blog_article/show-us-you-are-securedbyfreedompay/.

Oracle Hospitality hardware solutions, including the recently announced Oracle MICROS Compact Workstation 310, will be available for demo at the National Retail Federation Big Show Jan. 14-16, 2018, in New York City at the Jacob K. Javits Convention Center. Oracle Hospitality will be located within the Oracle Retail booth #3521.

Contact Info
Matt Torres
Oracle PR
+1.415.595.1584
matt.torres@oracle.com
Christy Pittman
W2 Communications for FreedomPay
703-877-8108
christy@w2comm.com
About FreedomPay

The FreedomPay Commerce Platform is the best way for merchants to simplify complex payment environments. Validated by the PCI Security Standards Council for Point-to-Point Encryption (P2PE) along with EMV, NFC and DCC capabilities, global leaders in retail, hospitality, gaming, education, healthcare and financial services trust FreedomPay to deliver unmatched security and advanced value added services. With broad integrations across top point-of-sale, device manufacturers and payment processors, supported by rapid API adoption, FreedomPay is driving the future of commerce and customer interaction. For more information, go to www.freedompay.com.

About Oracle Hospitality

Oracle Hospitality brings 35 years of experience in providing technology solutions to food and beverage operators. We provide hardware, software, and services that allow our customers to deliver exceptional guest experiences while maximizing profitability. Our solutions include integrated point-of-sale, loyalty, reporting and analytics, inventory and labor management, all delivered from the cloud to lower IT cost and maximize business agility.

For more information about Oracle Hospitality, please visit www.Oracle.com/Hospitality

About Oracle PartnerNetwork

Oracle PartnerNetwork (OPN) is Oracle’s partner program that provides partners with a differentiated advantage to develop, sell and implement Oracle solutions. OPN offers resources to train and support specialized knowledge of Oracle’s products and solutions and has evolved to recognize Oracle’s growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to be recognized and rewarded for their investment in Oracle Cloud. Partners engaging with Oracle will be able to differentiate their Oracle Cloud expertise and success with customers through the OPN Cloud program—an innovative program that complements existing OPN program levels with tiers of recognition and progressive benefits for partners working with Oracle Cloud. To find out more visit: http://www.oracle.com/partners.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Talk to a Press Contact

Matt Torres

  • +1.415.595.1584

Christy Pittman

  • 703-877-8108

Oracle Celebrates Continued Hardware Innovation with New MICROS Compact Workstation 310 Point-of-Sale

Oracle Press Releases - Mon, 2018-01-15 10:50
Press Release
Oracle Celebrates Continued Hardware Innovation with New MICROS Compact Workstation 310 Point-of-Sale New POS Engineered for Smaller Footprints and Portability to Join Existing Portfolio of Workstations on Display at NRF 2018

NATIONAL RETAIL FEDERATION ANNUAL CONFERENCE – New York—Jan 15, 2018

Oracle today introduced the new Oracle MICROS Compact Workstation 310 point-of-sale (POS) terminal. A high-performance, cost-effective workstation built with superior durability in a sleek, compact form factor, the Oracle MICROS Compact Workstation 310 is ideal for hospitality and retail applications that feature a limited menu or assortment and require a smaller footprint, terminal portability or additional capacity during peak trading or a sporting event. The Oracle MICROS Compact Workstation 310 will be available to demo at the National Retail Federation conference in New York and joins Oracle’s existing hardware portfolio on display at booth #3521 including Oracle MICROS Workstation 6 and the Oracle MICROS Tablet 720 showcasing Oracle Retail Xstore Point-of-Service with our Oracle Retail omnichannel cloud services.

Oracle MICROS Compact Workstation 310

The Oracle MICROS Compact Workstation 310 runs Windows 10® IoT Enterprise with Oracle Hospitality Simphony 2.9.2 HF6, 2.9.3 HF1, 2.10, Simphony FE 1.7.3, and RES 5.5.1, and Oracle Retail Xstore Point of Service v17 software. The all-in-one thin client and 10.1” display is designed to accommodate limited counter space and afford maximum portability. Its connectivity and operating system enable ease of provisioning and management for IT, while delivering a fast, rich, familiar Windows 10 user experience.

“Guest services and the shopping experiences are changing in the hospitality and retail sectors, with consumers demanding more speed and convenience. Oracle is extending our hardware portfolio so that our customers can adapt to those changes,” said Mike Webster, senior vice president and GM Oracle Retail and Hospitality. “The Oracle MICROS Compact Workstation 310 delivers a portable, rugged and intuitive experience that is perfect for scenarios with high volumes of customers and limited menus or assortments including stadiums, pop up stores, mall kiosks, theme parks, sidewalk sales and promotional events.”

Key features of the new Oracle MICROS Compact Workstation 310 include:

  • Best Price Performance: Purpose-built to help ensure businesses aren’t overpaying for the best customer experience, the 310 is engineered end-to-end for stress-free administration and includes a powerful dual-core processor and integrated graphics engine for an exceptional user experience.
  • Built To Last: The 310 is built to withstand extreme temperatures for outdoor use and is protected against impact, dust, grease and grime build-up. The long product lifecycle and long mean time between failures (no moving parts) aim to lower total cost of ownership by reducing the number of refresh cycles.
  • Elegant, Simple Design: Made with a sleek, industrial design and small footprint, the 310 is aesthetically pleasing and maximizes counter space. Its portability also allows for anywhere, anytime transactions to capitalize on profitable locations, and its intuitive user interface is easy to use for full-time or seasonal employees.
  • Easy to Set Up: An end-to-end ecosystem of hardware, software, cloud, and services enables easier solution setup and support. Client applications manager (CAL), advocated offering and Oracle-validated software updates allow easy device provisioning and administration.

The Oracle MICROS Compact Workstation 310 joins Oracle’s existing portfolio of POS workstations, including the Oracle MICROS Workstation 650, the Oracle MICROS Workstation 620 and the Oracle Workstation 610 and complements the Oracle MICROS Tablet 720 Series 7 inch Tablet.

Continued Retail POS Momentum

“Oracle Retail is extending our hardware portfolio to deliver the innovation of the Oracle Retail Xstore Point-of-Service Platform with omnichannel cloud services. The Workstation 310 has ample connectivity for multiple peripherals, and is fully supported beginning with Oracle Retail Xstore Point-of-Service,” said Jeff Warren, Vice President Strategy and Solutions, Oracle Retail. “This allows the retailer to have a consistent software implementation with the benefits of Xstore in a portable small footprint point of sale workstation.”

Global customers continue to adopt Oracle Xstore POS with Oracle MICROS hardware including:

Contact Info
Matt Torres
Oracle PR
+1.415.595.1584
matt.torres@oracle.com
About Oracle Hospitality

Oracle Hospitality brings 35 years of experience in providing technology solutions to food and beverage operators. We provide hardware, software, and services that allow our customers to deliver exceptional guest experiences while maximizing profitability. Our solutions include integrated point-of-sale, loyalty, reporting and analytics, inventory and labor management, all delivered from the cloud to lower IT cost and maximize business agility. For more information about Oracle Hospitality, please visit www.Oracle.com/Hospitality.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • +1.415.595.1584

Private Functions and ACCESSIBLE BY Packages in 12c

The Anti-Kyte - Mon, 2018-01-15 07:48

My recent post about PLS-00231 prompted an entirely reasonable question from Andrew :

“OK so the obvious question why [can’t you reference a private function in SQL] and doesn’t that defeat the objective of having it as a private function, and if so what about other ways of achieving the same goal ?”

I’ll be honest – that particular post was really just a note to self. I tend to write package members as public initially so that I can test them by calling them directly.
Once I’ve finished coding the package, I’ll then go through and make all of the helper package members private. My note was simply to remind myself that the PLS-00231 error when compiling a package usually means that I’ve referenced a function in a SQL statement and then made it private.

So, we know that a PL/SQL function can only be called in a SQL statement if it's a schema-level object or it's defined in the package header, because that's the definition of a Public function in PL/SQL. Or at least it was…

In formulating an answer to Andrew’s question, it became apparent that the nature of Private functions has evolved a bit in 12c.

So, what I’m going to look at here is :

  • What are Private and Public package members in PL/SQL and why you might want to keep a package member private
  • How 12c language features change our definition of private and public in terms of PL/SQL objects
  • Hopefully provide some up-to-date answers for Andrew

Private and Public in the olden days

As most real-world PL/SQL functions are written within the context of a package, this is where we’ll focus our attention.

From the time that PL/SQL stored program units were introduced into Oracle, right up to and including 11g, the definition was simple.

A PL/SQL package member (function or procedure) was public if its specification was declared in the package header.
Otherwise, it was private.
A private package member can only be referenced from inside its package.

A private package member might be used to encapsulate some functionality that is used in multiple places inside your package but not outside of it.
These “helper” functions tend to be quite common.
Another reason for using a private function would be to reduce clutter in the package signature. If your package is serving as an API to some business functionality, having few public members as entry points helps to ensure that the API is used as intended.

Of course, a private package member cannot be referenced in a SQL query, even from inside the package…
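Here is a quick illustration of that last point (a sketch): a private helper compiles happily when called from plain PL/SQL, but referencing it from a SQL statement inside the same package body raises PLS-00231.

create or replace package demo_pkg as
    procedure run;
end demo_pkg;
/

create or replace package body demo_pkg as
    -- private: not declared in the specification
    function secret return varchar2 is
    begin
        return 'only visible inside the package';
    end secret;

    procedure run is
        v varchar2(100);
    begin
        v := secret;                        -- fine: plain PL/SQL call
        -- select secret into v from dual;  -- PLS-00231: function 'SECRET' may not be used in SQL
        dbms_output.put_line(v);
    end run;
end demo_pkg;
/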

Changes in 12c and (probably) beyond

The ability, provided by 12c, to use PL/SQL constructs in SQL WITH clauses manages to take some of the certainty out of our definition of public and private. For example…

with function catchphrase return varchar2 is
    begin
        return 'I have a cunning plan which cannot fail';
    end;
select catchphrase 
from dual
/

…in 12c rewards you with :

CATCHPHRASE                                       
--------------------------------------------------
I have a cunning plan which cannot fail

Possibly more significant is the ability to create packages that are useable only by certain other stored program units using the ACCESSIBLE BY clause.

Using this new feature, we can split out our helper package members from the main package :

create or replace package baldrick_helper 
    accessible by (package baldrick)
as
    function catchphrase return varchar2;
end baldrick_helper;
/

create or replace package body baldrick_helper 
as    
    function catchphrase return varchar2
    is
    begin
        return 'I have a cunning plan which cannot fail';
    end catchphrase;
end baldrick_helper;
/

As well as reducing the size of individual packages, it should also mean that we can now reference the catchphrase function directly in a SQL statement right ? After all, it’s declared in the package header.

create or replace package baldrick 
as
    procedure cunning_plan;
end baldrick;
/

create or replace package body baldrick as
    procedure cunning_plan is
        optimism varchar2(100);
    begin
        select baldrick_helper.catchphrase
        into optimism
        from dual;
        
        dbms_output.put_line(optimism);
    end cunning_plan;
end baldrick;
/

This compiles without error. However, when we try to run it we get :

set serveroutput on size unlimited
exec baldrick.cunning_plan;

ORA-06553: PLS-904: insufficient privilege to access object BALDRICK_HELPER
ORA-06512: at "MIKE.BALDRICK", line 5
ORA-06512: at line 1

Although the function is declared in the package header, it appears to remain private due to the use of the ACCESSIBLE BY whitelist. Therefore, if you want to reference it, you need to do it in straight PL/SQL :

create or replace package body baldrick as
    procedure cunning_plan is
        optimism varchar2(100);
    begin
    optimism := baldrick_helper.catchphrase;
        
        dbms_output.put_line(optimism);
    end cunning_plan;
end baldrick;
/

This works as expected :

set serveroutput on size unlimited
exec baldrick.cunning_plan;

I have a cunning plan which cannot fail


PL/SQL procedure successfully completed.

Answers for Andrew

If your goal is to reference a PL/SQL package member in a SQL statement then it must be public.
In 12c this means it must be declared in the header of a package which is not defined using an ACCESSIBLE BY clause.

On the other hand, if your goal is to keep your package member private then you cannot reference it in a SQL statement.
In 12c, you do have the option of re-defining it in a with clause as mentioned earlier. However, this only works in straight SQL.
As far as code in a package is concerned, you can’t use an in-line with clause as a wrapper for the call to the private function like this…

create or replace package body baldrick as
    procedure cunning_plan is
        optimism varchar2(100);
    begin
        with function cheating return varchar2 is
        begin 
            return baldrick_helper.catchphrase;
        end;     
        begin
        select catchphrase
        into optimism
        from dual;
        dbms_output.put_line(optimism);
    end cunning_plan;
end baldrick;
/

…because it’s not currently supported in PL/SQL.

Histogram Hassle

Jonathan Lewis - Mon, 2018-01-15 07:01

I came across a simple performance problem recently that ended up highlighting a problem with the 12c hybrid histogram algorithm. It was a problem that I had mentioned in passing a few years ago, but only in the context of Top-N histograms and without paying attention to the consequences. In fact I should have noticed the same threat in a recent article by Maria Colgan that mentioned the problems introduced in 12c by the option “for all columns size repeat”.

So here’s the context (note – all numbers used in this example are approximations to make the arithmetic obvious).  The client had a query with a predicate like the following:

    t4.columnA = :b1
and t6.columnB = :b2

The optimizer was choosing to drive the query through an indexed access path into t6, which returned ca. 1,000,000 rows before joining (two tables later) to t4, at which point all but a couple of rows were eliminated – typical execution time was in the order of tens of minutes. A /*+ leading(t4) */ hint to start on t4 with an index that returned two rows reduced the response time to the classic “sub-second”.

The problem had arisen because the optimizer had estimated a cardinality of 2 rows for the index on t6 and the reason for this was that, on average, that was the correct number. There were 2,000,000 rows in the table with 1,000,000 distinct values. It was just very unlucky that one of the values appeared 1,000,000 times and that was the value the users always wanted to query – and there was no histogram on the column to tell the optimizer that there was a massive skew in the data distribution.

Problem solved – all I had to do was set a table preference for this table to add a histogram to this column and gather stats. Since there were so many distinct values and so much “non-popular” data in the table the optimizer should end up with a hybrid histogram that would highlight this value. I left instructions for the required test and waited for the email telling me that my suggestion was brilliant and the results were fantastic… I got an email telling me it hadn’t worked.
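For reference, the table preference itself is just a one-liner (a sketch – the owner, table and column names here are placeholders standing in for the client's objects):

begin
    dbms_stats.set_table_prefs(
        ownname => 'APP_OWNER',                                            -- placeholder
        tabname => 'T6',                                                   -- placeholder
        pname   => 'METHOD_OPT',
        pvalue  => 'for all columns size 1 for columns columnB size 254');
end;
/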

Here’s a model of the situation – I’ve created a table with 2 million rows and a column where every other row contains the same value but otherwise contains the rownum. Because the client code was using a varchar2() column I’ve done the same here, converting the numbers to character strings left-padded with zeros. There are a few rows (about 20) where the column value is higher than the very popular value.


rem
rem     Script:         histogram_problem_12c.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2018
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2
rem

create table t1
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 2e4
)
select
        rownum  as id,
        case
                when mod(rownum,2) = 0
                        then '999960'
                        else lpad(rownum,6,'0')
        end     as bad_col
from
        generator       v1,
        generator       v2
where
        rownum <= 2e6
;

Having created the data I’m going to create a histogram on the bad_col – specifying 254 buckets – then query user_tab_histograms for the resulting histogram (from which I’ll delete a huge chunk of boring rows in the middle):


begin

        dbms_stats.gather_table_stats(
                ownname         => 'TEST_USER',
                tabname         => 'T1',
                method_opt      => 'for columns bad_col size 254'
        );

end;
/

select
        column_name, histogram, sample_size
from
        user_tab_columns
where
        table_name = 'T1'
;

column end_av format a12

select
        endpoint_number         end_pt,
        to_char(endpoint_value,'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx') end_val,
        endpoint_actual_value   end_av,
        endpoint_repeat_count   end_rpt
from
        user_tab_histograms
where
        table_name = 'T1'
and     column_name = 'BAD_COL'
order by
        endpoint_number
;


COLUMN_NAME          HISTOGRAM             Sample
-------------------- --------------- ------------
BAD_COL              HYBRID                 5,513
ID                   NONE               2,000,000

    END_PT END_VAL                         END_AV          END_RPT
---------- ------------------------------- ------------ ----------
         1  303030303031001f0fe211e0800000 000001                1
        12  3030383938311550648a5e3d200000 008981                1
        23  303135323034f8f5cbccd2b4a00000 015205                1
        33  3032333035311c91ae91eb54000000 023051                1
        44  303239373236f60586ef3a0ae00000 029727                1
...
      2685  3938343731391ba0f38234fde00000 984719                1
      2695  39393235303309023378c0a1400000 992503                1
      2704  3939373537370c2db4ae83e2000000 997577                1
      5513  393939393938f86f9b35437a800000 999999                1

254 rows selected.

So we have a hybrid histogram, we’ve sampled 5,513 rows to build the histogram, we have 254 buckets in the histogram report, and the final row in the histogram is end point 5513 (matching the sample size). The first row of the histogram shows us the (real) low value in the column and the last row of the histogram reports the (real) high value. But there’s something very odd about the histogram – we know that ‘999960’ is the one popular value, occurring 50% of the time in the data, but it doesn’t appear in the histogram at all.

Looking more closely we see that every bucket covers a range of about 11 (sometimes 9 or 10) rows from the sample, and the highest value in each bucket appears just once; but the last bucket covers 2,809 rows from the sample with the highest value in the bucket appearing just once. We expect a hybrid histogram to have buckets which (at least initially) are all roughly the same size – i.e. “sample size”/”number of buckets” – with some buckets being larger by something like the amount that appears in their repeat count, so it doesn’t seem right that we have an enormous bucket with a repeat count of just 1. Something is broken.

The problem is that the sample didn’t find the low and high values for the column – although the initial full tablescan did, of course – so Oracle has “injected” the low and high values into the histogram fiddling with the contents of the first and last buckets. At the bottom end of the histogram this hasn’t really caused any problems (in our case), but at the top end it has taken the big bucket for our very popular ‘999960’ and apparently simply replaced the value with the high value of ‘999999’ and a repeat count of 1.

As an indication of the truth of this claim, here are the last few rows of the histogram if I repeat the experiment but, before gathering the histogram, delete the rows where bad_col is greater than ‘999960’. (Oracle’s sample is random, of course, and has changed slightly for this run.)

    END_PT END_VAL                         END_AV          END_RPT
---------- ------------------------------- ------------ ----------
...
      2641  3938373731371650183cf7a0a00000 987717                1
      2652  3939353032310e65c1acf984a00000 995021                1
      2661  393938393433125319cc9f5ba00000 998943                1
      5426  393939393630078c23b063cf600000 999960             2764

Similarly, if I inserted a few hundred rows with a higher value than my popular value (in this case I thought 500 rows would be a fairly safe bet as the sample was about one in 360 rows) I got a histogram which finished with a bucket for the popular value, so the problem of that bucket being hacked to the high value was less significant:


    END_PT END_VAL                         END_AV          END_RPT
---------- ------------------------------- ------------ ----------
...
      2718  393736313130fe68d8cfd6e4000000 976111                1
      2729  393836373630ebfe9c2b7b94c00000 986761                1
      2740  39393330323515efa3c99771600000 993025                1
      5495  393939393630078c23b063cf600000 999960             2747
      5497  393939393938f86f9b35437a800000 999999                1

Bottom line, then: if you have an important popular value in a column and there aren’t very many rows with a higher value, you may find that Oracle loses sight of the popular value as it fudges the column’s high value into the final bucket.

Workaround

I did consider writing a bit of PL/SQL for the client to fake a realistic frequency histogram, but decided that that wouldn’t be particularly friendly to future DBAs who might have to cope with changes. Luckily the site doesn’t gather stats using the automatic scheduler job and only rarely updates stats anyway, so I suggested we create a histogram on the column using an estimate_percent of 100. This took about 8 minutes to run – for reasons that I will go into in a moment – after which I suggested we lock stats on the table and document the fact that when stats are collected on this table it’s got to be a two-pass job – the normal gather with its auto_sample_size to start with, then a 100% sample for this column to gather the histogram:


begin
        dbms_stats.gather_table_stats(
                user,
                't1',
                method_opt       => 'for columns bad_col size 254',
                estimate_percent => 100,
                cascade          => false
        );
end;
/

    END_PT END_VAL                         END_AV          END_RPT
---------- ------------------------------- ------------ ----------
...
       125  39363839393911e01d15b75c600000 968999                0
       126  393834373530e98510b6f19a000000 984751                0
       253  393939393630078c23b063cf600000 999960                0
       254  393939393938f86f9b35437a800000 999999                0

129 rows selected.

This took a lot longer, of course, and produced an old-style height-balanced histogram. Part of the time came from the increased volume of data that had to be processed, part of it came from a surprise (which also appeared, in a different guise, in the code that created the original hybrid histogram).

I had specifically chosen the method_opt to gather for nothing but the single column. In fact whether I forced the “legacy” (height-balanced) code or the modern (hybrid) code, I got a full tablescan that did some processing of EVERY column in the table and then threw most of the results away. Here are fragments of the SQL – old version first:


select /*+  
            no_parallel(t) no_parallel_index(t) dbms_stats
            cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring 
            xmlindex_sel_idx_tbl no_substrb_pad  
       */
       count(*), 
       count("ID"), sum(sys_op_opnsize("ID")),      
       count("BAD_COL"), sum(sys_op_opnsize("BAD_COL"))    
       ...
from
       "TEST_USER"."T1" t


select /*+
           full(t)    no_parallel(t) no_parallel_index(t) dbms_stats
           cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
           xmlindex_sel_idx_tbl no_substrb_pad
       */
       to_char(count("ID")),
       to_char(count("BAD_COL")),
       substrb(dump(min("BAD_COL"),16,0,64),1,240),
       substrb(dump(max("BAD_COL"),16,0,64),1,240),
       ...
       count(rowidtochar(rowid)) 
from
       "TEST_USER"."T1" t  /* ACL,TOPN,NIL,NIL,RWID,U,U254U*/

The new code only used the substrb() functions on the bad_col, but all other columns in the table were subject to the to_char(count()).
The old code applied the count() and sys_op_opnsize() to every column in the table.

This initial scan was a bit expensive – and disappointing – for the client since their table had 290 columns (which means intra-block chaining as a minimum) and had been updated so much that 45% of the rows in the table had to be “continued fetches”. I can’t think why every column had to be processed like this, but if they hadn’t been that would have saved a lot of CPU and I/O since the client’s critical column was very near the start of the table.
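As an aside, the “lock stats” step suggested above is just another dbms_stats call (shown here as a sketch against the demo table), with a matching unlock needed before the two-pass gather is repeated:

exec dbms_stats.lock_table_stats(user, 'T1')

-- before the next two-pass gather:
exec dbms_stats.unlock_table_stats(user, 'T1')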

Finally

This problem with the popular value going missing is a known issue, for which there is a bug number, but there is further work going on in the same area which means this particular detail is being rolled into another bug fix. More news when it becomes available.

 

 

Using FBA with Materialized Views

Tom Kyte - Mon, 2018-01-15 05:26
Please refer to the LiveSQL link. NB: some of the statements do not work because the user has insufficient privileges to create and manage Flashback areas in the LiveSQL environment. The code creates a table, inserts data, creates a flashback archive t...
Categories: DBA Blogs

Moving tables ONLINE on filegroup with constraints and LOB data

Yann Neuhaus - Mon, 2018-01-15 00:20

Let’s start this new week by going back to a discussion with one of my customers a couple of days ago about moving several tables into different filegroups. Let’s say that some of them contained LOB data. Let’s add to the game another customer requirement: moving all of them ONLINE to avoid impacting data availability during the migration process. The concerned tables had schema constraints such as primary keys and foreign keys, and non-clustered indexes as well. So a pretty common schema we may deal with daily at customer shops.

Firstly, let’s say that the first topic of the discussion didn’t focus on moving non-clustered indexes to a different filegroup (pretty well known by my customer) but on how to move constraints online without integrity issues. The main reason for that came from different pointers my customer found on the internet saying that such constraints must first be dropped and then recreated (using the MOVE TO clause), and that’s why he was not very confident about moving them without introducing integrity issues.

Let’s illustrate this scenario with the following demonstration. I will use a dbo.bigTransactionHistory2 table that I want to move ONLINE from the PRIMARY to the FG1 filegroup. There is a primary key constraint on the TransactionID column as well as a foreign key on the ProductID column that refers to the ProductID column of the dbo.bigProduct table.

EXEC sp_helpconstraint 'dbo.bigTransactionHistory2';

blog 125 - 1 - bigTransactionHistory2 PK FK

Here is a picture of the indexes existing on the dbo.bigTransactionHistory2 table:

EXEC sp_helpindex 'dbo.bigTransactionHistory2';

blog 125 - 2 - bigTransactionHistory2 indexes

Let’s say that the pk_bigTransactionHistory_TransactionID unique clustered index is tied to the primary key constraint.

Let’s start by using the first approach based on the WITH (MOVE TO …) clause.

ALTER TABLE dbo.bigTransactionHistory2 DROP CONSTRAINT pk_bigTransactionHistory_TransactionID WITH (MOVE TO FG1, ONLINE = ON);

--> No constraint to avoid duplicates

ALTER TABLE dbo.bigTransactionHistory2 ADD CONSTRAINT pk_bigTransactionHistory_TransactionID PRIMARY KEY(TransactionDate, TransactionID)
WITH (ONLINE = ON);

By looking further at the script performed, we may quickly figure out that this approach may allow duplicate entries to be introduced between the drop-constraint step and the step that moves the table to the FG1 filegroup and re-creates the constraint.

We might address this issue by encapsulating the above commands within a transaction. But obviously this method has a cost: we have a good chance of creating a long blocking scenario – depending on the amount of data – leading temporarily to data unavailability. The second drawback concerns performance. Indeed, we first drop the primary key constraint, meaning we drop the underlying clustered index structure in the background. Going this way also implies rebuilding the related non-clustered indexes to update the leaf level with row IDs, and rebuilding them again when re-adding the primary key constraint in the second step.

From my point of view there is a better way to go if we want all the steps to be performed efficiently and ONLINE, including the guarantee that constraints will continue to enforce their checks throughout the moving process.

Firstly, let’s move the primary key by using a one-step command. The same applies to UNIQUE constraints. In fact, moving such a constraint only requires rebuilding the corresponding index with the DROP_EXISTING and ONLINE options, which preserves the constraint’s functionality. In this case, the non-clustered indexes are not touched by the operation because we don’t have to update the leaf level as with the previous method.

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionDate] ASC, [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1];

In addition, the good news is that if we try to introduce a duplicate key while the index is rebuilding on the FG1 filegroup, we get the following error, as expected:

Msg 2627, Level 14, State 1, Line 3
Violation of PRIMARY KEY constraint ‘pk_bigTransactionHistory_TransactionID’.
Cannot insert duplicate key in object ‘dbo.bigTransactionHistory2′. The duplicate key value is (Jan 1 2005 12:00AM, 1).

So now we may safely move the additional structures represented by the non-clustered indexes. We just have to execute the following command to move the corresponding physical structure ONLINE:

CREATE INDEX [idx_bigTransactionHistory2_ProductID]
ON dbo.bigTransactionHistory2 ( ProductID ) 
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON [FG1]

 

Let’s continue with the second scenario, which consists of moving a table with LOB data ONLINE to a different filegroup. Moving such data may be more complex than we might expect. The good news is that SQL Server 2012 introduced ONLINE operation capabilities for this, and my customer runs SQL Server 2014.

For the demonstration let’s go back to the previous demo and introduce a new [other infos] column with VARCHAR(MAX) data. Here is the new definition of the dbo.bigTransactionHistory2 table:

CREATE TABLE [dbo].[bigTransactionHistory2](
	[TransactionID] [bigint] NOT NULL,
	[ProductID] [int] NOT NULL,
	[TransactionDate] [datetime] NOT NULL,
	[Quantity] [int] NULL,
	[ActualCost] [money] NULL,
	[other infos] [varchar](max) NULL,
 CONSTRAINT [pk_bigTransactionHistory_TransactionID] PRIMARY KEY CLUSTERED 
(
	[TransactionID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

Let’s take a look at the table’s underlying structure:

SELECT 
	OBJECT_NAME(p.object_id) AS table_name,
	p.index_id,
	p.rows,
	au.type_desc AS alloc_unit_type,
	au.used_pages,
	fg.name AS fg_name
FROM 
	sys.partitions as p
JOIN 
	sys.allocation_units AS au on p.hobt_id = au.container_id
JOIN	
	sys.filegroups AS fg on fg.data_space_id = au.data_space_id
WHERE
	p.object_id = OBJECT_ID('bigTransactionHistory2')
ORDER BY
	table_name, index_id, alloc_unit_type

 

blog 125 - 3 - bigTransactionHistory2 with LOB

A new LOB_DATA allocation unit type is there and indicates the table contains LOB data for all the index structures. At this stage, we may think that using the previous method to move the unique clustered index online is sufficient, but it is not, according to the output below:

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1];

blog 125 - 4 - bigTransactionHistory2 move LOB data

In fact, only data in IN_ROW_DATA allocation units moved from the PRIMARY to the FG1 filegroup. In this context, moving LOB data is a non-trivial operation and I had to use a solution based on one proposed here by Kimberly L. Tripp from SQLSkills (definitely one of my favorite sources for tricky scenarios). So partitioning is the way to go. Following the solution from SQLSkills, I created a temporary partition function and scheme as shown below:

SELECT MAX([TransactionID])
FROM dbo.bigTransactionHistory2
-- 6910883
GO


CREATE PARTITION FUNCTION pf_bigTransaction_history2_temp (BIGINT)
AS RANGE RIGHT FOR VALUES (6920000)
GO

CREATE PARTITION SCHEME ps_bigTransaction_history2_temp
AS PARTITION pf_bigTransaction_history2_temp
TO ( [FG1], [PRIMARY] )
GO

Applying the scheme to the dbo.bigTransactionHistory2 table will allow us to move all data (IN_ROW_DATA and LOB_DATA) from the PRIMARY to FG1 filegroup as shown below:

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON ps_bigTransaction_history2_temp ([TransactionID])

Looking quickly at the storage configuration confirms that this time all data moved to the FG1 filegroup.

blog 125 - 5 - bigTransactionHistory2 partitioning

Let’s finally remove the temporary partitioning configuration from the table (remember that all operations are performed ONLINE)

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1]

-- Remove underlying partition configuration
DROP PARTITION SCHEME ps_bigTransaction_history2_temp;
DROP PARTITION FUNCTION pf_bigTransaction_history2_temp;
GO

blog 125 - 6 - bigTransactionHistory2 last config

Finally, you can apply the same method for all non-clustered indexes that contain LOB data …

Cheers

 

 

 

 

 

 

 

 

Cet article Moving tables ONLINE on filegroup with constraints and LOB data est apparu en premier sur Blog dbi services.

Spectre and Meltdown on Oracle Public Cloud UEK

Yann Neuhaus - Sun, 2018-01-14 14:12

In the last post I published the strange results I had when testing physical I/O with the latest Spectre and Meltdown patches. Here is the logical I/O side, with SLOB cached reads.

Logical reads

I’ve run some SLOB cache reads with the latest patches, as well as with only KPTI disabled, and with KPTI, IBRS and IBPB disabled.
I am on the Oracle Public Cloud DBaaS with 4 OCPUs.

DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 670,001.2
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 671,145.4
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 672,464.0
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 685,706.7 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 689,291.3 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 689,386.4 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 699,301.3 nopti noibrs noibpb
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 704,773.3 nopti noibrs noibpb
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 704,908.2 nopti noibrs noibpb

This is what I expected: when disabling the mitigation for Meltdown (PTI), and some of the Spectre mitigations (IBRS and IBPB), I get slightly better performance – about 5%. This is with only one SLOB session.

However, with 2 sessions I have something completely different:

DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,235,637.8 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,237,689.6 nopti
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,243,464.3 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,247,257.4 nopti
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,247,257.4 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,251,485.1
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,253,477.0
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,271,986.7

This is not a saturation situation here. My VM shape is 4 OCPUs, which is supposed to be the equivalent of 4 hyperthreaded cores.

And this figure is even worse with 4 sessions (all cores used) and more:

DB Time(s) : 4.0 DB CPU(s) : 4.0 Logical read (blocks) : 2,268,272.3 nopti noibrs noibpb
DB Time(s) : 4.0 DB CPU(s) : 4.0 Logical read (blocks): 2,415,044.8


DB Time(s) : 6.0 DB CPU(s) : 6.0 Logical read (blocks) : 3,353,985.7 nopti noibrs noibpb
DB Time(s) : 6.0 DB CPU(s) : 6.0 Logical read (blocks): 3,540,736.5


DB Time(s) : 8.0 DB CPU(s) : 7.9 Logical read (blocks) : 4,365,752.3 nopti noibrs noibpb
DB Time(s) : 8.0 DB CPU(s) : 7.9 Logical read (blocks): 4,519,340.7

The graph from those is here:
CaptureOPCLIO001

If I compare with the Oracle PaaS I tested last year (https://blog.dbi-services.com/oracle-public-cloud-liops-with-4-ocpu-in-paas/) which was on Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz you can also see a nice improvement here on Intel(R) Xeon(R) CPU E5-2699C v4 @ 2.20GHz.

This test was on 4.1.12-112.14.10.el7uek.x86_64 and Oracle Linux has now released a new update: 4.1.12-112.14.11.el7uek

 

Cet article Spectre and Meltdown on Oracle Public Cloud UEK est apparu en premier sur Blog dbi services.

Docker-CE: How to modify containers with overlays / How to add directories to a standard docker image

Dietrich Schroff - Sun, 2018-01-14 13:01
After some experiments with docker I wanted to run a tomcat with my own configuration (e.g. memory settings, ports, ...).


My first idea was: Download tomcat, configure everything and then build an image.
BUT: After I learned how to use the -v (--volume) flag to add files to a container via the docker command, I wondered whether I could create a new image with only the additional files on top of the standard tomcat docker image.

So the first step is to take a look at all local images:
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
558MB
friendlyhello       latest              976ee2bb47bf        3 days ago          148MB
tomcat              latest              11df4b40749f        8 days ago          558MB
I can use tomcat:latest. (if it is not there just pull it: docker pull tomcat)
The next step is to create a directory and add all the directories which you want to override.
For my example:
mkdir conftomcat
cd conftomcat
mkdir bin

Into the bin directory I put all the files from the standard tomcat container:
# ls bin
bootstrap.jar  catalina-tasks.xml  commons-daemon-native.tar.gz  daemon.sh  setclasspath.sh  startup.sh       tool-wrapper.sh
catalina.sh    commons-daemon.jar  configtest.sh                 digest.sh  shutdown.sh      tomcat-juli.jar  version.sh
Inside catalina.sh I added -Xmx384M.
In conftomcat I created the following Dockerfile:
FROM tomcat:latest
WORKDIR /usr/local/tomcat/bin
ADD status /usr/local/tomcat/webapps/mystatus
ADD bin /usr/local/tomcat/bin
ENTRYPOINT [ "/usr/local/tomcat/bin/catalina.sh" ]
CMD [ "run"]And as you can see i added my index.jsp which is inside status (s. this posting).
Ok, let's see if my plan works:
# docker build -t mytomcat .
Sending build context to Docker daemon  375.8kB
Step 1/6 : FROM tomcat:latest
 ---> 11df4b40749f
Step 2/6 : WORKDIR /usr/local/tomcat/bin
 ---> Using cache
 ---> 5696a9ab99cb
Step 3/6 : ADD status /usr/local/tomcat/webapps/mystatus
 ---> 1bceea5af515
Step 4/6 : ADD bin /usr/local/tomcat/bin
 ---> e8d3a386a7f0
Step 5/6 : ENTRYPOINT [ "/usr/local/tomcat/bin/catalina.sh" ]
 ---> Running in a04038032bb7
Removing intermediate container a04038032bb7
 ---> 4c8fda05df18
Step 6/6 : CMD [ "run"]
 ---> Running in cce378648e7a
Removing intermediate container cce378648e7a
 ---> 72ecfe2aa4a7
Successfully built 72ecfe2aa4a7
Successfully tagged mytomcat:latest
and then start:
docker run -p 4001:8080 mytomcat

Let's check the memory settings:
$ ps aux|grep java
root      2313 20.7  8.0 2418472 81236 ?       Ssl  19:51   0:02 /docker-java-home/jre/bin/java -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Xmx394M -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -classpath /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar -Dcatalina.base=/usr/local/tomcat -Dcatalina.home=/usr/local/tomcat -Djava.io.tmpdir=/usr/local/tomcat/temp org.apache.catalina.startup.Bootstrap start
Yes - changed to 384M.
And check the jsp:



Yippie!
As you can see, I have the standard tomcat running with the memory configuration overridden to 384M. So it should be easy to add certificates, WARs, ... to such a standard container in the same way.
