Feed aggregator

Coding Parallel Processing on 12c

Tom Kyte - Mon, 2018-01-15 23:46
Hello Tom, I gave the below link a try and applied the method on 12c. https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4248554900346593542 But it takes same time as serial processing. Could you please light me up what I'...
Categories: DBA Blogs

The top 5 reasons why you should submit an abstract for APEX at the Great Lakes Oracle Conference (GLOC)

Joel Kallman - Mon, 2018-01-15 18:50
APEX Developer Day at Great Lakes Oracle Conference 2017
The Northeast Ohio Oracle User's Group (NEOOUG) is easily one of my favorite user groups on the planet.  They've been graciously hosting me at their user group events since 2004 (when I first gave a demonstration on Oracle HTML DB 1.5!).  They are a large, active and passionate user group.  In the past 14 years, I've seen them grow from simple user group events to "Training Days" at the Cleveland State University campus to a nicely sized regional conference named Great Lakes Oracle Conference.

If you're into Oracle APEX, either on-premises, or in the Oracle Cloud, I encourage you to submit an abstract to speak at the Great Lakes Oracle Conference.  Here are my top 5 reasons why you should strongly consider this:
  1. There is a real hunger for Oracle APEX content at this conference.  There are countless customers in the immediate region who use Oracle APEX.  Last year, they had the first ever Oracle APEX Developer Day in advance of the conference, and it was sold out (100 attendees)!
  2. It's the largest Oracle user's conference in the Midwest US.  It draws people from all over Ohio, Michigan, Indiana, Kentucky and Pennsylvania.  There will be over 500 attendees at the conference in 2018.
  3. The Great Lakes Oracle Conference routinely gets world-class speakers from all over the world, both Oracle employees and Oracle ACEs.  As a speaker, you would be able to attend any session in any track.
  4. There are numerous tracks at the Great Lakes Oracle Conference, including APEX, Oracle Applications, Business Intelligence, DBA, Database Developer and Data Warehousing.
  5. Cleveland, Ohio is on the North Coast of the US.  There, you can visit Great Lakes Brewing Company, Market Garden Brewery, Platform Beer Company,  and the Rock & Roll Hall of Fame.

I come across so many people who say "why would anyone want to hear me talk about that?"  From case studies to lessons learned to best practices in your environment, it's all interesting and valuable.  Not everyone who attends the APEX sessions at GLOC is an expert, so entry-level sessions are also welcome!

I encourage you to submit an abstract today.  The deadline for abstract submission is February 2, 2018.

Getting Started Provisioning Database Cloud Service Deployments and Java Cloud Service ...

Oracle Cloud Infrastructure offers tailored solutions built from the ground up for integration and elasticity, delivering a solid infrastructure base that can integrate and inter-operate with...

Categories: DBA Blogs

Luxury Retailer Chalhoub Modernizes the Customer Experience with Oracle Point of Service

Oracle Press Releases - Mon, 2018-01-15 11:50
Press Release
Luxury Retailer Chalhoub Modernizes the Customer Experience with Oracle Point of Service Xstore Point-of-Service Delivers a Highly Personalized, Mobile Customer Experience In-Store

NATIONAL RETAIL FEDERATION ANNUAL CONFERENCE – New York—Jan 15, 2018

Today, Oracle announced that luxury retailer Chalhoub has successfully deployed Oracle Retail Xstore Point-of-Service to modernize its customer experience. Chalhoub's Oracle Retail Xstore implementation is the first in the Middle East and the result of a six-month installation process in partnership with Logic Information Systems. The project for the Dubai storefront included deployment of more than 100 registers, 60 of which are mobile, bringing mobile checkout, integrated payment systems and improved store operations to the brand.

Chalhoub started as a family business licensing foreign brands in Damascus, Syria in 1955. It now runs a network of 650 retail stores with fashion and cosmetic lines including Chanel, Louis Vuitton and Christian Louboutin across the Middle East. Today the Chalhoub Group employs more than 12,000 people in 14 countries.

“With help from Logic and Oracle, we migrated from Oracle Retail Point of Service to the latest version of Oracle Retail Xstore Point-of-Service. We can now deliver a modern mobile experience to our customers. By implementing Xstore, we are also empowering our store associates. The goal is to provide a highly personalized and engaging customer experience at the world's finest shoe metropolis, Level Shoes,” said Olivier Leblan, Group Chief Information Officer, Chalhoub.

“The Point of Service system must allow retailers to transact and interact with consumers as they choose. Whether using a traditional register, portable solution, tablet or handheld, it's point of service,” said Ray Carlin, Senior Vice President and General Manager, Oracle Retail. “As our Retail in 4D research shows, more than half (52 percent) of retailers said they are arming their store employees with mobile technology. Congratulations to Chalhoub for deploying a modern customer experience.”

“We are thrilled to partner with Chalhoub and Oracle. Together we delivered on the vision to drive a better customer experience for Level and to establish a foundation to support Chalhoub,” said Saad Khan, General Manager of the Middle East, Logic Information Systems. “We look forward to the continued momentum for the Oracle Retail Xstore platform across the Middle East and Asia. We found the solution to be a great fit for the region.”

To learn more about Chalhoub's implementation of Oracle Retail technology, register here for a webinar on Tuesday, February 20.

Contact Info
Matt Torres
Oracle PR
+1.415.595.1584
matt.torres@oracle.com
Oracle Retail at NRF 2018

Oracle Retail will be showcasing the full suite of Oracle Retail solutions and cloud services at the National Retail Federation Big Show Jan. 14-16, 2018, in New York City at the Jacob K. Javits Convention Center. Oracle Retail will be located at booth #3521. For more information check out: www.oracle.com/retail

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • +1.415.595.1584

Oracle and FreedomPay Deliver Future of Consumer Payments at New York’s Jacob K. Javits Center

Oracle Press Releases - Mon, 2018-01-15 10:50
Press Release
Oracle and FreedomPay Deliver Future of Consumer Payments at New York’s Jacob K. Javits Center Collaboration will Showcase New EMV Contact and Contactless Payment Capabilities at One of the Nation’s Busiest Convention Centers

NATIONAL RETAIL FEDERATION ANNUAL CONFERENCE – New York—Jan 15, 2018

Today at NRF 2018, Oracle Hospitality, a trusted provider of hardware, software and services to hospitality operators, and FreedomPay, a leader in secure commerce technology for lodging, gaming, retail, restaurants, stadiums and other hospitality merchants, announced their collaboration to provide secure payment processing for conference attendees at the Jacob K. Javits Center in New York City. The companies have integrated FreedomPay’s Advanced Commerce Platform with Oracle Hospitality’s MICROS point-of-sale devices to provide full EMV support to more than 175 events annually, supporting over 35,000 global companies.

The customer-facing devices, as well as the handheld pay-at-table devices, support EMV contact and contactless payments—enabling patrons from around the world to easily use their local bank-issued payment cards. Furthermore, contactless EMV support lets customers save valuable time by simply tapping their payment card on the reader rather than inserting it into the device. Together, the combined payment solution enables consumers to use the payment options they prefer.

The integration between FreedomPay and Oracle Hospitality is another example of Oracle delivering additional value for hospitality customers through integrations that extend the value of POS investments. In addition to providing durable and compact point-of-service terminals, Oracle Hospitality offers a fully integrated portfolio of hardware and software solutions that enable food and beverage operations to streamline managerial tasks, increase speed of service and elevate the guest experience.

“The NRF conference is about showcasing the technology that will build future customer experiences and with FreedomPay we’ll be highlighting the potential of contactless EMV payments in the hospitality industry at food vendors across the Jacob K. Javits Center,” said Laura Calin, vice president of strategy, Oracle Hospitality. “Contactless EMV payments represent an opportunity for retailers and food and beverage operators to improve the guest experience by accelerating the checkout process during peak retail seasons and high-volume hospitality applications.”

“Contactless payments deliver a fast, simple and secure experience at the point of sale, while also helping merchants increase speed of service and grow sales volume,” said Dan Sanford, vice president, consumer products, Visa. “We’re pleased that all Javits Center attendees will now be able to tap to pay quickly and easily with their contactless cards, mobile phones, or connected devices.”

The FreedomPay Advanced Commerce Platform is one of the first PCI-validated point-to-point encryption (P2PE) solutions with EMV, NFC, Dynamic Currency Conversion and real-time data capabilities that delivers on a global scale. With P2PE, valuable consumer payment data is protected from the moment the card is inserted into the MICROS point-of-sale device, in transit, and at rest in the merchant’s environment. FreedomPay is a Platinum level member of Oracle PartnerNetwork (OPN).

“Over the past 15 years, FreedomPay developed a strong relationship with the team at Oracle—combining our point-to-point encryption expertise and secure payment processing capabilities with their MICROS terminals,” said Chris Kronenthal, president and chief technology officer at FreedomPay. “This installation at the Javits Center emphasizes the payment protection capabilities that FreedomPay and Oracle are delivering coast-to-coast.”

FreedomPay and Oracle help secure payments for consumers across multiple verticals throughout the United States, including food and beverage, hospitality, retail and travel. The “Secured by FreedomPay” image appearing on Oracle MICROS point-of-sale devices assures customers that their transactions are protected to a standard that exceeds some of the most stringent payment transaction security requirements currently available.

NRF 2018 attendees who spot the “Secured by FreedomPay” image on payment terminals within the Javits Center are encouraged to participate in FreedomPay’s #SecuredbyFreedomPay social media prize giveaway. Additional details are available at http://corporate.freedompay.com/blog_article/show-us-you-are-securedbyfreedompay/.

Oracle Hospitality hardware solutions, including the recently announced Oracle MICROS Compact Workstation 310, will be available for demo at the National Retail Federation Big Show Jan. 14-16, 2018, in New York City at the Jacob K. Javits Convention Center. Oracle Hospitality will be located within the Oracle Retail booth #3521.

Contact Info
Matt Torres
Oracle PR
+1.415.595.1584
matt.torres@oracle.com
Christy Pittman
W2 Communications for FreedomPay
703-877-8108
christy@w2comm.com
About FreedomPay

The FreedomPay Commerce Platform is the best way for merchants to simplify complex payment environments. Validated by the PCI Security Standards Council for Point-to-Point Encryption (P2PE) along with EMV, NFC and DCC capabilities, global leaders in retail, hospitality, gaming, education, healthcare and financial services trust FreedomPay to deliver unmatched security and advanced value-added services. With broad integrations across top point-of-sale systems, device manufacturers and payment processors, supported by rapid API adoption, FreedomPay is driving the future of commerce and customer interaction. For more information, go to www.freedompay.com.

About Oracle Hospitality

Oracle Hospitality brings 35 years of experience in providing technology solutions to food and beverage operators. We provide hardware, software, and services that allow our customers to deliver exceptional guest experiences while maximizing profitability. Our solutions include integrated point-of-sale, loyalty, reporting and analytics, inventory and labor management, all delivered from the cloud to lower IT cost and maximize business agility.

For more information about Oracle Hospitality, please visit www.Oracle.com/Hospitality

About Oracle PartnerNetwork

Oracle PartnerNetwork (OPN) is Oracle’s partner program that provides partners with a differentiated advantage to develop, sell and implement Oracle solutions. OPN offers resources to train and support specialized knowledge of Oracle’s products and solutions and has evolved to recognize Oracle’s growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to be recognized and rewarded for their investment in Oracle Cloud. Partners engaging with Oracle will be able to differentiate their Oracle Cloud expertise and success with customers through the OPN Cloud program—an innovative program that complements existing OPN program levels with tiers of recognition and progressive benefits for partners working with Oracle Cloud. To find out more visit: http://www.oracle.com/partners.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Talk to a Press Contact

Matt Torres

  • +1.415.595.1584

Christy Pittman

  • 703-877-8108

Oracle Celebrates Continued Hardware Innovation with New MICROS Compact Workstation 310 Point-of-Sale

Oracle Press Releases - Mon, 2018-01-15 10:50
Press Release
Oracle Celebrates Continued Hardware Innovation with New MICROS Compact Workstation 310 Point-of-Sale New POS Engineered for Smaller Footprints and Portability to Join Existing Portfolio of Workstations on Display at NRF 2018

NATIONAL RETAIL FEDERATION ANNUAL CONFERENCE – New York—Jan 15, 2018

Oracle today introduced the new Oracle MICROS Compact Workstation 310 point-of-sale (POS) terminal. A high-performance, cost-effective workstation built with superior durability in a sleek, compact form factor, the Oracle MICROS Compact Workstation 310 is ideal for hospitality and retail applications that feature a limited menu or assortment and require a smaller footprint, terminal portability or additional capacity during peak trading or a sporting event. The Oracle MICROS Compact Workstation 310 will be available to demo at the National Retail Federation conference in New York, joining Oracle’s existing hardware portfolio on display at booth #3521, including the Oracle MICROS Workstation 6 and the Oracle MICROS Tablet 720, showcasing Oracle Retail Xstore Point-of-Service with Oracle Retail omnichannel cloud services.

Oracle MICROS Compact Workstation 310

The Oracle MICROS Compact Workstation 310 runs Windows 10® IoT Enterprise with Oracle Hospitality Simphony 2.9.2 HF6, 2.9.3 HF1, 2.10, Simphony FE 1.7.3, and RES 5.5.1, and Oracle Retail Xstore Point of Service v17 software. The all-in-one thin client with its 10.1” display is designed to accommodate limited counter space and afford maximum portability. Its connectivity and operating system enable easy provisioning and management for IT, while delivering a fast, rich, familiar Windows 10 user experience.

“Guest services and the shopping experiences are changing in the hospitality and retail sectors, with consumers demanding more speed and convenience. Oracle is extending our hardware portfolio so that our customers can adapt to those changes,” said Mike Webster, senior vice president and GM Oracle Retail and Hospitality. “The Oracle MICROS Compact Workstation 310 delivers a portable, rugged and intuitive experience that is perfect for scenarios with high volumes of customers and limited menus or assortments including stadiums, pop up stores, mall kiosks, theme parks, sidewalk sales and promotional events.”

Key features of the new Oracle MICROS Compact Workstation 310 include:

  • Best Price Performance: Purpose-built to help ensure businesses aren’t overpaying for the best customer experience, the 310 is engineered end-to-end for stress-free administration and includes a powerful dual-core processor and integrated graphics engine for an exceptional user experience.
  • Built To Last: The 310 is built to withstand extreme temperatures for outdoor use and is protected against impact, dust, grease and grime build-up. The long product lifecycle and low mean time between failures (no moving parts) aim to lower total cost of ownership by reducing the number of refresh cycles.
  • Elegant, Simple Design: Made with a sleek, industrial design and small footprint, the 310 is aesthetically pleasing and maximizes counter space. Its portability also allows for anywhere, anytime transactions to capitalize on profitable locations, and its intuitive user interface is easy to use for full-time or seasonal employees.
  • Easy to Set Up: An end-to-end ecosystem of hardware, software, cloud, and services enables easier solution setup and support. The client applications manager (CAL), advocated offering and Oracle-validated software updates allow easy device provisioning and administration.

The Oracle MICROS Compact Workstation 310 joins Oracle’s existing portfolio of POS workstations, including the Oracle MICROS Workstation 650, the Oracle MICROS Workstation 620 and the Oracle Workstation 610 and complements the Oracle MICROS Tablet 720 Series 7 inch Tablet.

Continued Retail POS Momentum

“Oracle Retail is extending our hardware portfolio to deliver the innovation of the Oracle Retail Xstore Point-of-Service platform with omnichannel cloud services. The Workstation 310 has ample connectivity for multiple peripherals and is fully supported beginning with Oracle Retail Xstore Point-of-Service,” said Jeff Warren, Vice President Strategy and Solutions, Oracle Retail. “This allows the retailer to have a consistent software implementation, with the benefits of Xstore, in a portable, small-footprint point-of-sale workstation.”

Global customers continue to adopt Oracle Xstore POS with Oracle MICROS hardware including:

Contact Info
Matt Torres
Oracle PR
+1.415.595.1584
matt.torres@oracle.com
About Oracle Hospitality

Oracle Hospitality brings 35 years of experience in providing technology solutions to food and beverage operators. We provide hardware, software, and services that allow our customers to deliver exceptional guest experiences while maximizing profitability. Our solutions include integrated point-of-sale, loyalty, reporting and analytics, inventory and labor management, all delivered from the cloud to lower IT cost and maximize business agility. For more information about Oracle Hospitality, please visit www.Oracle.com/Hospitality.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • +1.415.595.1584

Private Functions and ACCESSIBLE BY Packages in 12c

The Anti-Kyte - Mon, 2018-01-15 07:48

My recent post about PLS-00231 prompted an entirely reasonable question from Andrew:

“OK so the obvious question why [can’t you reference a private function in SQL] and doesn’t that defeat the objective of having it as a private function, and if so what about other ways of achieving the same goal ?”

I’ll be honest – that particular post was really just a note to self. I tend to write package members as public initially so that I can test them by calling them directly.
Once I’ve finished coding the package, I’ll then go through and make all of the helper package members private. My note was simply to remind myself that the PLS-00231 error when compiling a package usually means that I’ve referenced a function in a SQL statement and then made it private.

So, we know that a PL/SQL function can only be called in a SQL statement if it’s a schema-level object or it’s defined in the package header, because that’s the definition of a public function in PL/SQL. Or at least it was…

In formulating an answer to Andrew’s question, it became apparent that the nature of private functions has evolved a bit in 12c.

So, what I’m going to look at here is:

  • What are Private and Public package members in PL/SQL and why you might want to keep a package member private
  • How 12c language features change our definition of private and public in terms of PL/SQL objects
  • Hopefully provide some up-to-date answers for Andrew

Private and Public in the olden days

As most real-world PL/SQL functions are written within the context of a package, this is where we’ll focus our attention.

From the time that PL/SQL stored program units were introduced into Oracle, right up to and including 11g, the definition was simple.

A PL/SQL package member (function or procedure) was public if its specification was declared in the package header.
Otherwise, it was private.
A private package member can only be referenced from inside its package.

A private package member might be used to encapsulate some functionality that is used in multiple places inside your package but not outside of it.
These “helper” functions tend to be quite common.
Another reason for using a private function would be to reduce clutter in the package signature. If your package is serving as an API to some business functionality, having few public members as entry points helps to ensure that the API is used as intended.

Of course, a private package member cannot be referenced in a SQL query, even from inside the package…
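
To make this concrete, here’s a minimal sketch of that pre-12c behaviour (the package and names are invented for illustration). The private helper can be called from PL/SQL inside the package, but the commented-out SQL reference would fail to compile with PLS-00231:

create or replace package quotes as
    procedure show_quote;
end quotes;
/

create or replace package body quotes as
    -- private : no declaration in the package header
    function quote_text return varchar2 is
    begin
        return 'A private function';
    end quote_text;

    procedure show_quote is
        v_quote varchar2(100);
    begin
        v_quote := quote_text; -- fine : a straight PL/SQL call
        dbms_output.put_line(v_quote);

        -- the following would fail at compile time with
        -- PLS-00231: function 'QUOTE_TEXT' may not be used in SQL
        -- select quote_text into v_quote from dual;
    end show_quote;
end quotes;
/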

Changes in 12c and (probably) beyond

The ability to use PL/SQL constructs in SQL WITH clauses, provided by 12c, manages to take some of the certainty out of our definition of public and private. For example…

with function catchphrase return varchar2 is
    begin
        return 'I have a cunning plan which cannot fail';
    end;
select catchphrase 
from dual
/

…in 12c rewards you with:

CATCHPHRASE                                       
--------------------------------------------------
I have a cunning plan which cannot fail

Possibly more significant is the ability to create packages that are usable only by certain other stored program units, using the ACCESSIBLE BY clause.

Using this new feature, we can split out our helper package members from the main package :

create or replace package baldrick_helper 
    accessible by (package baldrick)
as
    function catchphrase return varchar2;
end baldrick_helper;
/

create or replace package body baldrick_helper 
as    
    function catchphrase return varchar2
    is
    begin
        return 'I have a cunning plan which cannot fail';
    end catchphrase;
end baldrick_helper;
/

As well as reducing the size of individual packages, this should also mean that we can now reference the catchphrase function directly in a SQL statement, right? After all, it’s declared in the package header.

create or replace package baldrick 
as
    procedure cunning_plan;
end baldrick;
/

create or replace package body baldrick as
    procedure cunning_plan is
        optimism varchar2(100);
    begin
        select baldrick_helper.catchphrase
        into optimism
        from dual;
        
        dbms_output.put_line(optimism);
    end cunning_plan;
end baldrick;
/

This compiles without error. However, when we try to run it we get:

set serveroutput on size unlimited
exec baldrick.cunning_plan;

ORA-06553: PLS-904: insufficient privilege to access object BALDRICK_HELPER
ORA-06512: at "MIKE.BALDRICK", line 5
ORA-06512: at line 1

Although the function is declared in the package header, it remains effectively private due to the ACCESSIBLE BY whitelist. Therefore, if you want to reference it, you need to do so in straight PL/SQL:

create or replace package body baldrick as
    procedure cunning_plan is
        optimism varchar2(100);
    begin
    optimism := baldrick_helper.catchphrase;
        
        dbms_output.put_line(optimism);
    end cunning_plan;
end baldrick;
/

This works as expected:

set serveroutput on size unlimited
exec baldrick.cunning_plan;

I have a cunning plan which cannot fail


PL/SQL procedure successfully completed.
Answers for Andrew

If your goal is to reference a PL/SQL package member in a SQL statement then it must be public.
In 12c this means it must be declared in the header of a package which is not defined using an ACCESSIBLE BY clause.

On the other hand, if your goal is to keep your package member private then you cannot reference it in a SQL statement.
In 12c, you do have the option of re-defining it in a with clause as mentioned earlier. However, this only works in straight SQL.
As far as code in a package is concerned, you can’t use an in-line with clause as a wrapper for the call to the private function like this…

create or replace package body baldrick as
    procedure cunning_plan is
        optimism varchar2(100);
    begin
        with function cheating return varchar2 is
        begin
            return baldrick_helper.catchphrase;
        end;
        select cheating
        into optimism
        from dual;
        dbms_output.put_line(optimism);
    end cunning_plan;
end baldrick;
/

…because it’s not currently supported in PL/SQL.

Histogram Hassle

Jonathan Lewis - Mon, 2018-01-15 07:01

I came across a simple performance problem recently that ended up highlighting a problem with the 12c hybrid histogram algorithm. It was a problem that I had mentioned in passing a few years ago, but only in the context of Top-N histograms and without paying attention to the consequences. In fact I should have noticed the same threat in a recent article by Maria Colgan that mentioned the problems introduced in 12c by the option “for all columns size repeat”.

So here’s the context (note – all numbers used in this example are approximations to make the arithmetic obvious).  The client had a query with a predicate like the following:

    t4.columnA = :b1
and t6.columnB = :b2

The optimizer was choosing to drive the query through an indexed access path into t6, which returned ca. 1,000,000 rows before joining (two tables later) to t4, at which point all but a couple of rows were eliminated – typical execution time was in the order of tens of minutes. A /*+ leading(t4) */ hint to start on t4 with an index that returned two rows reduced the response time to the classic “sub-second”.

The problem had arisen because the optimizer had estimated a cardinality of 2 rows for the index on t6 and the reason for this was that, on average, that was the correct number. There were 2,000,000 rows in the table with 1,000,000 distinct values. It was just very unlucky that one of the values appeared 1,000,000 times and that was the value the users always wanted to query – and there was no histogram on the column to tell the optimizer that there was a massive skew in the data distribution.

Problem solved – all I had to do was set a table preference for this table to add a histogram to this column and gather stats. Since there were so many distinct values and so much “non-popular” data in the table the optimizer should end up with a hybrid histogram that would highlight this value. I left instructions for the required test and waited for the email telling me that my suggestion was brilliant and the results were fantastic… I got an email telling me it hadn’t worked.
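
For reference, a table preference of the kind I had in mind could be set like this (a sketch only – the schema name and the exact method_opt value are assumptions based on the model below):

begin
        dbms_stats.set_table_prefs(
                ownname => 'TEST_USER',
                tabname => 'T1',
                pname   => 'METHOD_OPT',
                pvalue  => 'for all columns size 1 for columns bad_col size 254'
        );
end;
/

With the preference in place, any subsequent default gather_table_stats() call picks up the histogram request automatically.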

Here’s a model of the situation – I’ve created a table with 2 million rows and a column where every other row contains the same value but otherwise contains the rownum. Because the client code was using a varchar2() column I’ve done the same here, converting the numbers to character strings left-padded with zeros. There are a few rows (about 20) where the column value is higher than the very popular value.


rem
rem     Script:         histogram_problem_12c.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2018
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2
rem

create table t1
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 2e4
)
select
        rownum  as id,
        case
                when mod(rownum,2) = 0
                        then '999960'
                        else lpad(rownum,6,'0')
        end     as bad_col
from
        generator       v1,
        generator       v2
where
        rownum <= 2e6
;

Having created the data I’m going to create a histogram on the bad_col – specifying 254 buckets – then query user_tab_histograms for the resulting histogram (from which I’ll delete a huge chunk of boring rows in the middle):


begin

        dbms_stats.gather_table_stats(
                ownname         => 'TEST_USER',
                tabname         => 'T1',
                method_opt      => 'for columns bad_col size 254'
        );

end;
/

select
        column_name, histogram, sample_size
from
        user_tab_columns
where
        table_name = 'T1'
;

column end_av format a12

select
        endpoint_number         end_pt,
        to_char(endpoint_value,'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx') end_val,
        endpoint_actual_value   end_av,
        endpoint_repeat_count   end_rpt
from
        user_tab_histograms
where
        table_name = 'T1'
and     column_name = 'BAD_COL'
order by
        endpoint_number
;


COLUMN_NAME          HISTOGRAM             Sample
-------------------- --------------- ------------
BAD_COL              HYBRID                 5,513
ID                   NONE               2,000,000

    END_PT END_VAL                         END_AV          END_RPT
---------- ------------------------------- ------------ ----------
         1  303030303031001f0fe211e0800000 000001                1
        12  3030383938311550648a5e3d200000 008981                1
        23  303135323034f8f5cbccd2b4a00000 015205                1
        33  3032333035311c91ae91eb54000000 023051                1
        44  303239373236f60586ef3a0ae00000 029727                1
...
      2685  3938343731391ba0f38234fde00000 984719                1
      2695  39393235303309023378c0a1400000 992503                1
      2704  3939373537370c2db4ae83e2000000 997577                1
      5513  393939393938f86f9b35437a800000 999999                1

254 rows selected.

So we have a hybrid histogram, we’ve sampled 5,513 rows to build the histogram, we have 254 buckets in the histogram report, and the final row in the histogram is end point 5513 (matching the sample size). The first row of the histogram shows us the (real) low value in the column and the last row of the histogram reports the (real) high value. But there’s something very odd about the histogram – we know that ‘999960’ is the one popular value, occurring 50% of the time in the data, but it doesn’t appear in the histogram at all.

Looking more closely we see that every bucket covers a range of about 11 (sometimes 9 or 10) rows from the sample, and the highest value in each bucket appears just once; but the last bucket covers 2,809 rows from the sample with the highest value in the bucket appearing just once. We expect a hybrid histogram to have buckets which (at least initially) are all roughly the same size – i.e. “sample size”/”number of buckets” – with some buckets being larger by something like the amount that appears in their repeat count, so it doesn’t seem right that we have an enormous bucket with a repeat count of just 1. Something is broken.

The problem is that the sample didn’t find the low and high values for the column – although the initial full tablescan did, of course – so Oracle has “injected” the low and high values into the histogram fiddling with the contents of the first and last buckets. At the bottom end of the histogram this hasn’t really caused any problems (in our case), but at the top end it has taken the big bucket for our very popular ‘999960’ and apparently simply replaced the value with the high value of ‘999999’ and a repeat count of 1.

As an indication of the truth of this claim, here are the last few rows of the histogram if I repeat the experiment but, before gathering the histogram, delete the rows where bad_col is greater than ‘999960’. (Oracle’s sample is random, of course, and has changed slightly for this run.)

    END_PT END_VAL                         END_AV          END_RPT
---------- ------------------------------- ------------ ----------
...
      2641  3938373731371650183cf7a0a00000 987717                1
      2652  3939353032310e65c1acf984a00000 995021                1
      2661  393938393433125319cc9f5ba00000 998943                1
      5426  393939393630078c23b063cf600000 999960             2764

Similarly, if I inserted a few hundred rows with a higher value than my popular value (in this case I thought 500 rows would be a fairly safe bet as the sample was about one in 360 rows) I got a histogram in which the popular value kept a bucket of its own just ahead of the final (high value) bucket, so the problem of that bucket being hacked to the high value was far less significant:


    END_PT END_VAL                         END_AV          END_RPT
---------- ------------------------------- ------------ ----------
...
      2718  393736313130fe68d8cfd6e4000000 976111                1
      2729  393836373630ebfe9c2b7b94c00000 986761                1
      2740  39393330323515efa3c99771600000 993025                1
      5495  393939393630078c23b063cf600000 999960             2747
      5497  393939393938f86f9b35437a800000 999999                1

Bottom line, then: if you have an important popular value in a column and there aren’t very many rows with a higher value, you may find that Oracle loses sight of the popular value as it fudges the column’s high value into the final bucket.

Workaround

I did consider writing a bit of PL/SQL for the client to fake a realistic frequency histogram, but decided that that wouldn’t be particularly friendly to future DBAs who might have to cope with changes. Luckily the site doesn’t gather stats using the automatic scheduler job and only rarely updates stats anyway, so I suggested we create a histogram on the column using an estimate_percent of 100. This took about 8 minutes to run – for reasons that I will go into in a moment – after which I suggested we lock stats on the table and document the fact that when stats are collected on this table it’s got to be a two-pass job – the normal gather with its auto_sample_size to start with, then a 100% sample for this column to gather the histogram:


begin
        dbms_stats.gather_table_stats(
                user,
                't1',
                method_opt       => 'for columns bad_col size 254',
                estimate_percent => 100,
                cascade          => false
        );
end;
/

    END_PT END_VAL                         END_AV          END_RPT
---------- ------------------------------- ------------ ----------
...
       125  39363839393911e01d15b75c600000 968999                0
       126  393834373530e98510b6f19a000000 984751                0
       253  393939393630078c23b063cf600000 999960                0
       254  393939393938f86f9b35437a800000 999999                0

129 rows selected.
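
The “lock stats” step I suggested is a one-liner; a sketch of the sequence (using the table from the model – the client’s actual schema obviously differs) might be:

begin
        -- stop any scheduled gather from overwriting the 100% histogram
        dbms_stats.lock_table_stats(user, 't1');
end;
/

-- two-pass refresh when stats do need updating: unlock, do the normal
-- auto_sample_size gather plus the 100% histogram pass, then lock again
-- exec dbms_stats.unlock_table_stats(user, 't1')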

This took a lot longer, of course, and produced an old-style height-balanced histogram. Part of the time came from the increased volume of data that had to be processed, part of it came from a surprise (which also appeared, in a different guise, in the code that created the original hybrid histogram).

I had specifically chosen the method_opt to gather for nothing but the single column. In fact whether I forced the “legacy” (height-balanced) code or the modern (hybrid) code, I got a full tablescan that did some processing of EVERY column in the table and then threw most of the results away. Here are fragments of the SQL – old version first:


select /*+  
            no_parallel(t) no_parallel_index(t) dbms_stats
            cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring 
            xmlindex_sel_idx_tbl no_substrb_pad  
       */
       count(*), 
       count("ID"), sum(sys_op_opnsize("ID")),      
       count("BAD_COL"), sum(sys_op_opnsize("BAD_COL"))    
       ...
from
       "TEST_USER"."T1" t


select /*+
           full(t)    no_parallel(t) no_parallel_index(t) dbms_stats
           cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
           xmlindex_sel_idx_tbl no_substrb_pad
       */
       to_char(count("ID")),
       to_char(count("BAD_COL")),
       substrb(dump(min("BAD_COL"),16,0,64),1,240),
       substrb(dump(max("BAD_COL"),16,0,64),1,240),
       ...
       count(rowidtochar(rowid)) 
from
       "TEST_USER"."T1" t  /* ACL,TOPN,NIL,NIL,RWID,U,U254U*/

The new code only used the substrb() functions on the bad_col, but all other columns in the table were subject to the to_char(count()).
The old code applied the count() and sys_op_opnsize() to every column in the table.

This initial scan was a bit expensive – and disappointing – for the client since their table had 290 columns (which means intra-block chaining as a minimum) and had been updated so much that 45% of the rows in the table had to be “continued fetches”. I can’t think why every column had to be processed like this, but if they hadn’t been that would have saved a lot of CPU and I/O since the client’s critical column was very near the start of the table.

Finally

This problem with the popular value going missing is a known issue, for which there is a bug number, but there is further work going on in the same area which means this particular detail is being rolled into another bug fix. More news when it becomes available.

 

 

Using FBA with Materialized Views

Tom Kyte - Mon, 2018-01-15 05:26
Please refer to the LiveSQL link. NB some of the statements do not work because the user has insufficient privs. to create and manage Flashback areas in the LiveSQL environment. The code creates a table, inserts data, creates a flashback archive t...
Categories: DBA Blogs

Moving tables ONLINE on filegroup with constraints and LOB data

Yann Neuhaus - Mon, 2018-01-15 00:20

Let’s start this new week by going back to a discussion with one of my customers a couple of days ago about moving several tables into different filegroups. Some of the tables contained LOB data, and there was another customer requirement to add to the game: move all of them ONLINE, to avoid impacting data availability during the migration process. The tables concerned had constraints (primary keys and foreign keys) as well as non-clustered indexes. So, a pretty common schema we may deal with daily at customer shops.

Firstly, the discussion didn’t focus on moving non-clustered indexes to a different filegroup (pretty well known to my customer) but on how to move constraints online without integrity issues. The main reason was that various pointers my customer had found on the internet say you must first drop such constraints and then recreate them (using the MOVE TO clause), and that’s why he was not very confident about moving them without introducing integrity issues.

Let’s illustrate this scenario with the following demonstration. I will use a dbo.bigTransactionHistory2 table that I want to move ONLINE from the PRIMARY to the FG1 filegroup. There is a primary key constraint on the TransactionID column as well as a foreign key on the ProductID column that refers to the ProductID column of the dbo.bigProduct table.

EXEC sp_helpconstraint 'dbo.bigTransactionHistory2';

blog 125 - 1 - bigTransactionHistory2 PK FK

Here is a picture of the indexes that exist on the dbo.bigTransactionHistory2 table:

EXEC sp_helpindex 'dbo.bigTransactionHistory2';

blog 125 - 2 - bigTransactionHistory2 indexes

Let’s say that the pk_bigTransactionHistory_TransactionID unique clustered index is tied to the primary key constraint.

Let’s start with the first approach, based on the WITH (MOVE TO …) clause.

ALTER TABLE dbo.bigTransactionHistory2 DROP CONSTRAINT pk_bigTransactionHistory_TransactionID WITH (MOVE TO FG1, ONLINE = ON);

--> No constraint to avoid duplicates

ALTER TABLE dbo.bigTransactionHistory2 ADD CONSTRAINT pk_bigTransactionHistory_TransactionID PRIMARY KEY(TransactionDate, TransactionID)
WITH (ONLINE = ON);

Looking further at the script above, we may quickly figure out that this approach can let duplicate entries slip in between the drop-constraint step (which moves the table to the FG1 filegroup) and the create-constraint step.

We might address this issue by encapsulating the above commands within a transaction, as in the sketch below. But obviously this method has a cost: we have a good chance of creating a long blocking scenario – depending on the amount of data – leading temporarily to data unavailability. The second drawback concerns performance. Indeed, we first drop the primary key constraint, meaning we drop the underlying clustered index structure in the background. Going this way implies rebuilding the related non-clustered indexes to update their leaf levels with row IDs, and then rebuilding them again when re-adding the primary key constraint in the second step.
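
For illustration, the transactional variant might look like this (a sketch only: it prevents duplicates from slipping in, but it holds schema modification locks until the COMMIT, which largely defeats the ONLINE goal):

BEGIN TRANSACTION;

    -- writers are blocked until COMMIT
    ALTER TABLE dbo.bigTransactionHistory2
        DROP CONSTRAINT pk_bigTransactionHistory_TransactionID
        WITH (MOVE TO FG1, ONLINE = ON);

    ALTER TABLE dbo.bigTransactionHistory2
        ADD CONSTRAINT pk_bigTransactionHistory_TransactionID
        PRIMARY KEY (TransactionDate, TransactionID)
        WITH (ONLINE = ON);

COMMIT TRANSACTION;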

From my point of view there is a better way to perform all the steps efficiently and ONLINE, with the guarantee that the constraints continue to be enforced throughout the move.

Firstly, let’s move the primary key with a one-step command. The same applies to UNIQUE constraints. In fact, moving such a constraint only requires rebuilding the corresponding index with the DROP_EXISTING and ONLINE options, which preserves the constraint’s functionality. In this case, my non-clustered indexes are not touched by the operation because we don’t have to update their leaf levels as with the previous method.

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionDate] ASC, [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1];

In addition, the good news is that if we try to introduce a duplicate key while the index is rebuilding on the FG1 filegroup, we face the following error, as expected:

Msg 2627, Level 14, State 1, Line 3
Violation of PRIMARY KEY constraint ‘pk_bigTransactionHistory_TransactionID’.
Cannot insert duplicate key in object ‘dbo.bigTransactionHistory2′. The duplicate key value is (Jan 1 2005 12:00AM, 1).

So now we may safely move the additional structures, i.e. the non-clustered indexes. We just have to execute the following command to move the corresponding physical structure ONLINE:

CREATE INDEX [idx_bigTransactionHistory2_ProductID]
ON dbo.bigTransactionHistory2 ( ProductID ) 
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON [FG1]

 

Let’s continue with the second scenario, which consisted of moving a table ONLINE to a different filegroup with LOB data. Moving such data may be more complex than we might expect. The good news is that SQL Server 2012 introduced ONLINE operation capabilities for indexes containing LOB data, and my customer runs SQL Server 2014.

For the demonstration, let’s go back to the previous demo and introduce a new [other infos] column with VARCHAR(MAX) data. Here is the new definition of the dbo.bigTransactionHistory2 table:

CREATE TABLE [dbo].[bigTransactionHistory2](
	[TransactionID] [bigint] NOT NULL,
	[ProductID] [int] NOT NULL,
	[TransactionDate] [datetime] NOT NULL,
	[Quantity] [int] NULL,
	[ActualCost] [money] NULL,
	[other infos] [varchar](max) NULL,
 CONSTRAINT [pk_bigTransactionHistory_TransactionID] PRIMARY KEY CLUSTERED 
(
	[TransactionID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

Let’s take a look at the table’s underlying structure:

SELECT 
	OBJECT_NAME(p.object_id) AS table_name,
	p.index_id,
	p.rows,
	au.type_desc AS alloc_unit_type,
	au.used_pages,
	fg.name AS fg_name
FROM 
	sys.partitions as p
JOIN 
	sys.allocation_units AS au on p.hobt_id = au.container_id
JOIN	
	sys.filegroups AS fg on fg.data_space_id = au.data_space_id
WHERE
	p.object_id = OBJECT_ID('bigTransactionHistory2')
ORDER BY
	table_name, index_id, alloc_unit_type

 

blog 125 - 3 - bigTransactionHistory2 with LOB

A new LOB_DATA allocation unit type is there, indicating that the table contains LOB data for all the index structures. At this stage, we might think that the previous way of moving the unique clustered index online is sufficient, but it is not, according to the output below:

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1];

blog 125 - 4 - bigTransactionHistory2 move LOB data

In fact, only data in the IN_ROW_DATA allocation units moved from the PRIMARY to the FG1 filegroup. In this context, moving LOB data is a non-trivial operation and I had to use a solution based on one proposed by Kimberly L. Tripp from SQLSkills (definitely one of my favorite sources for tricky scenarios). So partitioning is the way to go. Following the SQLSkills solution, I created a temporary partition function and scheme as shown below:

SELECT MAX([TransactionID])
FROM dbo.bigTransactionHistory2
-- 6910883
GO


CREATE PARTITION FUNCTION pf_bigTransaction_history2_temp (BIGINT)
AS RANGE RIGHT FOR VALUES (6920000)
GO

CREATE PARTITION SCHEME ps_bigTransaction_history2_temp
AS PARTITION pf_bigTransaction_history2_temp
TO ( [FG1], [PRIMARY] )
GO

Applying the scheme to the dbo.bigTransactionHistory2 table will allow us to move all data (IN_ROW_DATA and LOB_DATA) from the PRIMARY to FG1 filegroup as shown below:

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON ps_bigTransaction_history2_temp ([TransactionID])

Looking quickly at the storage configuration confirms that this time all the data moved to the FG1 filegroup.

blog 125 - 5 - bigTransactionHistory2 partitioning

Let’s finally remove the temporary partitioning configuration from the table (remember that all operations are performed ONLINE):

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1]

-- Remove underlying partition configuration
DROP PARTITION SCHEME ps_bigTransaction_history2_temp;
DROP PARTITION FUNCTION pf_bigTransaction_history2_temp;
GO

blog 125 - 6 - bigTransactionHistory2 last config

Finally, you can apply the same method to all non-clustered indexes that contain LOB data, as sketched below.
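
For example, a non-clustered index that carries the LOB column as an included column could be moved with the same two rebuilds (a sketch only – the INCLUDE list is hypothetical, and it assumes the temporary partition function and scheme are still in place at this point):

-- step 1: rebuild on the temporary partition scheme to move both
-- the IN_ROW_DATA and LOB_DATA allocation units
CREATE INDEX [idx_bigTransactionHistory2_ProductID]
ON dbo.bigTransactionHistory2 ( ProductID )
INCLUDE ( [other infos] )
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON ps_bigTransaction_history2_temp ([TransactionID]);

-- step 2: rebuild once more on the target filegroup before dropping
-- the temporary partition function and scheme
CREATE INDEX [idx_bigTransactionHistory2_ProductID]
ON dbo.bigTransactionHistory2 ( ProductID )
INCLUDE ( [other infos] )
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON [FG1];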

Cheers

Cet article Moving tables ONLINE on filegroup with constraints and LOB data est apparu en premier sur Blog dbi services.

Spectre and Meltdown on Oracle Public Cloud UEK

Yann Neuhaus - Sun, 2018-01-14 14:12

In the last post I published the strange results I had when testing physical I/O with the latest Spectre and Meltdown patches. Here is the logical I/O side, with SLOB cached reads.

Logical reads

I’ve run some SLOB cached reads with the latest patches, as well as with only KPTI disabled, and with KPTI, IBRS and IBPB disabled.
I am on the Oracle Public Cloud DBaaS with 4 OCPUs.

DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 670,001.2
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 671,145.4
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 672,464.0
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 685,706.7 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 689,291.3 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 689,386.4 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 699,301.3 nopti noibrs noibpb
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 704,773.3 nopti noibrs noibpb
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 704,908.2 nopti noibrs noibpb

This is what I expected: when disabling the mitigation for Meltdown (PTI), and for some of the Spectre (IBRS and IBPB), I have a slightly better performance – about 5%. This is with only one SLOB session.

However, with 2 sessions I have something completely different:

DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,235,637.8 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,237,689.6 nopti
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,243,464.3 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,247,257.4 nopti
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,247,257.4 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,251,485.1
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,253,477.0
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,271,986.7

This is not a saturation situation here. My VM shape is 4 OCPUs, which is supposed to be the equivalent of 4 hyperthreaded cores.

And this figure is even worse with 4 sessions (all cores used) and more:

DB Time(s) : 4.0 DB CPU(s) : 4.0 Logical read (blocks) : 2,268,272.3 nopti noibrs noibpb
DB Time(s) : 4.0 DB CPU(s) : 4.0 Logical read (blocks): 2,415,044.8


DB Time(s) : 6.0 DB CPU(s) : 6.0 Logical read (blocks) : 3,353,985.7 nopti noibrs noibpb
DB Time(s) : 6.0 DB CPU(s) : 6.0 Logical read (blocks): 3,540,736.5


DB Time(s) : 8.0 DB CPU(s) : 7.9 Logical read (blocks) : 4,365,752.3 nopti noibrs noibpb
DB Time(s) : 8.0 DB CPU(s) : 7.9 Logical read (blocks): 4,519,340.7

The graph from those is here:
CaptureOPCLIO001

If I compare with the Oracle PaaS I tested last year (https://blog.dbi-services.com/oracle-public-cloud-liops-with-4-ocpu-in-paas/) which was on Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz you can also see a nice improvement here on Intel(R) Xeon(R) CPU E5-2699C v4 @ 2.20GHz.

This test was on 4.1.12-112.14.10.el7uek.x86_64 and Oracle Linux has now released a new update: 4.1.12-112.14.11.el7uek
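
As a side note, the “nopti”, “noibrs” and “noibpb” labels above refer to kernel boot parameters. A quick sketch of how to check and change them on this kind of VM (the GRUB file location may differ between distributions):

# show the kernel command line the running system booted with
cat /proc/cmdline

# to disable the mitigations on this UEK kernel, the parameters
# nopti noibrs noibpb were appended to GRUB_CMDLINE_LINUX in
# /etc/default/grub, then the grub configuration was rebuilt
# and the VM rebooted:
# grub2-mkconfig -o /boot/grub2/grub.cfg && reboot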

 

Cet article Spectre and Meltdown on Oracle Public Cloud UEK est apparu en premier sur Blog dbi services.

Docker-CE: How to modify containers with overlays / How to add directories to a standard docker image

Dietrich Schroff - Sun, 2018-01-14 13:01
After some experiments with Docker I wanted to run a Tomcat with my own configuration (e.g. memory settings, ports, ...).


My first idea was: download Tomcat, configure everything and then build an image.
BUT: after I learned how to use the -v (--volume) flag for adding files to a container via the docker command, I wondered whether I could instead create a new image containing only the additional files on top of the standard tomcat docker image.
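(For reference, the -v approach looks like this – a sketch with an assumed local file path – mounting a file over the image's copy at container start:

docker run -p 4001:8080 \
  -v $(pwd)/conf/server.xml:/usr/local/tomcat/conf/server.xml \
  tomcat:latest

The drawback is that every docker run has to repeat the mounts, which is what makes baking the files into an image attractive.)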

So the first step is to take a look at all local images:
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
friendlyhello       latest              976ee2bb47bf        3 days ago          148MB
tomcat              latest              11df4b40749f        8 days ago          558MB
I can use tomcat:latest (if it is not there, just pull it: docker pull tomcat).
The next step is to create a directory and add all the directories which you want to override.
For my example:
mkdir conftomcat
cd conftomcat
mkdir bin
Into the bin directory I put all the files from the standard tomcat container:
# ls bin
bootstrap.jar  catalina-tasks.xml  commons-daemon-native.tar.gz  daemon.sh  setclasspath.sh  startup.sh       tool-wrapper.sh
catalina.sh    commons-daemon.jar  configtest.sh                 digest.sh  shutdown.sh      tomcat-juli.jar  version.sh
Inside catalina.sh I added -Xmx384M.
In conftomcat I created the following Dockerfile:
FROM tomcat:latest
WORKDIR /usr/local/tomcat/bin
ADD status /usr/local/tomcat/webapps/mystatus
ADD bin /usr/local/tomcat/bin
ENTRYPOINT [ "/usr/local/tomcat/bin/catalina.sh" ]
CMD [ "run"]And as you can see i added my index.jsp which is inside status (s. this posting).
OK. Let's see if my plan works:
# docker build -t mytomcat .
Sending build context to Docker daemon  375.8kB
Step 1/6 : FROM tomcat:latest
 ---> 11df4b40749f
Step 2/6 : WORKDIR /usr/local/tomcat/bin
 ---> Using cache
 ---> 5696a9ab99cb
Step 3/6 : ADD status /usr/local/tomcat/webapps/mystatus
 ---> 1bceea5af515
Step 4/6 : ADD bin /usr/local/tomcat/bin
 ---> e8d3a386a7f0
Step 5/6 : ENTRYPOINT [ "/usr/local/tomcat/bin/catalina.sh" ]
 ---> Running in a04038032bb7
Removing intermediate container a04038032bb7
 ---> 4c8fda05df18
Step 6/6 : CMD [ "run"]
 ---> Running in cce378648e7a
Removing intermediate container cce378648e7a
 ---> 72ecfe2aa4a7
Successfully built 72ecfe2aa4a7
Successfully tagged mytomcat:latest
and then start it:
docker run -p 4001:8080 mytomcat
Let's check the memory settings:
$ ps aux|grep java
root      2313 20.7  8.0 2418472 81236 ?       Ssl  19:51   0:02 /docker-java-home/jre/bin/java -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Xmx384M -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -classpath /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar -Dcatalina.base=/usr/local/tomcat -Dcatalina.home=/usr/local/tomcat -Djava.io.tmpdir=/usr/local/tomcat/temp org.apache.catalina.startup.Bootstrap start
Yes - changed to 384M.
And check the JSP in the browser (screenshot omitted): Yippie!
As you can see, I have the standard Tomcat running with the configuration overridden to 384M. So it should be easy to add certificates, WARs, etc. to such a standard container in the same way; see the sketch below.
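
For instance, a minimal Dockerfile for dropping a WAR onto the standard image might look like this (the WAR file name is hypothetical):

FROM tomcat:latest
# Tomcat auto-deploys WAR files placed in webapps/
ADD myapp.war /usr/local/tomcat/webapps/myapp.war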

Will you help me build the zoo of programming languages?

Amis Blog - Sun, 2018-01-14 12:23

Have you ever come across the following challenge? You have to name something: your own project, your own product, your company, your boat or even your own child. Coming up with the right name is very important, since this is something you have worked on for a long time. So the name has to reflect your inspiration and effort. You put your own blood, sweat and tears into creating this. You spent many long lonely nights finalizing it (just forget the child metaphor here). And now you are ready to launch it. But wait….. it has no name. One of the best ways to name something is to find an example in nature, and animals are powerful and good inspirations for names. Here are 14 programming languages, software products and tools that are named after an animal, grouped together in my zoo of programming languages. And there are probably many more. Feel free to help me and add yours as comments on this article.

Impala

Impala is one of the major tools for querying big data: a query engine that runs on Hadoop. Impala offers scalable parallel database technology to Hadoop, enabling users to issue low-latency SQL queries to data stored in HDFS and Apache HBase without requiring data movement or transformation. Impala is integrated with Hadoop to use the same file and data formats, metadata, security and resource management frameworks used by MapReduce, Apache Hive, Apache Pig and other Hadoop software.
Impala is promoted for analysts and data scientists to perform analytics on data stored in Hadoop via SQL or business intelligence tools. The result is that large-scale data processing (via MapReduce) and interactive queries can be done on the same system using the same data and metadata – removing the need to migrate data sets into specialized systems and/or proprietary formats simply to perform the analysis. https://impala.apache.org/

The other Impala is a medium-sized antelope found in eastern and southern Africa, and the sole member of the genus Aepyceros.

Toad

Toad is a database management toolset from Quest Software that database developers, database administrators, and data analysts use to manage both relational and non-relational databases using SQL. There are Toad products for developers and DBAs, which run on Oracle, SQL Server, IBM DB2 (LUW & z/OS), SAP and MySQL, as well as a Toad product for data preparation, which supports most data platforms. Toad solutions enable data professionals to automate processes, minimize risks and cut project delivery timelines. https://www.quest.com/toad/

The other toad is a common name for certain frogs, especially of the family Bufonidae, that are characterized by dry, leathery skin, short legs, and large bumps covering the parotoid glands. (Wikipedia)

Elk

The ELK stack (now called the Elastic Stack) consists of Elasticsearch, Logstash, and Kibana. Although they've all been built to work exceptionally well together, each one is a separate project driven by the open-source vendor Elastic, which itself began as an enterprise search platform vendor. It has now become a full-service analytics software company, mainly because of the success of the ELK stack. Wide adoption of Elasticsearch for analytics has been the main driver of its popularity.
Elasticsearch is a juggernaut solution for your data extraction problems. A single developer can use it to find the high-value needles underneath all of your data haystacks, so you can put your team of data scientists to work on another project. https://en.wikipedia.org/wiki/Elasticsearch

The other elk, or wapiti (Cervus canadensis), is one of the largest species within the deer family, Cervidae, in the world, and one of the largest land mammals in North America and Eastern Asia. This animal should not be confused with the still larger moose (Alces alces) to which the name “elk” applies in British English and in reference to populations in Eurasia.

Ant

Apache Ant is a software tool for automating software build processes, which originated from the Apache Tomcat project in early 2000. It was created as a replacement for the Unix Make build tool, due to a number of problems with make. It is similar to Make but is implemented in Java, requires the Java platform, and is best suited to building Java projects. The most immediately noticeable difference between Ant and Make is that Ant uses XML to describe the build process and its dependencies, whereas Make uses the Makefile format. By default, the XML file is named build.xml. Ant is an open-source project, released under the Apache License by the Apache Software Foundation. https://ant.apache.org/index.html

The other Ant is a eusocial insect of the family Formicidae and, along with the related wasps and bees, belongs to the order Hymenoptera. Ants evolved from wasp-like ancestors in the Cretaceous period, about 99 million years ago, and diversified after the rise of flowering plants. More than 12,500 of an estimated total of 22,000 species have been classified. They are easily identified by their elbowed antennae and the distinctive node-like structure that forms their slender waists.

Rhino

The Rhino project was started at Netscape in 1997. At the time, Netscape was planning to produce a version of Netscape Navigator written fully in Java and so it needed an implementation of JavaScript written in Java. When Netscape stopped work on Javagator, as it was called, the Rhino project was finished as a JavaScript engine. Since then, a couple of major companies (including Sun Microsystems) have licensed Rhino for use in their products and paid Netscape to do so, allowing work to continue on it. https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Rhino

The other Rhino (rhinoceros, from Greek rhinokeros, meaning 'nose-horned', from rhinos, meaning 'nose', and keratos, meaning 'horn'), commonly abbreviated to rhino, is any of the five extant species of odd-toed ungulates in the family Rhinocerotidae, as well as any of the numerous extinct species. Two of the extant species are native to Africa and three to Southern Asia.

Python

Python is an interpreted high-level programming language for general-purpose programming. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, and a syntax that allows programmers to express concepts in fewer lines of code, notably using significant whitespace. It provides constructs that enable clear programming on both small and large scales. Python features a dynamic type system and automatic memory management. It supports multiple programming paradigms, including object-oriented, imperative, functional and procedural, and has a large and comprehensive standard library. https://en.wikipedia.org/wiki/Python_(programming_language)

The other Python is a genus of nonvenomous snakes in the family Pythonidae, found in Africa and Asia. Until recently, seven extant species were recognised; however, three subspecies have been promoted and a new species recognized. A member of this genus, Python reticulatus, is among the longest snakes and extant reptiles in the world.

Goat

WebGoat or GOAT is a deliberately insecure web application maintained by OWASP, designed to teach web application security lessons. This program is a demonstration of common server-side application flaws. The exercises are intended to be used by people to learn about application security and penetration testing techniques. https://www.owasp.org/index.php/Category:OWASP_WebGoat_Project

The other goat is a member of the family Bovidae and is closely related to the sheep as both are in the goat-antelope subfamily Caprinae. There are over 300 distinct breeds of goat. Goats are one of the oldest domesticated species and have been used for their milk, meat, hair, and skins over much of the world.

Lama

LAMA is a framework for developing hardware-independent, high-performance code for heterogeneous computing systems. It facilitates the development of fast and scalable software that can be deployed on nearly every type of system with a single code base. The framework supports multiple target platforms within a distributed heterogeneous environment. It offers optimized device code on the backend side, high scalability through latency hiding and asynchronous execution across multiple nodes. https://www.libama.org/

The other Lama (Lama glama) is a domesticated South American camelid, widely used as a meat and pack animal by Andean cultures since the Pre-Columbian era.
They are very social animals and live with other llamas as a herd. The wool produced by a llama is very soft and lanolin-free. Llamas are intelligent and can learn simple tasks after a few repetitions. When used as pack animals, they can carry about 25 to 30% of their body weight for 8 to 13 km (5–8 miles).

Serpent

Serpent is one of the high-level programming languages used to write Ethereum contracts. The language, as suggested by its name, is designed to be very similar to Python; it is intended to be maximally clean and simple, combining many of the efficiency benefits of a low-level language with ease-of-use in programming style, and at the same time adding special domain-specific features for contract programming. The latest version of the Serpent compiler, available on GitHub, is written in C++, allowing it to be easily included in any client.

The serpent, or snake, is one of the oldest and most widespread mythological symbols. The word is derived from the Latin serpens, a crawling animal or snake. Snakes have been associated with some of the oldest rituals known to humankind and represent the dual expression of good and evil.

Penguin

PENGUIN is a grammar-based language for programming graphical user interfaces. Code for each thread of control in a multi-threaded application is confined to its own module, promoting modularity and reuse of code. Networks of PENGUIN components (each composed of an arbitrary number of modules) can be used to construct large reactive systems with parallel execution, internal protection boundaries, and plug-compatible communication interfaces. The authors argue that the PENGUIN building-block approach constitutes a more appropriate framework for user interface programming than the traditional Seeheim Model, and discuss the design of PENGUIN and their experiences with applications. https://en.wikipedia.org/wiki/Penguin_Software

The other Penguins (order Sphenisciformes, family Spheniscidae) are a group of aquatic, flightless birds. They live almost exclusively in the Southern Hemisphere, with only one species, the Galápagos penguin, found north of the equator. Highly adapted for life in the water, penguins have countershaded dark and white plumage, and their wings have evolved into flippers. Most penguins feed on krill, fish, squid and other forms of sea life caught while swimming underwater. They spend about half of their lives on land and half in the oceans. Although almost all penguin species are native to the Southern Hemisphere, they are not found only in cold climates such as Antarctica; in fact, only a few species live so far south, and several are found in the temperate zone.

Cheetah

Cheetah is an open-source template engine and code generator written in Python. It can be used standalone or combined with other tools and stacks, regardless of whether they are written in Python or not. Web development is its principal use, but Cheetah is very flexible and is also being used to generate C++ game code, Java, SQL, form emails and even Python code. https://pythonhosted.org/Cheetah/

At its core, Cheetah is a domain-specific language for markup generation and templating which allows for full integration with existing Python code but also offers extensions to traditional Python syntax to allow for easier text-generation.

Porcupine

Porcupine is an open-source Python-based Web application server that provides revolutionary front-end and back-end technologies for building modern data-centric Web 2.0 applications. Many of the tasks required for building web applications as you know them are either eliminated or simplified. For instance, when developing a Porcupine application you don't have to design a relational database; you only have to design and implement your business objects as Python classes, using the building blocks (data types) provided by the framework. Porcupine integrates a native object key/value database, so the overhead an object-relational mapping technique requires when retrieving or updating a single object is removed. http://www.innoscript.org/

The other Porcupines are rodentian mammals with a coat of sharp spines, or quills, that protect against predators. The term covers two families of animals, the Old World porcupines of family Hystricidae, and the New World porcupines of family Erethizontidae. Both families belong to the infraorder Hystricognathi within the profoundly diverse order Rodentia and display superficially similar coats of quills: despite this, the two groups are distinct from each other and are not closely related to each other within the Hystricognathi.

Orca

Orca is a language for implementing parallel applications on loosely coupled distributed systems. Unlike most languages for distributed programming, it allows processes on different machines to share data. Such data are encapsulated in data-objects, which are instances of user-defined abstract data types. The implementation of Orca takes care of the physical distribution of objects among the local memories of the processors. In particular, an implementation may replicate and/or migrate objects in order to decrease access times to objects and increase parallelism.
Orca is thus a programming language for distributed systems: http://courses.cs.vt.edu/~cs5314/Lang-Paper-Presentation/Papers/HoldPapers/ORCA.pdf

The other orca (Orcinus orca) is a toothed whale belonging to the oceanic dolphin family, of which it is the largest member. Killer whales have a diverse diet, although individual populations often specialize in particular types of prey. Some feed exclusively on fish, while others hunt marine mammals such as seals and dolphins. They have been known to attack baleen whale calves, and even adult whales. Killer whales are apex predators, as there is no animal that preys on them. Killer whales are considered a cosmopolitan species, and can be found in each of the world’s oceans in a variety of marine environments, from Arctic and Antarctic regions to tropical seas – Killer whales are only absent from the Baltic and Black seas, and some areas of the Arctic ocean.

Seagull

Seagull is an Open Source (GPL) multi-protocol traffic generator test tool. Primarily aimed at IMS (3GPP, TISPAN, CableLabs) protocols (and thus the perfect complement to SIPp for IMS testing), Seagull is a powerful traffic generator for functional, load, endurance, stress and performance/benchmark tests for almost any kind of protocol. It was created by HP and released in 2006. http://gull.sourceforge.net/

The other Seagull is a seabird of the family Laridae in the suborder Lari. They are most closely related to the terns (family Sternidae), only distantly related to auks and skimmers, and more distantly to the waders. Until the 21st century, most gulls were placed in the genus Larus, but this arrangement is now known to be polyphyletic, leading to the resurrection of several genera.

Sloth

Sloth is the world’s slowest computer language. Proudly announced by Lary Page at the 2014 Google WWDC as a reaction on Microsoft C-flat-minor. Both languages are still competing in the race for the slowest computer language. Sloth stands for Seriously Low Optimization ThresHolds, has been under development for a really, really long time. I mean, like, forever, man. https://www.eetimes.com/author.asp?doc_id=1322644

Larry Page at the recent WWDC introducing SLOTH.

The other Sloths are arboreal mammals noted for their slowness of movement and for spending most of their lives hanging upside down in the trees of the tropical rainforests of South America and Central America. The six species are in two families: two-toed sloths and three-toed sloths. In spite of this traditional naming, all sloths actually have three toes; the two-toed sloths have two digits, or fingers, on each forelimb. The sloth is so named because of its very low metabolism and deliberate movements, sloth being related to the word slow.


 

Add your languages

Hope you enjoyed this small tour. There are probably many more languages named after animals. Please add them as comments and I will update the article. Hopefully, we can cover the entire animal kingdom. Thank you in advance for your submissions.

Sources from Wikipedia.

 

The post Will you help me build the zoo of programming languages? appeared first on AMIS Oracle and Java Blog.

DBMS_AQ.LISTEN to listen to a Single/Multi-Consumer Queue

Tom Kyte - Sun, 2018-01-14 11:06
Dear Experts, Need your guidance/suggestions to resolve this issue: Part of oracle advance queueing implementation, we've to dequeue the message as soon as it has been enqueued into the queue. This should happen immediately without any manual inter...
Categories: DBA Blogs

Doing DB upgrade RAC , via DBUA, from 11gr2 to 12cr2 . Using TDE (tablespace level) on source database

Tom Kyte - Sun, 2018-01-14 11:06
I am running a DB 11.2.0.4 (RAC db) that has TDE implemented - Tablespace level. Source db (11.2.0.4) has TDE implemented. sqlnet.ora file on each node has the entry ENCRYPTION_WALLET_LOCATION. Also each node has the wallet and auto login file (t...
Categories: DBA Blogs

Audit Trail : Disable my bash script audit

Tom Kyte - Sun, 2018-01-14 11:06
Hello Tom. I set audit trail to "XML,EXTENDED" , because my $AUD table was growing to much. I have a lot of 4kb files generated. I have several scripts in my crontab, and that is what is being audited. The content of the files are like this:...
Categories: DBA Blogs

YTD logic using analytic functions

Tom Kyte - Sun, 2018-01-14 11:06
Hi Tom, I am trying to get YTD in a view. I have below view, create or replace view billsummary as select szRegionCode, szState, szPartitionCode, szProduct, TO_CHAR(dtSnapshot,'YYYY.MM') szMonthYear, szJioCenter, ...
Categories: DBA Blogs

Configuring a SQL Loader control File to exclude the second row

Tom Kyte - Sun, 2018-01-14 11:06
Hi, I am trying to configure a control file that excludes the second line of data from the load. The system is automated and I have been tasked to see if there is a solution to this. I am very new at this. I have been told about a discard file of ...
Categories: DBA Blogs

Dynamic query to print out any table

Tom Kyte - Sun, 2018-01-14 11:06
Hi Tom How i can use procedure have a parameter type of query 'any query' and print the data looks like comma separated ? please help ..
Categories: DBA Blogs

Truncate statement in data dictionary,

Tom Kyte - Sun, 2018-01-14 11:06
Hello, I have observed truncate statement (command_type = 85) doesn't appear in V$SQL. However, it does in V$SQLTEXT and V$SQLTEXT_WITH_NEWLINES. My intention is to extract the time of the truncate statement. How can I achieve this task witho...
Categories: DBA Blogs
