Feed aggregator

Which Index is Better Global Or Local in Partitioned Table?

Tom Kyte - Fri, 2017-09-29 23:26
We have partitioned table based on date say startdate (Interval partition , For each day) We will use query that will generate report based on days (like report for previous 5 days) Also we use queries that will generate report based on hours (li...
Categories: DBA Blogs

SQL to find the ip address

Tom Kyte - Fri, 2017-09-29 23:26
Hi Tom, I want to capture the IP address of any client who has shut down the db. Suppose currently in my db 5 clients are connected; one client shuts down the db, then I want to capture the IP address of the client who shut down the db. How to solv...
Categories: DBA Blogs

AWR

Tom Kyte - Fri, 2017-09-29 23:26
Hi Tom/Team, I am aware of the definition of terms used in AWR Report. but i want to know that - how to calculate and on what basis we need to calculate values listed for points in below 2 section of AWR report 1. Top 5 timed foreground event ...
Categories: DBA Blogs

Binary operator like AND, XOR

Tom Kyte - Fri, 2017-09-29 23:26
Hi i have a simple question can we use the binary operator like AND or XOR in a SQL statement. For example "select 1 AND 1 from dual;" result = 1 or true or "select 1 XOR 1 from dual;" if not, please can you tell me how can i do to have th...
Categories: DBA Blogs

Lots of archivelog generation when shrinking and compacting segments

Tom Kyte - Fri, 2017-09-29 23:26
Hi Tom, 1. what is the reason of huge redo and archivelog generation when compacting and shriking huge segments in 10g? 2. How it can be avoided or minimized? Thanks JP
Categories: DBA Blogs

Mining GoldenGate ggserr.log

Michael Dinh - Fri, 2017-09-29 21:33

Can you imagine me running in circles shouting, “The sky is falling, the sky is falling?”

Replicat Lag at Chkpt: 03:21:45

Here are the trail files at target – look at how fast they are being created.

ls -alrt ./dirdat/aa*|tail -20
-rw-r----- 1 ggsuser ggsuser 499999845 Sep 29 11:09 ./dirdat/aa000020827
-rw-r----- 1 ggsuser ggsuser 499999575 Sep 29 11:09 ./dirdat/aa000020828
-rw-r----- 1 ggsuser ggsuser 499999929 Sep 29 11:10 ./dirdat/aa000020829
-rw-r----- 1 ggsuser ggsuser 499999771 Sep 29 11:11 ./dirdat/aa000020830
-rw-r----- 1 ggsuser ggsuser 499999941 Sep 29 11:11 ./dirdat/aa000020831
-rw-r----- 1 ggsuser ggsuser 499999858 Sep 29 11:12 ./dirdat/aa000020832
-rw-r----- 1 ggsuser ggsuser 499999571 Sep 29 11:12 ./dirdat/aa000020833
-rw-r----- 1 ggsuser ggsuser 499999874 Sep 29 11:13 ./dirdat/aa000020834
-rw-r----- 1 ggsuser ggsuser 499999782 Sep 29 11:14 ./dirdat/aa000020835
-rw-r----- 1 ggsuser ggsuser 499999975 Sep 29 11:14 ./dirdat/aa000020836

My hypothesis: lots of data being captured at the source.

After all was said and done, the ggserr.log was mined.

Gather the dates with the 10 highest numbers of trail files created per day in 2017.

SOURCE:

grep "^2017" ggserr.log.dinh|grep "p_test.prm:  Rolling over remote file"|awk '{ print $1 }'|uniq -c|sort -nrk 1|head

    224 2017-09-29
    147 2017-06-02
    105 2017-02-28
    101 2017-05-31
    100 2017-03-01
     98 2017-06-01
     97 2017-05-18
     91 2017-05-26
     89 2017-07-25
     85 2017-01-26

TARGET:


grep "^2017" ggserr.log.dinh|grep "r_test.prm:  Switching to next trail file"|awk '{ print $1 }'|uniq -c|sort -nrk 1|head

    279 2017-09-29
    183 2017-02-28
    174 2017-03-01
    148 2017-06-02
    146 2017-02-24
    137 2017-08-29
    137 2017-03-24
    133 2017-08-11
    130 2017-08-16
    128 2017-03-02

The different counts between source and target for 2017-09-29 are due to the data being collected at different times.


EBS : Milestone Billing

OracleApps Epicenter - Fri, 2017-09-29 18:59
Milestone is a type of billing that occurs only as certain milestones have been attained. Milestones are events or conditions that trigger the billing. For example, you can bill customers based on the number of days since the activity began, a specific date, a manual milestone you specify, or an activity percentage of completion. You […]
Categories: APPS Blogs

Python cx_Oracle RPMs for Oracle Linux 6 and Oracle Linux 7

Christopher Jones - Fri, 2017-09-29 17:35

The Oracle Linux group is starting to roll out RPMs for language interfaces like Python cx_Oracle.  Check out their blog posts on the available cx_Oracle RPMs: 

cx_Oracle RPMs have landed on Oracle Linux Yum Server

Using cx_Oracle With Software Collection Library and What Those Different Versions Are For

Ruby-oci8 is also available for OL6 and OL7; more on that later.

 

Office 365 ProPlus Certified with EBS 12.1 and 12.2

Steven Chan - Fri, 2017-09-29 13:26

Microsoft Office 365 ProPlus versions of Excel, Word, and Outlook are certified with Oracle E-Business Suite 12.1 and 12.2. See:

This certification includes 32-bit and 64-bit versions of Office 365 ProPlus running on Windows 10, 8.1, and 7.

What is Office 365 ProPlus?

Office 365 ProPlus is a version of Microsoft Office that's available through Office 365. It includes Word, Excel, Outlook, PowerPoint, Access, OneNote, Publisher, and Skype for Business as well as online services such as Sway and Power Query.

Office 365 ProPlus is installed on the user's local computer using a streaming and virtualization technology called Click-to-Run. Office 365 ProPlus is not a web-based version of Office. It runs locally on the user's computer. Users don't need to be connected to the Internet all the time to use it.

What is not covered by this certification?

EBS 12.1 and 12.2 are certified with the Office 365 ProPlus versions of Excel, Word, and Outlook.

The following Office 365 ProPlus products are not certified with EBS 12.1 or 12.2:  

  • Office on iPad and Android devices
  • Office Online browser-based versions of Excel and Word
  • PowerPoint, Access, OneNote, Publisher, and Skype for Business
  • Online services such as Sway and Power Query

Click-to-Run delivers monthly updates to Office 365 ProPlus

By default, Click-to-Run products are configured to be updated automatically every month. Users don't have to download or install updates. The Click-to-Run product seamlessly updates itself in the background.

Microsoft also offers a semi-annual channel for updates in January and July, as well as a targeted semi-annual channel in March and September.

Certification with Click-to-Run updates

Oracle certified EBS 12.1 and 12.2 in September 2017 with the latest available versions of Excel, Word, and Outlook in Office 365 ProPlus at the time.

Microsoft's updates to Office 365 ProPlus are intended to be backwards-compatible. This means that Oracle's certification with the September 2017 baseline should continue to apply to Microsoft updates in October 2017, November 2017, and so on.

It is unlikely that Oracle will be able to certify all monthly Office 365 ProPlus updates with all EBS 12.1 and 12.2 updates. Given Microsoft's stance towards backwards-compatibility, this is theoretically unnecessary.

It is expected that Oracle will continue to certify selected updates to Office 365 ProPlus versions of Excel, Word, and Outlook with updates to EBS 12.1 and 12.2.  

Related Articles

Categories: APPS Blogs

Experience the Future of Cloud at Oracle OpenWorld 2017

Oracle Press Releases - Fri, 2017-09-29 11:00
Press Release
Experience the Future of Cloud at Oracle OpenWorld 2017
Technology’s Most Innovative Showcase Kicks Off in San Francisco

Oracle OpenWorld, San Francisco, Calif.—Sep 29, 2017

Oracle OpenWorld 2017 By the Numbers

This weekend, Oracle welcomes tens of thousands of customers and partners spanning 175 countries and over 18 million live-stream viewers to Oracle OpenWorld 2017. Located at San Francisco’s newly redesigned Moscone Center, conference events will span multiple venues in the city’s downtown from October 1-5. Heralded as the industry’s most important business and technology show, Oracle OpenWorld delivers unprecedented opportunities to hear from the greatest minds across all event programming, including actor and director Joseph Gordon-Levitt, former United States Senator Barbara Boxer, Executive Consultant for the Los Angeles Clippers Jerry West, former Secretary of Defense Leon Panetta, and President of the Council on Foreign Relations Richard Haass.

On Sunday, Oracle CTO and Executive Chairman Larry Ellison opens the event with a special keynote showcasing all of the innovations delivered in the Oracle Cloud. Mainstage presentations will continue throughout the week, featuring Ellison, Oracle CEO Mark Hurd, and Oracle President of Product Development Thomas Kurian. Leaders from the world’s most interesting brands, including Carbon, Trek, FedEx and Gap, will join Oracle executives on stage to discuss pressing topics impacting business and technology today and in the future.

This year, Oracle OpenWorld brings innovation to learning through a new series of session formats developed in collaboration with Stanford University, as well as reimagined exhibition halls. The conference’s latest iteration of “Collective Learning” features cutting edge session designs, including: Brain Snacks, 1:1 conversations with fellow experts, and Make Your Case, hands-on workshops tackling the best Oracle case studies. Oracle OpenWorld Exchange, the conference’s redesigned exhibition hall, debuts to foster community, spark learning, promote innovation, and unite our customers, partners, and attendees.

“As we raise the curtain on Oracle OpenWorld 2017, we welcome more than 60,000 customers and partners to learn about transforming their business with Oracle Cloud,” said Judy Sim, Oracle’s Chief Marketing Officer. “The event has evolved as our customers’ needs have changed and is now one of the leading technology conferences in the world. Today, we are thrilled to bring a positive economic impact worth more than $3 billion to the City of San Francisco over the last 20 years.”

To Learn and Explore:
  • Sessions: Tap into an elite network of world-class speakers totaling 67,500+ years of industry experience. Select from 2,311 sessions presented by 3,048 customer and partner speakers, more than 523 Oracle demos and case studies showcasing emerging technology, as well as hundreds of partner and customer exhibitions.
  • Oracle Keynotes:
    • Sunday, October 1, 5:00 p.m. – 7:00 p.m.
      • Oracle CTO and Executive Chairman Larry Ellison opens the conference with an inside look at the future of Oracle Cloud and its innovation path.
      • Doug Fisher, Senior Vice President and General Manager, Software and Services Group, Intel, presents the power of data, and how data offers massive enterprise-class cloud computing opportunities.
    • Monday, October 2, 9:00 a.m. – 10:15 a.m.

       

      • Oracle CEO Mark Hurd reveals where we are now and where we are headed in a cloud foundational world. Joining him on stage will be leaders from Oracle customers Bloom Energy, FedEx and Gap.
    • Tuesday, October 3

       

      • 9:00 a.m. – 11:00 a.m. – Oracle President of Product Development Thomas Kurian and Dave Donatelli, Oracle Executive Vice President, Cloud Business Group, showcase how Oracle Cloud is harnessing the power of emerging technologies like artificial intelligence, Internet of Things, and blockchain to transform organizations of all sizes. They will be joined by Richard Noble, Director of the Bloodhound Project, an inspiring initiative that engages the next generation in science, technology, engineering and math by aiming to surpass the world land speed record.
      • 2:00 p.m. – 3:00 p.m. – Larry Ellison unveils the future of databases in the cloud, including Oracle Autonomous Database, the world’s first “self-driving” database.
    • Wednesday, October 4, 9:00 a.m. – 11:00 a.m.

       

      • Oracle CEO Mark Hurd returns to the mainstage with NetSuite’s Executive Vice President of Development Evan Goldberg and special guests, to discuss the role technology plays in getting ahead of the competition.
  • Oracle’s Leader’s Circle: Connect with luminaries on industry trends, foreign affairs, economics and security at this exclusive, invitation-only executive program hosted by Oracle CEOs Safra Catz and Mark Hurd. Join Senator Barbara Boxer and Newt Gingrich, 50th Speaker of the U.S. House of Representatives for a provocative discussion about the future of the United States.  
  • The Innovation Studio: Experience innovations from Design Tech High School students and Oracle Education Foundation. Meet startups from Oracle’s Startup Cloud Accelerator, and talk with Oracle customers, partners, and industry business unit experts.
  • Oracle Cloud User Experience Lab: Experience hands-on demos of the latest Release 13 Oracle Cloud Applications, and learn about Oracle’s vision for the future of work, including experimental robotics, artificial intelligence, augmented reality, chatbots, and more of the emerging technology tools in the smart UX toolkit.
  • JavaOne Developer Lounge: Use Oracle Internet of Things (IoT) and Big Data technologies to brew your own beer. Create your own sculptures and furniture with a 3D printer. Relive “The Matrix” and shoot your own slow motion video with 60 Raspberry Pi cameras in the BulletTime Photo Booth. Interact with a cloud chatbot robot powered by the Oracle Intelligent Bots running on Oracle Mobile Cloud Service.
  • Oracle Code Event: Join developers from around the world in this one-day event covering machine learning, chatbots, cloud, databases, programming languages, DevOps, and much more.
  • Oracle NetSuite SuiteConnect: The best of SuiteWorld comes to Oracle OpenWorld for the first time. Held on October 4, this program features NetSuite users, Oracle executives, product experts and partners.
To Support the Community and Environment:
  • Oracle Academy’s JavaOne4Kids: Designed for children ages 10-16, this event lets attendees use Raspberry Pi and Java programming to catch escaped Pokemon; create a robot and bring it to life; and make computer games using Greenfoot and Stride, among other fun activities. Oracle Academy is one of Oracle’s key investments in our collective future. In fiscal year 2016, the program impacted over 3.5 million students in 120 countries through $3.75 billion in direct and indirect resources.
  • Plant a Billion Trees: Learn how The Nature Conservancy and Oracle Giving are helping to advance reforestation globally. As part of its participation in The Nature Conservancy’s Plant a Billion Trees initiative, Oracle has already achieved 41 percent of its goal to plant one million trees.
  • Dian Fossey Gorilla Fund International: Discover how Oracle Cloud is helping to save the gorillas. Get a sneak peek of an upcoming National Geographic three-part special on Dian Fossey’s life and work as a gorilla conservationist. Hear from Tara Stoinski, President, CEO and Chief Scientific Officer of the Dian Fossey Gorilla Fund about its 27-year partnership with Oracle, and how Oracle Cloud technology has enabled the organization to revolutionize its data management and make its database – the world’s largest, most comprehensive collection of data on a wild great ape population – available to scientists, researchers and students without charge.
  • Ride for a Reason: Support the victims of the recent hurricanes by choosing Lyft for Oracle OpenWorld transportation. Between October 1-5, five percent of the cost of rides will be donated to the American Red Cross. Enter code OOW17 using a Lyft business profile.
To Connect and Play:
  • Oracle CloudFest.17: Dance the night away with Grammy award winners The Chainsmokers and sing along with pop sensation Ellie Goulding at Oracle’s legendary customer appreciation event taking place on October 4 at AT&T Park.
  • SuiteConnect NextUp: Celebrate the day’s experiences at a special concert with Royal Machines, joined by “special guests” on October 3 at Howard Street mainstage.
Contact Info
Julie Sugishita
Oracle Corporate Communications
+1.650.506.0076
julie.sugishita@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle OpenWorld

Oracle OpenWorld, the industry's most important business and technology conference for the past 20 years, hosts tens of thousands of in-person attendees as well as millions online. Dedicated to helping businesses leverage Cloud for their innovation and growth, the conference delivers deep insight into industry trends and breakthroughs driven by technology. Designed for attendees who want to connect, learn, explore and be inspired, Oracle OpenWorld offers more than 2,500 educational sessions led by more than 2,000 customers and partners sharing their experiences, first hand. With hundreds of demos and hands-on labs, plus exhibitions from more than 400 partners and customers from around the world, Oracle OpenWorld has become a showcase for leading cloud technologies, from Cloud Applications to Cloud Platform and Infrastructure. For more information; to register; or to watch Oracle OpenWorld keynotes, sessions, and more, visit www.oracle.com/openworld. Join the Oracle OpenWorld discussion on Twitter.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Julie Sugishita

  • +1.650.506.0076

Partitioned Indexes

Hemant K Chitale - Fri, 2017-09-29 10:15
Most discussions about Partitioning in Oracle are around Table Partitioning.  Rarely do we come across Index Partitioning.
A couple of days ago, there was an Oracle Community question on Partitioned Indexes.

So, here is a quick listing of Index Partitioning options (these tests are in 11.2.0.4).


First, I start with a regular, non-partitioned table.

SQL> create table non_partitioned  
2 (id_col number,
3 data_col_1 number,
4 data_col_2 number,
5 data_col_3 varchar2(15)
6 )
7 /

Table created.

SQL>


I now attempt to create an Equi-Partitioned (LOCAL) Index on it.

SQL> create index equi_part on non_partitioned (id_col) local;
create index equi_part on non_partitioned (id_col) local
*
ERROR at line 1:
ORA-14016: underlying table of a LOCAL partitioned index must be partitioned


SQL>


As expected I can't create a LOCAL index on a non-partitioned table.

Can I create any partitioned index on this table ?

I try two different GLOBAL PARTITIONed Indexes

SQL> create index global_part   
2 on non_partitioned (id_col) global
3 partition by range (id_col)
4 (partition p_100 values less than (101),
5 partition p_200 values less than (201)
6 )
7 /
)
*
ERROR at line 6:
ORA-14021: MAXVALUE must be specified for all columns


SQL>
SQL> create index global_part
2 on non_partitioned (id_col) global
3 partition by range (id_col)
4 (partition p_100 values less than (101),
5 partition p_200 values less than (201),
6 partition p_max values less than (MAXVALUE)
7 )
8 /

Index created.

SQL>
SQL> create index global_part_comp
2 on non_partitioned (id_col, data_col_3) global
3 partition by range (id_col, data_col_3)
4 (partition p_1 values less than (101,'M'),
5 partition p_2 values less than (101,MAXVALUE),
6 partition p_3 values less than (201,'M'),
7 partition p_4 values less than (201,MAXVALUE),
8 partition p_max values less than (MAXVALUE, MAXVALUE)
9 )
10 /

Index created.

SQL>


So, I must have a MAXVALUE partition for the Index.  Note that the two indexes above are now Partitioned without the table itself being partitioned.

SQL> select index_name, partitioned
2 from user_indexes
3 where table_name = 'NON_PARTITIONED'
4 order by 1
5 /

INDEX_NAME PAR
------------------------------ ---
GLOBAL_PART YES
GLOBAL_PART_COMP YES

SQL>


The above indexes are Prefixed Global Partitioned Indexes. Can I create a Non-Prefixed Global Partitioned Index -- an Index where the Partition Key is not formed by the left-most columns of the Index?

SQL> create index global_part_nonprefix
2 on non_partitioned (id_col, data_col_3) global
3 partition by range (data_col_1)
4 (partition p_1 values less than (101),
5 partition p_2 values less than (201),
6 partition p_max values less than (MAXVALUE)
7 )
8 /
partition by range (data_col_1)
*
ERROR at line 3:
ORA-14038: GLOBAL partitioned index must be prefixed


SQL>
SQL> !oerr ora 14038
14038, 00000, "GLOBAL partitioned index must be prefixed"
// *Cause: User attempted to create a GLOBAL non-prefixed partitioned index
// which is illegal
// *Action: If the user, indeed, desired to create a non-prefixed
// index, it must be created as LOCAL; otherwise, correct the list
// of key and/or partitioning columns to ensure that the index is
// prefixed

SQL>


So, I have proved that a Non-Partitioned Table cannot have a LOCAL Partitioned Index or a Non-Prefixed Global Partitioned Index but can still have a Global Partitioned Index where the Partition Key is left-prefixed from the Index Key. Also, that a Global Partitioned Index can be a Composite Index with columns of different datatypes.

Let me now proceed with a Partitioned Table.

SQL> create table partitioned
2 (id_col number,
3 data_col_1 number,
4 data_col_2 number,
5 data_col_3 varchar2(15)
6 )
7 partition by range (id_col)
8 (partition p_100 values less than (101),
9 partition p_200 values less than (201),
10 partition p_max values less than (MAXVALUE)
11 )
12 /

Table created.

SQL>


First, the Equi-Partitioned (LOCAL) Index.

SQL> create index part_equi_part
2 on partitioned (id_col) local
3 /

Index created.

SQL> select partition_name, partition_position
2 from user_ind_partitions
3 where index_name = 'PART_EQUI_PART'
4 order by 2
5 /

PARTITION_NAME PARTITION_POSITION
------------------------------ ------------------
P_100 1
P_200 2
P_MAX 3

SQL>


The usage of the LOCAL keyword instead of GLOBAL defines the Index as equi-partitioned with the table.  Index Partitions are automatically created to match the Table Partitions with the same Partition Names.  It is possible to create a LOCAL Partitioned Index and manually specify Partition Names but this, in my opinion, is a bad idea.  Attempting to manually name each Partition for the Index can result in a mis-match between Table Partition Names and Index Partition Names.
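For completeness, manually naming the LOCAL index partitions (the discouraged option described above) would look something like the sketch below; the index name and partition names (ip_100, ip_200, ip_max) are hypothetical, and each named partition maps positionally to a table partition.

```sql
-- A sketch only: LOCAL index with manually specified partition names.
-- The names ip_100, ip_200, ip_max are made up for illustration;
-- there must be exactly one index partition per table partition.
create index part_equi_part_named
on partitioned (id_col) local
(partition ip_100,
 partition ip_200,
 partition ip_max)
/
```

The risk is exactly the one noted above: the index partition names no longer track the table partition names, which confuses later maintenance.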

Next, I define two GLOBAL Partitioned Indexes on this table.

SQL> create index part_gbl_part  
2 on partitioned (data_col_1) global
3 partition by range (data_col_1)
4 (partition p_1 values less than (1001),
5 partition p_2 values less than (2001),
6 partition p_3 values less than (3001),
7 partition p_4 values less than (4001),
8 partition p_max values less than (MAXVALUE)
9 )
10 /

Index created.

SQL> create index part_gbl_part_comp
2 on partitioned (data_col_2, data_col_3) global
3 partition by range (data_col_2, data_col_3)
4 (partition p_a values less than (10, 'M'),
5 partition p_b values less than (10, MAXVALUE),
6 partition p_c values less than (20, 'M'),
7 partition p_d values less than (20, MAXVALUE),
8 partition p_e values less than (30, 'M'),
9 partition p_f values less than (30, MAXVALUE),
10 partition p_max values less than (MAXVALUE, MAXVALUE)
11 )
12 /

Index created.

SQL>
SQL> l
1 select index_name, partition_name, partition_position
2 from user_ind_partitions
3 where index_name in
4 (select index_name from user_indexes
5 where table_name = 'PARTITIONED'
6 )
7* order by 1,3
SQL> /

INDEX_NAME PARTITIO PARTITION_POSITION
------------------ -------- ------------------
PART_EQUI_PART P_100 1
PART_EQUI_PART P_200 2
PART_EQUI_PART P_MAX 3
PART_GBL_PART      P_1                       1
PART_GBL_PART P_2 2
PART_GBL_PART P_3 3
PART_GBL_PART P_4 4
PART_GBL_PART P_MAX 5
PART_GBL_PART_COMP P_A                       1
PART_GBL_PART_COMP P_B 2
PART_GBL_PART_COMP P_C 3
PART_GBL_PART_COMP P_D 4
PART_GBL_PART_COMP P_E 5
PART_GBL_PART_COMP P_F 6
PART_GBL_PART_COMP P_MAX 7

15 rows selected.

SQL>


The Equi-Partitioned (LOCAL) Index has the same number (and, as recommended, the same names) of Partitions as the Table.
However, the GLOBAL Indexes can have different numbers of Partitions.
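The locality and prefixing of these indexes can also be confirmed from the data dictionary; a quick sketch (the LOCALITY column of USER_PART_INDEXES reports LOCAL or GLOBAL, and ALIGNMENT reports PREFIXED or NON_PREFIXED):

```sql
-- Verify how each partitioned index is defined
select index_name, table_name, partitioning_type, locality, alignment
from user_part_indexes
where table_name in ('NON_PARTITIONED', 'PARTITIONED')
order by table_name, index_name
/
```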

As in the first case, I cannot create a Global Non-Prefixed Partitioned Index (where the Index Partition Key is not a left-prefix of the Index).

SQL> create index part_global_part_nonprefix
2 on partitioned (id_col, data_col_3) global
3 partition by range (data_col_1)
4 (partition p_1 values less than (101),
5 partition p_2 values less than (201),
6 partition p_max values less than (MAXVALUE)
7 )
8 /
partition by range (data_col_1)
*
ERROR at line 3:
ORA-14038: GLOBAL partitioned index must be prefixed


SQL>


In this blog post, I haven't touched on Partial Indexing (a 12c feature).

I haven't touched on Unique LOCALly Partitioned Indexes.

I haven't demonstrated the impact of Partition Maintenance operations (TRUNCATE, DROP, MERGE, ADD, SPLIT) on LOCAL and GLOBAL Indexes here -- although I have touched on such operations and LOCAL indexes in earlier blog posts.
.
.
.

Categories: DBA Blogs

SQL Server 2016: New Dynamic Management Views (DMVs)

Yann Neuhaus - Fri, 2017-09-29 08:32

In SQL Server 2016, you will discover a lot of new Dynamic Management Views (DMVs).
In this article, I will just give you a little overview of these views, which are very useful for us as DBAs.

SQL Server 2012 has 145 DMVs and SQL Server 2014 has 166 DMVs.
Now, SQL Server 2016 has 185 DMVs.

How to see it?

It is very easy to have a look using the sys.all_objects view:

SELECT * FROM sys.all_objects WHERE type = 'V' AND name LIKE 'dm_%' ORDER BY name ASC


From SQL Server 2012 to SQL Server 2014, we can notice that a lot of the new DMVs come with the In-Memory technology, with the naming pattern “dm_xtp_xxxxxxxx” or “dm_db_xtp_xxxxxxxx”.

In SQL Server 2016, a lot of new “dm_exec_xxxxxxxx” views are present.
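To list just these views, a variant of the listing query shown earlier can be used; note that in a T-SQL LIKE pattern the underscore is a single-character wildcard, so escaping it with brackets ([_]) makes the filter exact (a small sketch, not from the original post):

```sql
-- List the dm_exec_* DMVs; [_] escapes the underscore wildcard in LIKE
SELECT name
FROM sys.all_objects
WHERE type = 'V'
  AND name LIKE 'dm[_]exec[_]%'
ORDER BY name;
```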

All definitions for these views come from the Microsoft documentation or web site.

To begin, you will see 10 DMVs for the PolyBase technology:

  • dm_exec_compute_node_status
  • dm_exec_dms_workers

A useful MSDN page summarizes all the DMVs for these new views here

Other dm_exec_xxx views are also very useful, such as:

  • dm_exec_query_optimizer_memory_gateways
    • Returns the current status of resource semaphores used to throttle concurrent query optimization.
    • Microsoft Reference here
  • dm_exec_session_wait_stats
    • Returns information about all the waits encountered by threads that executed for each session
    • Microsoft Reference here

3 new DMVs for the Columnstore technology:

  • dm_column_store_object_pool
  • dm_db_column_store_row_group_operational_stats
    • Returns current row-level I/O, locking, and access method activity for compressed rowgroups in a columnstore index.
    • Microsoft Reference here
  • dm_db_column_store_row_group_physical_stats
    • Provides current rowgroup-level information about all of the columnstore indexes in the current database
    • Microsoft Reference here

2 new DMVs for Stretch Databases, in the database context and with rda (remote data archive) in the name:

  • dm_db_rda_migration_status
    • For the current database, lists state information of the remote data archive schema update task.
    • Microsoft Reference here

This list can change if a Service Pack is applied.
It is just a little reference for you about these useful views! 8-)

 

The article SQL Server 2016: New Dynamic Management Views (DMVs) appeared first on Blog dbi services.

“_suppress_identifiers_on_dupkey” – the SAP workaround for bad design

Yann Neuhaus - Fri, 2017-09-29 08:01

In SQL, ‘upsert’ is a conditional insert or update: if the row is there, you update it, but if it is not there, you insert it. In Oracle, you should use a MERGE statement for that. You are clearly doing it wrong if you code something like:

begin
insert...
exception
when dup_val_on_index then update...
end;
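The MERGE alternative mentioned above avoids the exception path entirely. A minimal sketch, assuming a hypothetical table T(ID primary key, VAL), since MERGE cannot update the columns referenced in its ON clause:

```sql
-- Sketch of an atomic upsert with MERGE (hypothetical table T and binds)
merge into t
using (select :id as id, :val as val from dual) src
on (t.id = src.id)
when matched then
  update set t.val = src.val
when not matched then
  insert (id, val) values (src.id, src.val);
```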


But it seems that there are many applications with this bad design, and Oracle has introduced an underscore parameter for them: “_suppress_identifiers_on_dupkey”. You won’t be surprised that this one is part of the long list of parameters required for SAP.

Let’s investigate this.

Insert – Exception – Update

So the idea is to try an insert first, rely on the unique constraint (primary key) to get an exception if the row exists, and in that case update the existing row. There are several flaws with that.

The first problem is that it is not as easy as it looks. If a concurrent session deletes the row between your insert and your update, then the update will fail. You have to manage this. Also, the failed insert cannot leave a lock on the row that was not inserted.

The second problem is that the SQL engine is optimized for transactions which commit. When the ‘dup_val_on_index’ exception occurs, you have already inserted the table row, updated some indexes, etc. All of that has to be rolled back when the exception occurs. This generates unnecessary contention on the index leaf block, and unnecessary redo.

Then the third problem, and probably the worst one, is that an exception is an error, and error management has a lot of work to do, such as looking into the dictionary for the violated constraint name in order to give you a nice error message.

I’ve created the following table:

create table demo as select * from dual;
create unique index demo on demo(dummy);

And I’ve run 10 million inserts on it, all with duplicates:

exec for i in 1..1e7 loop begin insert into demo values('x'); exception when others then null; end; end loop;

Here are some extracts from the AWR report, based on manual snapshots taken before and after.

Elapsed: 20.69 (mins)
DB Time: 20.69 (mins)

This has run for 20 minutes.


Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 100.00
Execute to Parse %: 33.34 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 92.31 % Non-Parse CPU: 94.90
Flash Cache Hit %: 0.00

The ‘Execute to Parse %’ shows that 2/3 of the statements are parsed each time.


SQL ordered by Gets DB/Inst: CDB1/CDB1 Snaps: 19-20
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 180,125,740
-> Captured SQL account for 127.7% of Total
 
Buffer Gets Elapsed
Gets Executions per Exec %Total Time (s) %CPU %IO SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
1.80094E+08 1 1.800942E+08 100.0 1,239.8 99.5 .3 frvpzg5yubp29
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
BEGIN for i in 1..1e7 loop begin insert into demo values('x'); exception when ot
hers then null; end; end loop; END;
 
1.60094E+08 10,000,000 16.0 88.9 983.1 100.3 .4 319ypa1z41aba
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
INSERT INTO DEMO VALUES('x')
 
49,999,995 9,999,999 5.0 27.8 201.1 103.2 0 2skwhauh2cwky
PDB: PDB1
select o.name, u.name from obj$ o, user$ u where o.obj# = :1 and o.owner# = u.u
ser#
 
19,999,998 9,999,999 2.0 11.1 148.5 98.9 0 2jfqzrxhrm93b
PDB: PDB1
select /*+ rule */ c.name, u.name from con$ c, cdef$ cd, user$ u where c.con# =
cd.con# and cd.enabled = :1 and c.owner# = u.user#

My failed inserts have read on average 16 blocks for each attempt. That's too much for doing nothing. And in addition to that, I see two expensive statements parsed and executed each time: one to get the object name and one to get the constraint name.
This is how we retrieve the error message, which is:

 
ORA-00001: unique constraint (SCOTT.DEMO) violated
 

This is a big waste of resources. I did this test in PL/SQL, but if you combine all the worst practices and run those inserts row by row, then you will see those colors:

The orange is ‘log file sync’ because you generate more redo than necessary.
The green is ‘CPU’ because you read more blocks than necessary.
The red is ‘SQL*Net break/reset to client’ when the server process sends the error.

_suppress_identifiers_on_dupkey

When you set “_suppress_identifiers_on_dupkey” to true, Oracle will not return the name of the constraint which is violated, but only the information which is already there in the session context.

Here is the message that you get:

 
ORA-00001: unique constraint (UNKNOWN.obj#=73375) violated
 

Where 73375 is the OBJECT_ID of the index on which the unique constraint has been violated.
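For completeness, this is an underscore parameter, so it should only be set on the recommendation of Oracle Support or the relevant SAP note; for a test session it can be set like this:

```sql
-- Hidden parameter: set only for testing or when instructed by Oracle/SAP.
alter session set "_suppress_identifiers_on_dupkey" = true;
```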

You have less information, but it is faster:

Elapsed: 15.45 (mins)
DB Time: 15.48 (mins)

There is no Soft Parse overhead:

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 96.43
Execute to Parse %: 99.98 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 90.38 % Non-Parse CPU: 99.95
Flash Cache Hit %: 0.00

Our statement is the only one using the CPU and reads less blocks:

SQL ordered by Gets DB/Inst: CDB1/CDB1 Snaps: 21-22
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 110,132,467
-> Captured SQL account for 81.8% of Total
 
Buffer Gets Elapsed
Gets Executions per Exec %Total Time (s) %CPU %IO SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
1.10091E+08 1 1.100906E+08 100.0 926.2 98.8 1 frvpzg5yubp29
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
BEGIN for i in 1..1e7 loop begin insert into demo values('x'); exception when ot
hers then null; end; end loop; END;
 
90,090,580 10,000,000 9.0 81.8 515.7 99.1 1.9 319ypa1z41aba
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
INSERT INTO DEMO VALUES('x')

This parameter is a workaround for bad design, but not a solution.

Update – no rows – Insert

In order to avoid all this rollback and exception management overhead, there is another approach: start with the update and, when no row was found, insert it. This is easy with SQL%ROWCOUNT.

begin
  update ...;
  if SQL%ROWCOUNT = 0 then
    insert ...;
  end if;
end;

This is more efficient but still subject to a concurrent session inserting the row between your update and your insert. But at least you manage the different scenarios with a condition on ROWCOUNT rather than with an exception, which is more scalable.
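One way to also close that race window, sketched on the demo table (the column name is assumed for illustration):

```sql
begin
  update demo set dummy = 'x' where dummy = 'x';
  if sql%rowcount = 0 then
    begin
      insert into demo values ('x');
    exception
      when dup_val_on_index then
        -- a concurrent session inserted the row in between:
        -- fall back to the update
        update demo set dummy = 'x' where dummy = 'x';
    end;
  end if;
end;
/
```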

So what?

Always use the database in the expected way. Exceptions and errors are not meant for the normal scenario of a use case; they should be unusual. The solution is the MERGE statement, which was implemented exactly for this reason: do an upsert without the error management overhead, and with the statement-level isolation that prevents errors in a multi-user environment.
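As a sketch on the demo table (column name assumed), the insert-if-absent logic becomes a single statement:

```sql
-- Insert the row only when it does not exist yet; no ORA-00001,
-- no exception handler, no rollback of a failed insert.
merge into demo d
using (select 'x' as dummy from dual) s
on (d.dummy = s.dummy)
when not matched then
  insert (dummy) values (s.dummy);
```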

 

The post “_suppress_identifiers_on_dupkey” – the SAP workaround for bad design appeared first on Blog dbi services.

Businesses Transform Customer Engagements with Oracle Live Experience Cloud

Oracle Press Releases - Fri, 2017-09-29 07:00
Press Release
Businesses Transform Customer Engagements with Oracle Live Experience Cloud Accelerates Transformation through Rich, Highly Personalized Mobile-First Interactions

Redwood Shores, Calif.—Sep 29, 2017

Oracle today announced Oracle Live Experience Cloud, a customer engagement service for the mobile generation. With the mobile and digital landscape shaping the way customers interact with businesses, companies must quickly adapt to changing expectations to deliver frictionless, real-time, contextual experiences across channels. With Oracle Live Experience Cloud, users can address these new requirements and bring a new dimension to their mobile and business applications by being able to serve customers in the way that best meets their needs, be it HD voice, HD video, screen sharing, or annotations.

As such, businesses will have the ability to quickly resolve customer issues, drive greater customer loyalty, and increase satisfaction by engaging users on the right channel at the right time. Agents will also be empowered to deliver better customer experiences by having access to contextual customer data and insights, cutting call times and limiting customer frustration.

“Nearly 70% of IT and business leaders say ‘improving customer experience’ is the goal of their digital transformation initiative, and advancements in the contact center are crucial to success,” says Robin Gareiss, president of Nemertes Research1. “Successful digital transformation requires short time to market. By leveraging a cloud-based solution, organizations can start seeing improvements in CSAT scores, revenue, and customer retention immediately. What’s more, the ability to retain context across channels from within the native app is a huge development and will dramatically boost customer satisfaction.”

A recent Oracle report titled “The Future of Enterprise Communications: The Cloud Redefines Customer Experience” noted that while 65 percent of companies agree communications embedded within cloud applications will become the dominant way of communicating with employees, suppliers and customers, many currently lack the ability to do so effectively. Oracle Live Experience Cloud enables embedded contextual data and business analytics so users can easily switch between channels without losing key information already shared. Regardless of the customer’s preferred channel, the user will enjoy a more streamlined experience while the business gains valuable customer insights that can be leveraged within its core business applications.

“Nothing is more aggravating than dealing with a call center or service desk where you are stuck in a long, dehumanized loop of menu options with a slow resolution,” said Doug Suriano, senior vice president, general manager, Oracle Communications. “With Oracle Live Experience Cloud, businesses can eliminate customer friction points by harnessing the power of contextual communications and real-time engagement capabilities to offer a personalized and highly interactive digital experience that builds customer loyalty and improves business outcomes.”

A cloud-native solution, Oracle Live Experience Cloud can be easily integrated into web and mobile apps and used to proactively engage customers at key moments of their individual journey. It modernizes existing call center and CRM solutions, supporting enterprise digital transformation efforts by empowering employees to deliver contextual and responsive cross-channel engagements that satisfy the customer and ultimately drive sales. Finally, businesses can optimize engagement success by measuring interactions in real time and provisioning updates to further improve overall business results.

Features of Oracle Live Experience Cloud
  • Real time communication capabilities - HD voice, HD video, screen sharing, annotating
  • In application channels and mobile controls
  • Rules-based contextual routing for all channels
  • Escalate from chat bot to live assistance
  • Customer context
  • Real-time recording, search and playback
  • Integrated Analytics
  • Pre-built API integrations for key CRM systems
  • Modern desktop agent experience
  • Design personalized engagement scenarios based on context, history and business priorities
  • Insights on individual and overall service team performance, engagement success KPIs, with supervisor and administrator views.
  • Encryption and Bring Your Own Key capabilities
  • Elastic network, compute, and storage resources optimized at all layers for real time communication service
Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation.

1 Nemertes 2017-18 Contact Center and Customer Engagement Benchmark Research Study | https://nemertes.com/


Calling Procedure Parallel

Tom Kyte - Fri, 2017-09-29 05:06
I have below procedure which in turn calls two other Procedures. It calls and works fine but the two procs runs serial. I want to run them parallel and get the results on the main procs cursor. How do I do that? I tried with dbms_job.submit but could...
Categories: DBA Blogs

Getting sub-string from two Clobs object and compare those substrings

Tom Kyte - Fri, 2017-09-29 05:06
Hi, I am new to CLOB objects but it seems like I need to get my hands dirty on this. I have a CLOB column in my table and I need to get item SKU values from this column separated by commas. This is how my CLOB column value looks. ------- <...
Categories: DBA Blogs

External table concepts

Tom Kyte - Fri, 2017-09-29 05:06
Hi All, I am new to oracle external table concepts. Have a very basic query - if i have a csv with the below columns Col1, Col2, Col3 Col4 .... Coln and i want to insert only Col3 & Col4 into an oracle external table , what would be my ...
Categories: DBA Blogs

sql query to update a table based on data from other table

Tom Kyte - Fri, 2017-09-29 05:06
Hi, Looks like my other similar questions got closed, so asking a new question. I have a cust_bug_data table with 2 columns(ROOT_CAUSE, BUG_NUMBER) like as follows: <code>create table cust_bug_data(ROOT_CAUSE VARCHAR(250), BUG_NUMBER NUMBER N...
Categories: DBA Blogs

Oracle Big Data Cloud Service – Compute Edition

Yann Neuhaus - Fri, 2017-09-29 03:00

In this blog post, we will see how to create a Big Data cluster through the Oracle Cloud Services. If you want more details about the Oracle Big Data cloud services offering, you can refer to my previous blog Introduction to Oracle Big Data.

First, you need to create your trial account through the following link: https://cloud.oracle.com/tryit. Note that when you create your trial account, all information (phone number, address, credit card…) must be from the same country; otherwise, you will get an error message.

Then you will get an email from Oracle with your connection information, including the four main connection details.

During the first connection, you need to change your password and answer three secret questions.

You are now login into the Oracle Cloud Services Dashboard. Select the “Big Data – Compute Edition” service to create your cluster.

BDCS-CE Dashboard

Click on “Service” and “Create Service”.

BDCS-CE Create Cluster

First, complete the service information (cluster name, description…) and click on “Next”.

BDCS-CE Cluster creation

Then, you enter the details of your Big Data cluster (configuration, credentials, storage…).

Cluster configuration:

Use the “full” deployment. It will provision a cluster with Spark, MapReduce, Zeppelin, Hive, Spark Thrift, and the Big Data File System.

Credentials:

Generate an SSH public key and insert it (see screenshot below). Update or keep the current administrative user and password; they are very important for the next operations.

Storage:

Oracle Public Cloud works with object storage containers, which means that a storage container can be used by all cloud services. For the Big Data service, you need to use an existing storage container or create one. The storage container name must follow a specific syntax:

https://<identity_domaine>.storage.oraclecloud.com/v1/Storage-<identity_domaine>/<container_name>

Example: https://axxxxxx.storage.oraclecloud.com/v1/Storage-axxxxxx/dbistorage

You can find the complete configuration below.

BDCS-CE Configuration Overview

Confirm your cluster configuration and click on “Next”.

During the cluster deployment, you can take the time to read the documentation: https://docs.oracle.com/en/cloud/paas/big-data-cloud/index.html

Once your service has been deployed, you can access the Big Data Cluster Console to monitor your cluster and access it.

BDCS-CE Cluster Overview

 

BDCS-CE Cluster Console

OBDCS-CE Monitoring

You have now deployed a Big Data cluster composed of 3 nodes, based on the Hortonworks distribution, with the following tools:

  • HDFS = Hadoop Distributed File System
  • YARN = Resource management for the cluster
  • Hive = Data warehouse for managing large data sets using SQL
  • Spark = Data processing framework
  • Pig = High-level platform for creating programs that run on Hadoop
  • ZooKeeper = Centralized coordination service for the cluster
  • Zeppelin = Web-based data scientist workbench
  • Alluxio = Memory-speed virtual distributed storage
  • Tez = Framework for YARN-based data processing applications in Hadoop

Your Oracle Big Data cluster, through Oracle Big Data Cloud Service – Compute Edition is now ready to use.

Enjoy ;-)

 

The post Oracle Big Data Cloud Service – Compute Edition appeared first on Blog dbi services.
