Feed aggregator

How to get the count of consecutive transactions

Tom Kyte - Mon, 2017-02-13 16:26
Team, Good Monday Morning!!!!!!!! First let me share the sample table and insert scripts for your reference. Table creation script : CREATE TABLE st_lo_trans(customer_name VARCHAR2(10),loan_code NUMBER,loan_tran_code NUMBER,tran_id VARCHA...
Categories: DBA Blogs
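The usual SQL approach to counting consecutive transactions is the "Tabibitosan" gaps-and-islands trick: subtract ROW_NUMBER() from the sequence value so that every consecutive run shares one group key, then count per key. A minimal Python sketch of the same idea (illustrative only, not the Ask TOM answer):

```python
from itertools import groupby

def consecutive_run_lengths(tran_ids):
    """Group a list of numeric transaction ids into consecutive runs and
    return the length of each run (the 'gaps and islands' idea)."""
    # Subtracting each value's position from the value itself yields a key
    # that is constant within a consecutive run -- the same trick the SQL
    # "Tabibitosan" method uses with value - ROW_NUMBER().
    runs = groupby(enumerate(sorted(tran_ids)), key=lambda p: p[1] - p[0])
    return [len(list(members)) for _, members in runs]

print(consecutive_run_lengths([1, 2, 3, 7, 8, 10]))  # [3, 2, 1]
```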

Most of the rows are going to one single partition

Tom Kyte - Mon, 2017-02-13 16:26
Hi, We have a hash partitioned table with 32 partitions and the table has 25 million records. Most of the inserted records are going to one single partition; in this case 17 million are going to one partition. Is there a way this can be controlled so that rows are...
Categories: DBA Blogs
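Hash partitioning only spreads rows evenly when the partition key has many distinct values; if most rows share one key value, they all hash to the same partition no matter how many partitions exist. A hypothetical Python sketch (Python's `hash()` stands in for Oracle's internal partitioning hash here; it is not ORA_HASH):

```python
from collections import Counter

def bucket_counts(keys, n_partitions=32):
    """Count rows per hash bucket (a stand-in for Oracle's internal
    partitioning hash, NOT ORA_HASH itself)."""
    return Counter(hash(k) % n_partitions for k in keys)

# Low-cardinality key: identical values always hash to the same bucket,
# so almost all rows pile into one or two partitions.
skewed = bucket_counts(["OPEN"] * 1000 + ["CLOSED"] * 10)
print(len(skewed))            # 1 or 2 buckets in use

# High-cardinality key: rows spread across all 32 buckets.
even = bucket_counts(range(1000))
print(len(even))              # 32
```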

partition's drop

Tom Kyte - Mon, 2017-02-13 16:26
I am creating a partitioned table using interval partitioning (NUMTOYMINTERVAL) and then create the 4 partitions. Now I drop the table without dropping the partitions; then I check user_tab_partitions and those 4 partitions are created with systable...
Categories: DBA Blogs

Why we use alter database open resetlogs after flashback database to guaranteed restore point

Tom Kyte - Mon, 2017-02-13 16:26
Hi Team, Below how i created a guaranteed restore point. ========================================================================================= SQL> select name,database_role,open_mode,flashback_on,log_mode from v$database; NAME D...
Categories: DBA Blogs

How to avoid materialized views having staleness="IMPORT" after importing schema by expdp/impdp?

Tom Kyte - Mon, 2017-02-13 16:26
After importing a schema with expdp/impdp, I have materialized views with staleness set to "IMPORT". That is, a complete refresh is needed before a fast refresh is possible later on. However, those materialized views (as in a star schema commonly seen in a datamart) take...
Categories: DBA Blogs

How to extract substrings enclosed within double quotes from a string

Tom Kyte - Mon, 2017-02-13 16:26
I have a requirement to extract column names (there could be multiple instances) from a SQL string. The column names will always be enclosed within double quotes, e.g. "le code" = 802 AND "principal amount" > 1200 OR "branch id" = 'ABC' Req...
Categories: DBA Blogs
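In Oracle this is usually solved with REGEXP_SUBSTR and its occurrence argument; the pattern itself is the interesting part. A small Python sketch of the same pattern, using the sample string from the question:

```python
import re

def quoted_identifiers(sql_text):
    """Return every substring enclosed in double quotes, in order.

    The pattern "([^"]*)" captures everything between a pair of double
    quotes without crossing into the next quoted section.
    """
    return re.findall(r'"([^"]*)"', sql_text)

s = '"le code" = 802 AND "principal amount" > 1200 OR "branch id" = \'ABC\''
print(quoted_identifiers(s))   # ['le code', 'principal amount', 'branch id']
```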

ORA-01858

Tom Kyte - Mon, 2017-02-13 16:26
I'm using an Apple MacBook Pro and I downloaded VirtualBox in order to work with Oracle 11g. I installed Windows 7 in VirtualBox. I wrote my queries in SQL*Plus. I have a problem with single quotes. Even though the query works in the classroom, it always g...
Categories: DBA Blogs

Ford Motor Company Chooses Oracle HCM Cloud Suite

Oracle Press Releases - Mon, 2017-02-13 16:20
Press Release
Ford Motor Company Chooses Oracle HCM Cloud Suite
Ford selects Oracle HCM Cloud to manage team experience

Redwood Shores, Calif.—Feb 13, 2017

Oracle announced today that Ford Motor Company has selected Oracle’s HCM Cloud Suite to move toward a more streamlined, digital view of HR products and services and to help improve the employee and manager experience. Cloud services are a key enabler for HR transformations.

Traditionally, businesses have used multiple platforms to manage their HR needs. Today, the cloud has changed this as companies seek a new suite of services that can be integrated with existing technology. The Oracle HCM Cloud Suite allows users to support large-scale global HR, reporting, and security requirements and provides Global HR, Payroll (U.S.), Workforce Compensation, Goal and Performance Management, Talent Review and Succession, and Recruiting and Onboarding. For HR, it is equally important to maintain an international, security-conscious portfolio of services that can capitalize on universal reporting and data analytics to anticipate and plan for driving business and recruiting needs.

Oracle’s SaaS and PaaS systems are robust, yet flexible enough to address these needs. The cloud offers a single platform to access employee records and information while also helping HR professionals cut costs and save time so they can focus on more strategic decision-making and less on paperwork. The Oracle Cloud Customer Connect Program is an offering for customers to connect with peers to address the unique challenges of their business and learn from others’ best practices and strategies for moving to the cloud.

“Oracle is extremely excited to be working with Ford and others in the auto industry,” said Gretchen Alarcon, group vice president of HCM Product Strategy for Oracle. “We work hard to listen and deliver modern, unified cloud solutions that get at the heart of what will ultimately offer an all-encompassing service to save a business money and offer a strategic approach to business practices. We want to make sure this technology ultimately adds to any company’s move toward the future. But even more so, we want to let our customers know that we offer our network and collaboration as a means to help during this modern transformation.”

Contact Info
Jennifer Yamamoto
Oracle
+1.916.761.9555
jennifer.yamamoto@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Jennifer Yamamoto

  • +1.916.761.9555

Oracle Internet of Things Cloud Applications Enable Organizations to Optimize the Digital Supply Chain with Improved Visibility and Predictive Insights

Oracle Press Releases - Mon, 2017-02-13 14:15
Press Release
Oracle Internet of Things Cloud Applications Enable Organizations to Optimize the Digital Supply Chain with Improved Visibility and Predictive Insights
IoT predictive analytics automate business processes and operations across the supply chain to enhance the customer experience

Oracle Modern Supply Chain Experience, San Jose, Calif.—Feb 13, 2017

Oracle announced that it is expanding its Internet of Things (IoT) portfolio with four new cloud solutions to help businesses fully utilize the benefits of digital supply chains. These solutions enable businesses to detect, analyze, and respond to IoT signals and incorporate these insights into existing and rapidly evolving market capabilities.

Signals from IoT-enabled devices are growing exponentially and represent an opportunity for organizations that are able to manage, interpret, and leverage these vast sources of data. Organizations with the tools to integrate device data into business processes and applications can gain critical predictive insights and drive cost-effective actions. IoT solutions enable businesses to deliver innovative new services faster and with less risk to their customers.

“The world is becoming ever-more digital and IoT is the next step on that journey—delivering better customer experiences and helping organizations achieve a competitive advantage," said Bhagat Nainani, group vice president of IoT applications development at Oracle. “Oracle IoT Applications enable businesses to leverage connected devices and equipment to modernize the digital supply chain.”

Oracle IoT Applications support three important functions: device registration and data collection; signal analysis and interpretation; and connection to specific business processes. By applying advanced, predictive analytics to the device signals, Oracle IoT Applications calculate complex business-specific KPIs and trigger automated actions in real time. The solutions can deliver capabilities, such as predictive maintenance, diagnostic dashboards, and increased real-time visibility, and can be applied to manufacturing, quality, and product data, as well as logistics and employee movements.

The new cloud applications integrate with Oracle Supply Chain Management (SCM) Cloud Applications to make IoT signals actionable and include the following products:

  • IoT Asset Monitoring Cloud: Monitors assets, utilization, availability, and data from connected sensors and creates incidents in the backend SCM, ERP, or Service Clouds to automate the workflows
  • IoT Connected Worker Cloud: Tracks employees to support safety, service, and regulatory compliance initiatives
  • IoT Fleet Monitoring Cloud: Monitors the position and progress of passenger, service, and delivery vehicles, as well as driver behavior
  • IoT Production Monitoring Cloud: Monitors production equipment to assess and predict manufacturing issues

These IoT Cloud applications are built on the Oracle IoT Cloud and Oracle Big Data Cloud. They integrate with Oracle SCM Cloud and Oracle Service Cloud, as well as on-premises software, to enable operational visibility and facilitate better customer service and responsiveness.

“Noble Plastics is an innovative custom injection molding manufacturer that leverages robotics for automation. We see tremendous potential in IoT to drive predictive maintenance of industrial robots,” said Scott Rogers, Technical Director, Noble Plastics. “Oracle IoT Asset Monitoring Cloud will help us in monitoring FANUC robots that are deployed on our plant floors and alert technicians proactively on their mobile device. This will enhance product quality, production efficiency and throughput while controlling costs.”

For additional information, visit Oracle Cloud and connect with Oracle SCM Cloud and Oracle IoT Cloud on Twitter.

 
Contact Info
Joann Wardrip
Oracle
+1.650.607.1343
joann.wardrip@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Joann Wardrip

  • +1.650.607.1343

Where do you specify the Date Format Mask

Dimitri Gielis - Mon, 2017-02-13 10:32

When reviewing Oracle APEX applications I often see hardcoded date or timestamp formats.
You can define your date formats in multiple places in your application: in your item or column attributes, as part of your code, e.g. TO_CHAR(sysdate, 'DD-MON-YYYY HH24:MI'), or, if you want to make it more reusable, in a substitution string. That will all work, but you can make your life easier, both for yourself and for anyone who needs to maintain your code…

APEX itself provides, in the Globalization attributes (Shared Components), a place where you can define the default date and timestamp format masks for your entire application. I consider using those fields a best practice: the format is defined in one logical place, so when you need to change it, you do it once and you’re done. In your custom SQL and PL/SQL code you can also reference those format masks through predefined substitution strings:

  • APP_NLS_DATE_FORMAT
  • APP_DATE_TIME_FORMAT
  • APP_NLS_TIMESTAMP_FORMAT
  • APP_NLS_TIMESTAMP_TZ_FORMAT

e.g. TO_CHAR(sysdate, :APP_NLS_DATE_FORMAT)

Here’s a screenshot which shows which substitution string corresponds with which field:

Application Attributes - Globalization

You can define the format mask you want, or you can click the arrow to see most used format masks represented with an example. To make it a bit easier, I put the format mask (in red) next to it, so you see the underlying format mask more easily:

Possible date format masks defined in the pop-up

If you need to make the format mask dynamic, for example using different format masks for different languages, APEX doesn’t allow you to translate that substitution string through Text Messages, but you can work around it by using your own substitution string and having that filled dynamically. In the Globalization attributes you would then add, instead of a hardcoded format mask, your own substitution string, e.g. &MY_TRANSLATED_DATE.FORMAT.

Categories: Development
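As an aside, if you ever need to reproduce such masks outside the database, Oracle format masks and strftime directives are only roughly equivalent. An illustrative mapping (approximate: the two systems differ in case handling, locale, and padding; masks chosen here simply mirror common entries in the APEX pop-up):

```python
from datetime import datetime

# Approximate Oracle-mask -> strftime equivalents (illustrative only).
ORACLE_TO_STRFTIME = {
    "DD-MON-YYYY":         "%d-%b-%Y",
    "DD-MON-YYYY HH24:MI": "%d-%b-%Y %H:%M",
    "YYYY-MM-DD":          "%Y-%m-%d",
}

d = datetime(2017, 2, 13, 16, 26)
for mask, fmt in ORACLE_TO_STRFTIME.items():
    print(f"{mask:20} -> {d.strftime(fmt)}")
```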

Band Join 12c

Jonathan Lewis - Mon, 2017-02-13 07:53

One of the optimizer enhancements that appeared in 12.2 for SQL is the “band join”, which makes certain types of merge join much more efficient. Consider the following query (I’ll supply the SQL to create the demonstration at the end of the posting), which joins two tables of 10,000 rows each using a “between” predicate on a column which (just to make it easy to understand the size of the result set) happens to be unique with sequential values, though there’s no index or constraint in place:

select
        t1.v1, t2.v1
from
        t1, t2
where
        t2.id between t1.id - 1
                  and t1.id + 2
;

This query returns nearly 40,000 rows. Except for the values at the extreme ends of the range each of the 10,000 rows in t2 will join to 4 rows in t1 thanks to the simple sequential nature of the data. In 12.2 the query, with rowsource execution stats enabled, completed in 1.48 seconds. In 12.1.0.2 the query, with rowsource execution stats OFF, took a little over 14 seconds. (With rowsource execution stats enabled it took 12.1.0.2 a little over 1 minute to return the first 5% of the data – I didn’t bother to wait for the rest, though the rate would have improved over time.)

Here are the two execution plans – spot the critical difference:


12.1.0.2
-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |    25M|   715M|  1058  (96)| 00:00:01 |
|   1 |  MERGE JOIN          |      |    25M|   715M|  1058  (96)| 00:00:01 |
|   2 |   SORT JOIN          |      | 10000 |   146K|    29  (11)| 00:00:01 |
|   3 |    TABLE ACCESS FULL | T1   | 10000 |   146K|    27   (4)| 00:00:01 |
|*  4 |   FILTER             |      |       |       |            |          |
|*  5 |    SORT JOIN         |      | 10000 |   146K|    29  (11)| 00:00:01 |
|   6 |     TABLE ACCESS FULL| T2   | 10000 |   146K|    27   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("T2"."ID"<="T1"."ID"+2)
   5 - access("T2"."ID">="T1"."ID"-1)
       filter("T2"."ID">="T1"."ID"-1)

12.2.0.1
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 40000 |  1171K|    54  (12)| 00:00:01 |
|   1 |  MERGE JOIN         |      | 40000 |  1171K|    54  (12)| 00:00:01 |
|   2 |   SORT JOIN         |      | 10000 |   146K|    27  (12)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| T1   | 10000 |   146K|    25   (4)| 00:00:01 |
|*  4 |   SORT JOIN         |      | 10000 |   146K|    27  (12)| 00:00:01 |
|   5 |    TABLE ACCESS FULL| T2   | 10000 |   146K|    25   (4)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T2"."ID">="T1"."ID"-1)
       filter("T2"."ID"<="T1"."ID"+2 AND "T2"."ID">="T1"."ID"-1)

Notice how operation 4, the FILTER, that appeared in 12.1 has disappeared in 12.2 and the filter predicate that it used to hold is now part of the filter predicate of the SORT JOIN that has been promoted to operation 4 in the new plan.

As a reminder – the MERGE JOIN operates as follows: for each row returned by the SORT JOIN at operation 2 it calls operation 4. In 12.1 this example will then call operation 5 so the SORT JOIN there happens 10,000 times. It’s important to know, though, that the name of the operation is misleading; what’s really happening is that Oracle is “probing a sorted result set in local memory” 10,000 times – it’s only on the first probe that it finds it has to call operation 6 to read and move the data into local memory in sorted order.

So in 12.1 operation 5 probes (accesses) the in-memory data set starting at the point where t2.id >= t1.id - 1; I believe there’s an optimisation here because Oracle will recall where it started the probe last time and resume searching from that point; having found the first point in the in-memory set where the access predicate is true, Oracle will walk through the list passing each row back to the FILTER operation as long as the access predicate is still true, and it will be true right up until the end of the list. As each row arrives at the FILTER operation Oracle checks to see if the filter predicate there is true and passes the row up to the MERGE JOIN operation if it is. We know that on each cycle the FILTER operation will start returning false after receiving 4 rows from the SORT JOIN operation – Oracle doesn’t. On average the SORT JOIN operation will send 5,000 rows to the FILTER operation (for a total of 50,000,000 values passed and discarded).

In 12.2, and for the special case here where the join predicate uses constants to define the range, Oracle has re-engineered the code to eliminate the FILTER operation and to test both parts of the between clause in the same subroutine it uses to probe and scan the rowsource. In 12.2 the SORT JOIN operation will pass 4 rows up to the MERGE JOIN operation and stop scanning on the fifth row it reaches. In my examples that’s an enormous (CPU) saving in subroutine calls and redundant tests.
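The saving can be modelled by counting rows touched per probe. The sketch below (a simplified Python model, not Oracle's actual code) contrasts a 12.1-style probe, which scans to the end of the sorted rowsource and relies on a separate FILTER, with a band-join-style probe that stops as soon as the upper bound fails:

```python
def probe_12_1(sorted_t2, lo, hi):
    """12.1-style: the SORT JOIN probe returns every row from the first
    id >= lo to the end of the rowsource; a separate FILTER operation
    then discards rows with id > hi."""
    candidates = [x for x in sorted_t2 if x >= lo]   # rows the probe passes up
    kept = [x for x in candidates if x <= hi]        # rows the FILTER keeps
    return kept, len(candidates)                     # (result, rows touched)

def probe_band(sorted_t2, lo, hi):
    """12.2 band-join style: both bounds are tested inside the probe, so
    the scan stops on the first row past the upper bound."""
    kept, touched = [], 0
    for x in sorted_t2:
        if x < lo:
            continue
        touched += 1
        if x > hi:
            break            # early stop: the FILTER operation is gone
        kept.append(x)
    return kept, touched

t2 = list(range(10_000))                        # ids 0..9999, like the demo
rows_old, touched_old = probe_12_1(t2, 41, 44)  # t1.id = 42: band [41, 44]
rows_new, touched_new = probe_band(t2, 41, 44)
print(rows_old == rows_new, touched_old, touched_new)  # True 9959 5
```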

Footnote:

This “band-join” mechanism only applies when the range is defined by constants (whether literals or bind variables). It doesn’t work with predicates such as:

where t2.id between t1.id - t1.step_back and t1.id + t1.step_forward

The astonishing difference in performance due to enabling rowsource execution statistics is basically due to the number of subroutine calls eliminated – I believe (subject to a hidden parameter that controls a “sampling frequency”) that Oracle will call the O/S clock twice each time it calls the SORT JOIN operation from the FILTER operation to acquire the next row. In 12.1 we’re doing 50M redundant calls to SORT JOIN.

The dramatic difference in performance even when rowsource execution statistics isn’t enabled is probably something you won’t see very often in a production system – after all, I engineered a fairly extreme data set and query for the purposes of demonstration. Note, however, that the band join does introduce a change in cost, so it’s possible that on the upgrade you may find a few cases where the optimizer switches from a nested loop join to a merge join using a band join.


Oracle Launches Cloud Service to Help Organizations Integrate Disparate Data and Drive Real-Time Analytics

Oracle Press Releases - Mon, 2017-02-13 07:00
Press Release
Oracle Launches Cloud Service to Help Organizations Integrate Disparate Data and Drive Real-Time Analytics
Enhances Oracle Cloud Platform Portfolio with Oracle Data Integrator Cloud Service

Redwood Shores, Calif.—Feb 13, 2017

Oracle today expanded Oracle Cloud Platform’s data integration offerings with the launch of Oracle Data Integrator Cloud. This new cloud service significantly simplifies and accelerates cross-enterprise data integration to support real-time analytics that help organizations drive better business decisions. 

In today’s information-driven economy, data is a fundamental asset to most businesses. As more and more data moves to the cloud, getting information and insight to the right people and the right applications at the right time becomes progressively more difficult. With today’s introduction of Oracle Data Integrator Cloud, organizations can improve their agility by deploying projects more quickly, reduce risk with an open, non-proprietary technology, and reduce costs through better productivity. 

“To be effective and agile, enterprises need seamless communication and flow of data between sources and targets - data originating from IoT, Web, and business applications or data that is stored in the cloud or on premises,” said Jeff Pollock, vice president of product management, Oracle. “Oracle Data Integrator Cloud provides businesses with a high-performance, simple, and integrated cloud service to execute data transformations where the data lies, with no hand coding required, and without having to copy data unnecessarily.”

Easy to use and integrate, Oracle Data Integrator Cloud helps organizations improve productivity, reduce development costs, and lower total cost of ownership by facilitating better data movement and transformation between Oracle and non-Oracle systems, data sources, and applications. It offers a flow-based declarative user interface along with release management capabilities that allow customers to improve productivity and better manage their code, as well as their development, testing, and production environments. Oracle Data Integrator Cloud’s high-performance architecture, with its E-LT capabilities and advanced parallelism options, enables faster, more efficient loading and transformation for data marts, data warehouses, and big data systems.

Oracle Data Integrator Cloud is fully integrated with Oracle’s PaaS offerings, including Oracle Database Cloud, Oracle Database Exadata Cloud, and Oracle Big Data Cloud. Oracle also delivers pre-built integration for non-Oracle solutions, allowing users to seamlessly switch between underlying Big Data technologies such as Hive, HDFS, HBase, and Sqoop.

Oracle Cloud

Oracle Cloud is the industry’s broadest and most integrated public cloud, offering a complete range of services across SaaS, PaaS, and IaaS. It supports new cloud environments, existing ones, and hybrid, and all workloads, developers, and data.  The Oracle Cloud delivers nearly 1,000 SaaS applications and 50 enterprise-class PaaS and IaaS services to customers in more than 195 countries around the world and supports 55 billion transactions each day.

For more information, please visit us at http://cloud.oracle.com.

Contact Info
Nicole Maloney
Oracle
+1.415.235.4033
nicole.maloney@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Nicole Maloney

  • +1.415.235.4033

Kristin Reeves

  • +1.415.856.5145

UNION vs UNION ALL: What’s The Difference?

Complete IT Professional - Mon, 2017-02-13 05:00
What’s the difference between UNION and UNION ALL in Oracle SQL? There are a few. Learn what they are in this article. What Is UNION and UNION ALL? First of all, let’s explain what they are. UNION and UNION ALL are both “set operators”. They are keywords you can use to combine two sets of […]
Categories: Development
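A quick mental model for the difference (a Python analogy, not SQL): UNION ALL simply concatenates the two sets, while UNION also removes duplicates, and that de-duplication step is why UNION is normally the more expensive operator.

```python
a = [1, 2, 2, 3]
b = [3, 4]

# UNION ALL: plain concatenation -- duplicates survive, no extra work.
union_all = a + b

# UNION: duplicates across (and within) both sets are removed; sorting
# here just makes the de-duplicated result easy to read.
union = sorted(set(a + b))

print(union_all)   # [1, 2, 2, 3, 3, 4]
print(union)       # [1, 2, 3, 4]
```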

Oracle 12c – Issues with the HEATMAP Segment even if the heat map feature is not used

Yann Neuhaus - Mon, 2017-02-13 04:55

When I don’t need a feature, I don’t turn it on or use it, because that reduces the possibility of running into issues. Most of the time this is true; however, during the preparation for an RMAN workshop, the RMAN list failure command showed me the following dictionary issue.

RMAN> list failure;

using target database control file instead of recovery catalog
Database Role: PRIMARY

List of Database Failures
=========================

Failure ID Priority Status    Time Detected        Summary
---------- -------- --------- -------------------- -------
2          CRITICAL OPEN      13-FEB-2017 10:12:26 SQL dictionary health check: seg$.type# 31 on object SEG$ failed

At first I thought it might be related to some incorrect errors shown by the health check (DBMS_HM), because there used to be some issues with that tool. But even after applying the following patch, nothing changed and the error still appeared.

19543595: INCORRECT HEALTHCHECK ERRORS FROM DBMS_HM – FALSE ERRORS ON TS$ , FILE$ OR USER

So I started a manual health check again to get some more details.

SQL> BEGIN
  2  DBMS_HM.RUN_CHECK (check_name => 'Dictionary Integrity Check',
  3  run_name => 'WilliamsDICTrun002',
  4  input_params => 'CHECK_MASK=ALL');
  5  END;
  6  /

PL/SQL procedure successfully completed.

SQL> SELECT DBMS_HM.GET_RUN_REPORT('WilliamsDICTrun002') from dual;

DBMS_HM.GET_RUN_REPORT('WILLIAMSDICTRUN002')
---------------------------------------------------------------------
Basic Run Information
 Run Name                     : WilliamsDICTrun002
 Run Id                       : 61
 Check Name                   : Dictionary Integrity Check
 Mode                         : MANUAL
 Status                       : COMPLETED
 Start Time                   : 2017-02-13 10:56:58.250100 +01:00
 End Time                     : 2017-02-13 10:56:58.689301 +01:00
 Error Encountered            : 0
 Source Incident Id           : 0
 Number of Incidents Created  : 0

Input Paramters for the Run
 TABLE_NAME=ALL_CORE_TABLES
 CHECK_MASK=ALL

Run Findings And Recommendations
 Finding
 Finding Name  : Dictionary Inconsistency
 Finding ID    : 62
 Type          : FAILURE
 Status        : OPEN
 Priority      : CRITICAL
 Message       : SQL dictionary health check: seg$.type# 31 on object SEG$
               failed
 Message       : Damaged rowid is AAAAAIAABAAAK+RAAc - description: Ts# 1
               File# 2 Block# 28032 is referenced

Now I have the ROWID, the file number, and the block number of the affected object. Let’s see what it is.

SQL> select FILE#, BLOCK#, TYPE#, TS#, BLOCKS from seg$ where rowid='AAAAAIAABAAAK+RAAc';

     FILE#     BLOCK#      TYPE#        TS#     BLOCKS
---------- ---------- ---------- ---------- ----------
         2      28032         11          1       1024
		 

SQL> SELECT segment_name, segment_type, block_id, blocks
  2  FROM   dba_extents
  3  WHERE
  4  file_id = 2
  5  AND
  6  ( 28032 BETWEEN block_id AND ( block_id + blocks ) );

SEGMENT_NAME               SEGMENT_TYPE               BLOCK_ID     BLOCKS
-------------------------- ------------------------ ---------- ----------
HEATMAP                    SYSTEM STATISTICS             28032       1024

Really strange. It is related to the HEATMAP segment, but I am not using the heat map feature, nor have I used it in the past.

SQL> show parameter heat

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
heat_map                             string      OFF

SQL> select name, DETECTED_USAGES from DBA_FEATURE_USAGE_STATISTICS where name like 'Heat%';

NAME                     DETECTED_USAGES
------------------------ ---------------
Heat Map                               0

But how can I get this fixed now? You can either ignore the issue, open an SR with Oracle, or drop the statistics segment if you are not using the heat map feature.

In my case, I decided to drop the statistics segment by issuing the following commands. Dropping the statistics segment works by setting the underscore parameter “_drop_stat_segment” to 1.

SQL> select SEGMENT_NAME, SEGMENT_TYPE from dba_extents where SEGMENT_TYPE = 'SYSTEM STATISTICS';

SEGMENT_NAME               SEGMENT_TYPE
-------------------------- ------------------------
HEATMAP                    SYSTEM STATISTICS

SQL> ALTER SYSTEM SET "_drop_stat_segment"=1 scope=memory;

System altered.

SQL> select SEGMENT_NAME, SEGMENT_TYPE from dba_extents where SEGMENT_TYPE = 'SYSTEM STATISTICS';

no rows selected

The heat map segment is gone now. Let’s run the dictionary check again.

SQL> BEGIN
  2  DBMS_HM.RUN_CHECK (check_name => 'Dictionary Integrity Check',
  3  run_name => 'WilliamsDICTrun003',
  4  input_params => 'CHECK_MASK=ALL');
  5  END;
  6  /

PL/SQL procedure successfully completed.

SQL> SELECT DBMS_HM.GET_RUN_REPORT('WilliamsDICTrun003') from dual;

DBMS_HM.GET_RUN_REPORT('WILLIAMSDICTRUN003')
---------------------------------------------------------------------
Basic Run Information
 Run Name                     : WilliamsDICTrun003
 Run Id                       : 81
 Check Name                   : Dictionary Integrity Check
 Mode                         : MANUAL
 Status                       : COMPLETED
 Start Time                   : 2017-02-13 11:17:15.190873 +01:00
 End Time                     : 2017-02-13 11:17:15.642501 +01:00
 Error Encountered            : 0
 Source Incident Id           : 0
 Number of Incidents Created  : 0

Input Paramters for the Run
 TABLE_NAME=ALL_CORE_TABLES
 CHECK_MASK=ALL

Run Findings And Recommendations


RMAN> list failure;

using target database control file instead of recovery catalog
Database Role: PRIMARY

no failures found that match specification

 

Looks much better now.

Conclusion

Even if you are not using certain features, you can still run into trouble with them. :-)

 

Cet article Oracle 12c – Issues with the HEATMAP Segment even if the heat map feature is not used est apparu en premier sur Blog dbi services.

Webcast: "EBS Technology: Latest Features and Roadmap"

Steven Chan - Mon, 2017-02-13 02:06

Oracle University has a wealth of free webcasts for Oracle E-Business Suite. If you're looking for a summary of recent updates from the Applications Technology Group, see:

Lisa Parekh, Vice President Technology Integration, provides an overview of Oracle’s recent advances and product strategy for Oracle E-Business Suite technology. This is the cornerstone session for Oracle E-Business Suite technology. Come hear how you can get the most out of Oracle E-Business Suite by taking advantage of the latest user interface updates and mobile applications.  Learn about systems administration and configuration management tools for running Oracle E-Business Suite on-premises or in the cloud, and hear how the latest technologies can be used with Oracle E-Business Suite to improve performance, security, and ease of integration for your system. This material was presented at Oracle OpenWorld 2016.

Categories: APPS Blogs

How to retrieve current value of oracle identity column value for foreign key purpose

Tom Kyte - Sun, 2017-02-12 22:06
Hi, after inserting into a parent table with an identity column, how do I retrieve the identity column value to use for the child table's foreign key? thx Heather
Categories: DBA Blogs
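In Oracle the standard answer is the RETURNING clause: INSERT ... RETURNING id INTO :new_id (the identity column's underlying sequence can also be read, but RETURNING is atomic and session-safe). As a stdlib-only analogy, not Oracle code, sqlite3's `lastrowid` plays the same role of handing back the generated parent key for use in the child insert:

```python
import sqlite3

# Analogy only: sqlite3's lastrowid stands in for Oracle's
# "INSERT ... RETURNING id INTO :new_id" on an identity column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE child (id INTEGER PRIMARY KEY,
                                    parent_id INTEGER REFERENCES parent(id))""")

cur = conn.execute("INSERT INTO parent (name) VALUES (?)", ("Heather",))
new_id = cur.lastrowid                       # the generated parent key
conn.execute("INSERT INTO child (parent_id) VALUES (?)", (new_id,))

row = conn.execute("SELECT parent_id FROM child").fetchone()
print(row[0] == new_id)   # True
```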

Date math to calculate A and B weeks

Tom Kyte - Sun, 2017-02-12 22:06
I am trying to calculate the Recycle Week. It is either an A or B week. My first attempt was the following with obvious problems as the first day of the week changes every year. <code> SELECT DECODE(MOD(TO_NUMBER(TO_CHAR(SYSDATE, 'WW')), 2), 0,...
Categories: DBA Blogs

"Table override": can I use this feature?

Tom Kyte - Sun, 2017-02-12 22:06
Hi! I created an empty table, lets say, tb_test (col1 varchar2(20 char), col2 varchar2(20 char)); Now I created a view of this table, lets say, vw_test as select * from tb_test; If I do select * from vw_test; I will get nothing, it is empty....
Categories: DBA Blogs

Goldengate 12c Find log sequence#, rba# for integrated extract (Doc ID 2006932.1)

Michael Dinh - Sun, 2017-02-12 20:50

When using an integrated extract, the info command does not show the Seqno and RBA (e.g. Seqno 3292, RBA 79236752).
The send status command does show the Seqno and RBA, but it cannot be used when the process is stopped.
For Oracle GoldenGate version 12.1.2.1.2 and later, there is a debug option for info that retrieves the Seqno and RBA.

oracle@arrow1:HAWKA:/u01/app/12.2.0.1/ggs01
$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 12 2015 00:54:38
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.



GGSCI (arrow1.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     E_HAWK      00:00:10      00:00:06
EXTRACT     STOPPED     P_HAWK      00:00:00      00:02:03


GGSCI (arrow1.localdomain) 2> info e*

EXTRACT    E_HAWK    Last Started 2017-02-12 18:35   Status RUNNING
Checkpoint Lag       00:00:10 (updated 00:00:08 ago)
Process ID           1665
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2017-02-12 18:36:06
                     SCN 0.4928929 (4928929)


GGSCI (arrow1.localdomain) 3> info e* debug

EXTRACT    E_HAWK    Last Started 2017-02-12 18:35   Status RUNNING
Checkpoint Lag       00:00:10 (updated 00:00:02 ago)
Process ID           1665
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2017-02-12 18:36:16  Seqno 3292, RBA 79236752
                     SCN 0.4928939 (4928939)


GGSCI (arrow1.localdomain) 4> send e* status

Sending STATUS request to EXTRACT E_HAWK ...


EXTRACT E_HAWK (PID 1665)
  Current status: Recovery complete: Processing data

  Current read position:
  Redo thread #: 1
  Sequence #: 3292
  RBA: 79263888
  Timestamp: 2017-02-12 18:36:47.000000
  SCN: 0.4928987 (4928987)
  Current write position:
  Sequence #: 0
  RBA: 1420
  Timestamp: 2017-02-12 18:36:56.251219
  Extract Trail: ./dirdat/aa



GGSCI (arrow1.localdomain) 5> stop e*

Sending STOP request to EXTRACT E_HAWK ...
Request processed.


GGSCI (arrow1.localdomain) 6> send e* status

Sending STATUS request to EXTRACT E_HAWK ...

ERROR: sending message to EXTRACT E_HAWK (Connection reset by peer).


GGSCI (arrow1.localdomain) 7> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     E_HAWK      00:00:07      00:00:13
EXTRACT     STOPPED     P_HAWK      00:00:00      00:03:09


GGSCI (arrow1.localdomain) 8> info e* debug

EXTRACT    E_HAWK    Last Started 2017-02-12 18:35   Status STOPPED
Checkpoint Lag       00:00:07 (updated 00:00:19 ago)
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2017-02-12 18:37:07  Seqno 3292, RBA 79275152
                     SCN 0.4929013 (4929013)


GGSCI (arrow1.localdomain) 9> exit
oracle@arrow1:HAWKA:/u01/app/12.2.0.1/ggs01
$

12cR2 DBCA can create a standby database

Yann Neuhaus - Sun, 2017-02-12 15:33

Do you like using DBCA to create a database from the command line, with -silent -createDatabase? With a simple command line you can provision a database, with the oratab entry, tnsnames.ora entry, directory creation, and any setting you want. And you can even call a custom script to customize it further. But if you want to put it in Data Guard, you have to do the duplicate manually with RMAN. This evolves in 12.2 with a new option in DBCA to do that: dbca -silent -createDuplicateDB -createAsStandby

Limitations

I tried it in the Oracle Public Cloud, where I had just created a RAC database. But unfortunately, this new feature is only for Single Instance:

[FATAL] [DBT-16056] Specified primary database is not a Single Instance (SI) database.
CAUSE: Duplicate database operation is supported only for SI databases.

Ok. RAC is complex enough anyway, so you don’t need that quick command line to create the standby. So I tried with a single instance database:

[FATAL] [DBT-16057] Specified primary database is a container database (CDB).
CAUSE: Duplicate database operation is supported only for non container databases.

Ok. It’s a bit surprising to have a new feature in 12.2 that works only on the architecture that was deprecated in 12.1, but if we think about it, DBCA is for fast provisioning. In multitenant you create a CDB once, put it in Data Guard, and the fast provisioning comes with ‘create pluggable database’. And deprecated doesn’t mean that we do not use it, so it is good to have a simple command line tool for easy provisioning of non-CDBs.

Then I tried on a non-CDB that I created in 12.2.

I’m a big fan of EZCONNECT, but I had a few problems with it. What’s worth knowing is that there is no ‘impossible to connect’ message. When DBCA cannot connect, the following message is raised:

[FATAL] [DBT-16051] Archive log mode is not enabled in the primary database.
ACTION: Primary database should be configured with archive log mode for creating a duplicate or standby database.

just because this is the first thing that DBCA checks, and this is where it fails when the connection is not ok.
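For reference, here is what the EZCONNECT form of -primaryDBConnectionString would look like — a sketch built from the host and service name that appear in the tnsping output further down; if you hit the misleading DBT-16051 message, it is worth testing the string with sqlplus first:

```
# hypothetical EZCONNECT string - adapt host, port and service name to your environment
//MAA.compute-usslash.oraclecloud.internal:1521/orcla.compute-usslash.oraclecloud.internal
```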

But you can also use a tnsnames.ora network service name. This is what I’ll use for -primaryDBConnectionString.

$ tnsping ORCLA
TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 11-FEB-2017 22:28:35
Copyright (c) 1997, 2016, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = MAA.compute-usslash.oraclecloud.internal)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcla.compute-usslash.oraclecloud.internal)))
OK (0 msec)

-createDuplicateDB -createAsStandby

Here is an example:

dbca -silent -createDuplicateDB -gdbName ORCLB.compute-usslash.oraclecloud.internal -sid ORCLB -sysPassword "Ach1z0#d" -primaryDBConnectionString ORCLA -createAsStandby -dbUniquename ORCLB

This connects RMAN to the target (here called the ‘primary’) through the connect string ORCLA and runs a duplicate to create ORCLB as specified.

It starts by creating a temporary listener (which is still there in listener.ora even after completion), then creates the auxiliary instance and runs RMAN:
Listener config step
33% complete
Auxiliary instance creation
66% complete
RMAN duplicate
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/ORCLB/orcla.log" for further details.

Through the RMAN API, the file names are set:


run {
set newname for datafile 1 to '/u01/app/oracle/oradata/orclb/system01.dbf' ;
set newname for datafile 3 to '/u01/app/oracle/oradata/orclb/sysaux01.dbf' ;
set newname for datafile 4 to '/u01/app/oracle/oradata/orclb/undotbs01.dbf' ;
set newname for datafile 7 to '/u01/app/oracle/oradata/orclb/users01.dbf' ;
set newname for tempfile 1 to '/u01/app/oracle/oradata/orclb/temp01.dbf' ;

and the DUPLICATE FOR STANDBY FROM ACTIVE is run:

duplicate target database
for standby
from active database
dorecover
spfile
set 'db_recovery_file_dest_size'='8405385216'
set 'compatible'='12.2.0'
set 'undo_tablespace'='UNDOTBS1'
set 'dispatchers'='(PROTOCOL=TCP) (SERVICE=ORCLAXDB)'
set 'db_name'='orcla'
set 'db_unique_name'='ORCLB'
set 'sga_target'='2281701376'
set 'diagnostic_dest'='/u01/app/oracle'
set 'audit_file_dest'='/u01/app/oracle/audit'
set 'open_cursors'='300'
set 'processes'='300'
set 'nls_language'='AMERICAN'
set 'pga_aggregate_target'='757071872'
set 'db_recovery_file_dest'='/u01/app/oracle/fast_recovery_area/orcla'
set 'db_block_size'='8192'
set 'log_archive_format'='%t_%s_%r.dbf'
set 'nls_territory'='AMERICA'
set 'control_files'="/u01/app/oracle/oradata/orclb/control01.ctl", "/u01/app/oracle/fast_recovery_area/orcla/ORCLB/control02.ctl"
set 'audit_trail'='DB'
set 'db_domain'='compute-usslash.oraclecloud.internal'
set 'remote_login_passwordfile'='EXCLUSIVE'
reset 'local_listener'
reset 'db_file_name_convert'
set 'log_archive_dest_1'='location=/u01/app/oracle/fast_recovery_area/orcla'
reset 'event'
reset 'remote_listener'
nofilenamecheck;
}

The parameters come from the ‘primary’ and are adapted for the new database. Be careful: this is where I prefer to review the parameters beforehand. For example, when you duplicate to clone the primary (without -createAsStandby), you probably don’t want to keep the same log_archive_dest that was set in a Data Guard configuration. I’ll have to post a blog about that.

At the end, the standby database is opened read-only, so be careful to close it before starting the apply of redo if you don’t have the Active Data Guard option.
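If you don’t have the Active Data Guard option, a minimal sketch of that step, run on the standby from a SQL*Plus session connected as SYSDBA (adapt to your configuration), would be:

```sql
-- close the read-only standby, mount it, and start redo apply in the background
shutdown immediate
startup mount
alter database recover managed standby database disconnect from session;
```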

Data Guard

DBCA doesn’t go beyond the DUPLICATE, and you can also use it in Standard Edition to set up a manual standby.

I hope that one day we will have an option to create the Data Guard configuration in the same process, but here you have to do it yourself:

  • No tnsnames.ora entry is added for the standby
  • The static listener entries are not added in listener.ora
  • No Data Guard configuration is there
  • The Data Guard Broker is not started, except if dg_broker_start was already set to true on the primary
  • No standby redo logs are created (except when they were already present on the primary)

You can set dg_broker_start=true and create the standby redo logs in a post-script that you call with the -customScripts argument. However, the best way is to do it in advance on the primary, so that the duplicate does the same on the standby.
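As a sketch of that preparation on the primary — the group numbers and sizes here are illustrative, assuming four 200M online redo log groups (standby redo logs should match the online redo log size, with one more group than online groups):

```sql
-- on the primary, before running dbca -createDuplicateDB:
alter database add standby logfile thread 1 group 11 size 200M;
alter database add standby logfile thread 1 group 12 size 200M;
alter database add standby logfile thread 1 group 13 size 200M;
alter database add standby logfile thread 1 group 14 size 200M;
alter database add standby logfile thread 1 group 15 size 200M;
alter system set dg_broker_start=true scope=both;
```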

So what?

You don’t need this new feature, because it is easy to automate yourself: it’s just a copy of the spfile parameters, with a few changes, and an RMAN duplicate command. But your scripts will be specialized for your environment, and generic scripts are more complex to maintain. The big advantage of having this integrated in DBCA is that it is designed for all configurations and is maintained through versions.

 

Cet article 12cR2 DBCA can create a standby database est apparu en premier sur Blog dbi services.
