Feed aggregator

Oracle Data Integrator 12c: Getting Started - Developer's Quickstart

Rittman Mead Consulting - Mon, 2016-11-14 13:18

I’ve decided that it’s time for a refresher on Oracle Data Integrator 12c. This week in the “Oracle Data Integrator 12c: Getting Started” series: getting a quick start on mapping development. Several objects must be in place before a single bit of ETL can be developed, and for those who are new to the product, as many readers of this series will be, that can be frustrating. The objects that must be in place are as follows:

  • Data Server
  • This object is the connection to your data source. Created under one of the many technologies available in ODI, this is where the JDBC URL, username, password, and other properties are all created and stored.
  • Physical Schema
  • Underneath the Data Server you’ll find the Physical Schema. This object, when connecting to a relational database, represents the database schema where the tables reside that you wish to access in ODI.
  • Logical Schema
  • Here’s where it can sometimes get a bit tricky for folks new to Oracle Data Integrator. One of the great features in ODI is how it abstracts the physical connection and schema from the logical objects. The Logical Schema is mapped to the Physical Schema by an object called a Context. This allows development of mappings and other objects to occur against the Logical Schema, shielding the physical side from the developers. When promoting code to the next environment, nothing needs to change in the developed objects for the connection.
  • Model
  • Once you have the Topology setup (Data Server, Physical Schema, Logical Schema), you can then create your Model. This is going to be where the logical Datastores are grouped for a given schema. There are many other functions of the Model object, such as journalizing (CDC) setup, but we’ll save those features for another day.
  • Datastore
  • The Datastore is a logical representation of a table, file, XML element, or other physical object. Stored in the form of a table, the Datastore has columns and constraints. This is the object that will be used as a source or target in your ODI Mappings.

Now you can create your mapping. Whew!

Over the years, Oracle has worked to make the process of getting started a lot easier. Back in ODI 11g, the Oracle Data Integrator QuickStart was a 10-step checklist, where each step led to another section in the documentation. A nice gesture by Oracle, but by no means “quick”. There was also a great tool, the ODI Accelerator Launchpad, built in Groovy by David Allan of the Oracle DI team. Now we were getting closer to something “quick”. But this was simply a script that you had to run, not an integrated part of the ODI Studio platform. Finally, with the release of ODI 12.1.3, the Quickstart was introduced. The New Model and Topology Objects wizard allows you to create everything you need in order to reverse engineer tables into ODI Datastore objects and begin creating your first mappings.

ODI 12c New Model and Topology Objects wizard

Going through the wizard is much simpler than manually setting up the Topology objects and Model for folks just getting started with Oracle Data Integrator. The blog post from Oracle linked above can walk you through the process and I’ve added a demonstration video below that does the same. As a bonus in my demo, I’ve added a tip to help you get your initial load mappings created in an instant. Have a look:

There you have it, a quick and easy way to get started with Oracle Data Integrator 12c and create your first source to target Mapping. If you have further questions and would like a more detailed answer, you can always join one of the Rittman Mead ODI bootcamps to learn more from one of our data integration experts. Up next in the Getting Started series, we’ll look at enhancing the ODI metadata by adding constraints and other options.

Categories: BI & Warehousing

Bulgarian Oracle User Group (BGOUG) 2016 : Pravets to Birmingham

Tim Hall - Mon, 2016-11-14 13:05

A group of us were being picked up by a minibus at 09:50 for the trip back to the airport. Timo Raitalaakso and Gianni Ceresa were on the same flight as me for the first leg. We said our goodbyes to everyone in the hotel lobby, then it was off to Sofia airport.

The airport was very quiet when we arrived. We checked in and dropped off our bags, then walked straight through security. It really doesn’t get easier than that. Timo, Gianni and myself then sat and chatted until it was time to board.

The flight to Munich was listed as a two hour flight, but I have no idea how long it actually took. I was reading a novel written by one of my friends during the trip. The guy a couple of seats along was snoring so loud it kept making me laugh. I’m not sure how anyone could sleep in the same house as him!

We arrived at Munich, where I said goodbye to Timo and Gianni, before trudging around for quite some time trying to find my gate. It was a 1:40 layover for me, so it wasn’t a rush.

The flight from Munich to Birmingham was another two hour flight. I spent the journey reading again, so I didn’t really notice much about the flight.

Back in Birmingham, I got my case and took a taxi home, while continuing to read my book. By the time I got home I was feeling quite drained, so I went to bed early, ready to start the working week!

That marked the end of my last international event of the year and I’m looking forward to spending some time at home in a single timezone. The last few months have been a killer!

Cheers

Tim…


Oracle Marketing Cloud Helps B2B Marketers Accelerate Lead Generation

Oracle Press Releases - Mon, 2016-11-14 12:29
Press Release
Oracle Marketing Cloud Helps B2B Marketers Accelerate Lead Generation

New marketing automation and content marketing capabilities break down marketing silos and simplify cross-device content creation

Redwood Shores, Calif.—Nov 14, 2016

Oracle today announced enhancements to the marketing automation and content marketing capabilities within the Oracle Marketing Cloud that simplify digital marketing and empower marketers to deliver a truly personalized cross-channel customer experience. The latest additions to Oracle Eloqua and Oracle Content Marketing enable marketers to create and distribute content, and easily transform data in order to rapidly adapt to customer behavior and needs. This helps customers to increase sales and marketing collaboration, build stunning cross-device content and accelerate lead generation.

Ever-increasing customer expectations are forcing B2B organizations to rethink established marketing processes in order to break down internal silos between marketing, sales and other customer-facing departments and prevent a fragmented customer experience. To address this challenge, Oracle has introduced a new Content Portal, Program Canvas and Responsive Content Editor within the Oracle Marketing Cloud. These innovative new capabilities give marketers the power to align sales and marketing by making approved marketing content easy to find, track and share.

“The very nature of B2B marketing is changing and as a result marketers need to rethink content strategies and work closer than ever with sales to generate and convert leads,” said Stephen Streich, senior director of product management, Oracle Marketing Cloud. “With the latest additions to Oracle Eloqua and Oracle Content Marketing, Oracle is empowering marketers to quickly create and share compelling content across their organization and reduce the number of steps required to identify and pursue new leads. This will drive efficiency across the marketing and sales process and ultimately help marketers improve the customer experience and lead generation.”

The new enhancements to Oracle Eloqua and Oracle Content Marketing enable marketers to:

  • Find, track and share content across the organization: The new Content Portal allows cross-functional sales and marketing teams to find and utilize the right content at the right time using search criteria including Sales Stage, Buyer Persona, Content Type and custom fields. Individual users can subscribe to content that is relevant for them, which then triggers automated notifications as soon as new content assets become available. Easily embedded into any third-party application or page, the new Content Portal also allows users to work in the tools they know, enriched with the capabilities they need to easily find, track and share marketing-approved content.
  • Improve speed to lead and data normalization: The new Program Canvas empowers marketers to quickly set up data transformations and data normalization workflows. A new Listener Framework makes the data workflows dramatically faster and more responsive by listening to lead scoring models, forms and new contact creation events to give marketers the ability to be more responsive to critical prospect behaviors. In addition, next-generation application integration capabilities significantly reduce the number of workflow steps necessary to manage and maintain data, improving speed to lead while keeping databases maintained.
  • Streamline and simplify responsive content creation: Slated for release in early CY 2017, the new Responsive Content Editor helps marketers make content more meaningful and responsive by removing technical roadblocks.

The Oracle Marketing Cloud is part of the Oracle Customer Experience (CX) Cloud, one of the industry’s most complete CX solutions. Oracle CX Cloud empowers organizations to improve experiences, enhance loyalty, differentiate their brands, and drive measurable results by creating consistent, connected, and personalized brand experiences across all channels and devices.

Contact Info
Simon Jones
PR for Oracle
+1.415.856.5155
sjones@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Simon Jones

  • +1.415.856.5155

Small and Medium-Sized Businesses Scale with Oracle Cloud Platform

Oracle Press Releases - Mon, 2016-11-14 09:56
Press Release
Small and Medium-Sized Businesses Scale with Oracle Cloud Platform

Oracle Enables Companies of All Sizes to Succeed with Easy-to-Deploy Cloud Solutions

Redwood Shores, Calif.—Nov 14, 2016

Oracle Corporation announced today that small and medium-sized businesses (SMBs) are turning to Oracle for strategic collaboration on their journey to the Cloud. SMBs are utilizing the benefits of enterprise-grade capabilities delivered by Oracle Cloud Platform for rapid application development, fast and predictable performance, reliable security, and elastic scalability to support their growth.

Oracle Cloud Platform offers an integrated offering across Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) to enable developers, IT and line of business leaders to leverage the Cloud. These offerings allow SMBs to develop, deploy and manage any application or workload. Oracle offers a strategic platform to help solve business challenges and ensure time-to-value with the cloud. This offering allows SMBs to utilize the same best-in-class features that enterprises leverage.

MCH Data Taps Oracle Database Cloud Service for Seamless Transition into the Cloud

MCH Strategic Data is a data services company headquartered in Sweet Springs, Missouri that specializes in collecting and compiling data from institutions in the education, healthcare, government and religious sectors. With its nimble 100-person staff, MCH relies on third-party vendors to manage many of its technology systems. When one of MCH Data’s vendors needed to be replaced, they standardized on Oracle as other vendors failed to meet their needs. Oracle Database Cloud Service provided an easy and fast transition to the Cloud as MCH Data was able to provision a complete Oracle Database Cloud Service environment in a few minutes and get started immediately without upfront costs.

“The power of data in the Cloud is in the ability to rapidly leverage it for business insights that drive our customers’ success,” said Brian Vogelsmeier, director of IT, MCH Strategic Data. “As collecting data is paramount to our business, we needed to pick a vendor that has best-in-class database Cloud technology that makes backing up the database seamless, hence our choice to leverage Oracle’s services.”

Pragmatyxs Standardized on Oracle Database Cloud Service, PaaS and IaaS for Efficiency, Security and 24/7 Availability

Pragmatyxs, a company that works with Fortune 500 clients to ensure smooth communications between their supply chain and finance systems, as well as the barcode and product labels mandated by regulatory bodies, also leverages Oracle. All of Pragmatyxs’ clients’ product labels must comply with industry and regulatory standards, across dozens of countries. Pragmatyxs had an Oracle Database on premises, in addition to using Java for product development. They chose to migrate both to the Cloud in order to reduce the amount of time they spent on maintenance and support, without having the upfront hardware costs, without needing to know backup and recovery commands, and without having to perform complex tasks such as database software upgrades and patching.

“We launched our Cloud-based label printing service for our partners and remote facilities, which was a strategic initiative for our business. We have 15 employees, so we needed a solution that was the most efficient and secure, with 24/7 availability,” said Paul Van Hout, CEO and Founder, Pragmatyxs. “Pragmatyxs utilizes Oracle Database Cloud Service, Java Cloud Service, Infrastructure as a Service, and Messaging Cloud Service. We are maximizing the efficiency promised by the Cloud while giving our customers a better, more configurable product. With Oracle, we can stay ahead of the industry and compete like we never could before.”

IQMS ERP Selects Oracle Cloud Platform to Offer Customers Cloud Backup and Industry-Leading Security

After 27 years of offering manufacturing operations applications entirely under a licensed, on-premises model, IQMS ERP, a comprehensive manufacturing MES and ERP software system, now offers the choice to subscribe to the software as a service running on Oracle Cloud Platform. This allows IQMS to offer customers capabilities they didn’t have before, such as Oracle Business Intelligence Cloud Service alongside its IQMS application, providing the tools for companies to analyze and visualize the extensive Internet of Things (IoT) data that IQMS collects on factory-floor machine performance. Similarly, IQMS has started offering Oracle’s Cloud backup service, for both Cloud and on-premises versions of IQMS. Since IQMS applications run on the Oracle Database, the Cloud backup service drew immediate interest from customers.

“For its reliability, price point, and strong security, the real value of Oracle Cloud Platform is being able to offer customers more services along with added simplicity,” said Gary Nemmers, CEO and President, IQMS. “If any component fails, factories stop and that can’t happen. With Oracle Cloud Platform, our customers have benefits like more extensive data encryption than they run in house and some have even freed up resources previously tied up with on-premises datacenter maintenance.”

“We are proud to be helping businesses of all sizes build, grow and compete in the Cloud,” said Ashish Mohindroo, Vice President, Oracle Cloud. “With a single, connected cloud, Oracle offers SMBs more than just subscription software. We are proud to be a strategic partner and to help explain how all of the pieces of our cloud work together to help solve business challenges—by helping SMBs build innovative applications and solutions for everything from sales, marketing, finance and reporting to talent and recruitment drives.”

In addition to subscription pricing that’s SMB-friendly, Oracle has improved the buying experience to help customers be successful with their cloud purchases. Oracle created The Accelerated Buying Experience to make purchasing cloud services fast and simple. Rather than taking weeks to execute a transaction, customers can now complete their purchases in a matter of hours or a few days.

Key features include:

  • Data Management
    • MySQL Cloud Service
    • Oracle Database Cloud Service
    • Exadata Express Cloud Service
    • Big Data Cloud Service
  • Cloud Native Application Development
    • Application Builder Cloud Service
    • Application Container Cloud Service
    • Java Cloud Service
    • Mobile Cloud Service
  • Integration
    • Application Integration Cloud Service
    • SOA Cloud Service
  • Application and Cloud Management
    • Application Performance Monitoring Cloud Service
    • Log Analytics Cloud Service

To learn more about Oracle’s SMB offerings, please visit us at oracle.com/smb.

To learn more about the Oracle Cloud Platform, please visit us at cloud.oracle.com.

Contact Info
Sarah Fraser
Oracle PR
+1 (650) 743.0660
sarah.fraser@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Sarah Fraser

  • +1 (650) 743.0660

Flashback Database -- 2 : Flashback Requires Redo (ArchiveLog)

Hemant K Chitale - Mon, 2016-11-14 09:03
Although Flashback Logs support the ability to execute a FLASHBACK DATABASE command, the actual Flashback also requires Redo to be applied.  This is because the Flashback resets the images of blocks but doesn't guarantee that all transactions are reset to the same point in time (any one block can contain one or more active, uncommitted transactions, and there can be multiple blocks with active transactions at any point in time).  Therefore, since Oracle must revert the database to a consistent image, it needs to be able to apply redo as well (just as it would do for a roll-forward recovery from a backup).

Here's a quick demo of what happens if the redo is not available.

SQL> alter session set nls_date_format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL> select sysdate, l.oldest_flashback_scn, l.oldest_flashback_time
2 from v$flashback_database_log l;

SYSDATE OLDEST_FLASHBACK_SCN OLDEST_FLASHBACK_T
------------------ -------------------- ------------------
14-NOV-16 22:51:37 7246633 14-NOV-16 22:39:43

SQL>

sh-4.1$ pwd
/u02/FRA/ORCL/archivelog/2016_11_14
sh-4.1$ date
Mon Nov 14 22:52:29 SGT 2016
sh-4.1$ rm *
sh-4.1$

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 750781320 bytes
Database Buffers 310378496 bytes
Redo Buffers 5517312 bytes
Database mounted.

SQL> flashback database to timestamp to_date('14-NOV-16 22:45:00','DD-MON-RR HH24:MI:SS');
flashback database to timestamp to_date('14-NOV-16 22:45:00','DD-MON-RR HH24:MI:SS')
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 7246634 to SCN 7269074
ORA-38761: redo log sequence 70 in thread 1, incarnation 5 could not be
accessed


SQL>
SQL> l
1 select sequence#, first_change#, first_time
2 from v$archived_log
3 where resetlogs_time=(select resetlogs_time from v$database)
4 and sequence# between 60 and 81
5* order by 1
SQL> /

SEQUENCE# FIRST_CHANGE# FIRST_TIME
---------- ------------- ------------------
60 7245238 14-NOV-16 22:27:35
61 7248965 14-NOV-16 22:40:46
62 7250433 14-NOV-16 22:40:52
63 7251817 14-NOV-16 22:41:04
64 7253189 14-NOV-16 22:41:20
65 7254583 14-NOV-16 22:41:31
66 7255942 14-NOV-16 22:41:44
67 7257317 14-NOV-16 22:41:59
68 7258689 14-NOV-16 22:42:10
69 7260094 14-NOV-16 22:42:15
70 7261397 14-NOV-16 22:42:22
71 7262843 14-NOV-16 22:42:28
72 7264269 14-NOV-16 22:42:32
73 7265697 14-NOV-16 22:42:37
74 7267121 14-NOV-16 22:42:43
75 7269075 14-NOV-16 22:48:05
76 7270476 14-NOV-16 22:48:11
77 7271926 14-NOV-16 22:48:17
78 7273370 14-NOV-16 22:48:23
79 7274759 14-NOV-16 22:48:32
80 7276159 14-NOV-16 22:48:39
81 7277470 14-NOV-16 22:48:43

22 rows selected.

SQL>



Note how the error message states that Redo (Archive) Log Sequence#70 is required, but provides a range of SCNs that actually spans Sequence#60 to Sequence#74!

Bottom Line : Flashback Logs alone aren't adequate to flashback the database.  You also need the corresponding Redo.
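A pre-flight check along these lines can save the surprise. Here's a sketch that joins the two views queried above; note that files removed at the OS level (as in this demo's `rm *`) aren't noticed by the database until you run CROSSCHECK ARCHIVELOG ALL in RMAN, so crosscheck first if you want the DELETED column to be trustworthy :

```sql
-- Sketch : list the archived logs that overlap the flashback window,
-- flagging any that RMAN knows are gone.  Run CROSSCHECK ARCHIVELOG ALL
-- first, else DELETED may still show 'NO' for files removed at the OS level.
select  a.sequence#, a.first_change#, a.next_change#, a.deleted
from    v$archived_log a, v$flashback_database_log l
where   a.resetlogs_time = (select resetlogs_time from v$database)
and     a.next_change# > l.oldest_flashback_scn
order   by a.sequence#;
```

Any row with DELETED='YES' inside the SCN range that ORA-38762 would report means the FLASHBACK DATABASE will fail as shown.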

Just to confirm that I can continue with the current (non-Flashbacked Database) state (in spite of the failed Flashback) :

SQL> shutdown;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 750781320 bytes
Database Buffers 310378496 bytes
Redo Buffers 5517312 bytes
Database mounted.
Database opened.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 89
Next log sequence to archive 90
Current log sequence 90
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
7289329

SQL>


Bottom Line : *Before* you attempt a FLASHBACK DATABASE to the OLDEST_FLASHBACK_TIME (or SCN) from V$FLASHBACK_DATABASE_LOG, ensure that you *do* have the "nearby" Archive/Redo Logs!
Categories: DBA Blogs

12cR2 Single-Tenant: Multitenant Features for All Editions

Yann Neuhaus - Mon, 2016-11-14 09:00

Now that 12.2 is there, in the Oracle Public Cloud Service, I can share the slides of the presentation I made for Oracle Open World:

I’ll give the same session in French, in Geneva on November 23rd at Oracle Switzerland. Ask me if you want an invitation.

The basic idea is that non-CDB is deprecated, and not available in the Oracle Public Cloud. If you don’t purchase the Multitenant Option, then you will use ‘Single-Tenant’. And in 12.2 there are interesting features coming with it. Don’t fear it. Learn it and benefit from it.


In addition to that, I’ll detail

  • The 12.2 new security feature coming with multitenant: at DOAG 2016
  • The internals of multitenant architecture: at UKOUG TECH16

And don’t hesitate to come at the dbi services booth for questions and/or demos about Multitenant.
There’s also the book I co-authored: Oracle Database 12c Release 2 Multitenant (Oracle Press) which should be available within a few weeks.

 

The article 12cR2 Single-Tenant: Multitenant Features for All Editions appeared first on Blog dbi services.

Adding Reserved command in SQLcl

Kris Rice - Mon, 2016-11-14 08:27
I saw Stephen's example of checking reserved words in the database from Veterans Day and figured I'd do the same in SQLcl.

“#595 #plsql Is it a reserved word? PL/SQL procedure to help you sort that out. Dyn PLSQL example! @oraclelivesql https://t.co/M10kVnsQ3y” — Steven Feuerstein (@sfonplsql), November 11, 2016

Checking if something is reserved seems like a nice addition.
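The database-side check that such a command would wrap is presumably just a query against the V$RESERVED_WORDS view (a sketch; `&word` is an illustrative SQL*Plus substitution variable, and the SQLcl command registration itself is omitted here):

```sql
-- Sketch : is a given word reserved, and in what sense?
-- V$RESERVED_WORDS has one row per keyword, with Y/N restriction flags.
select  keyword, reserved, res_type, res_attr, res_semi, duplicate
from    v$reserved_words
where   keyword = upper('&word');
```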

Distributed Trap

Jonathan Lewis - Mon, 2016-11-14 07:19

Here’s an interesting (and potentially very useful) observation from an OTN database forum thread that appeared at the end of last week. It concerns the problems of pulling data from remote systems, and I’ll start by building some data:

rem
rem     Script:         remote_insert_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2016
rem
rem     Last tested
rem             12.1.0.2
rem             11.2.0.4
rem

create table t1
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                  id,
        lpad(rownum,10,'0')     small_vc,
        rpad('x',100)           padding
from
        generator       v1,
        generator       v2
where
        rownum <= 50000
;

alter table t1 add constraint t1_pk primary key(id);

create table t2
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        trunc(sysdate) + rownum d1,
        rownum                  id,
        lpad(rownum,10,'0')     small_vc,
        rpad('x',50)            padding
from
        generator
where
        rownum <= 500
;

alter table t2 add constraint t2_pk primary key(id);

create table t3
as
select sysdate d1, t1.* from t1
where rownum = 0
;

--  Now gather stats if you need to (depending on version)

I’ve created three tables. Table t3 is an empty copy of table t1 with a date column added, and t2 is some sort of reference table that looks rather like table t1 but has a lot less data. Now I’m going to pretend that t1 and t2 are in a remote database while t3 is in the local database and copy data from t1 to t3, omitting any data that is referenced in t2. The SQL is simple:


define m_target=test@loopback

insert into t3(
        id, small_vc, padding
)
select
        t1.id, t1.small_vc, t1.padding
from
        t1@&m_target    t1
where
        t1.id not in (
                select t2.id from t2@&m_target
        )
;

----------------------------------------------------------------------
| Id  | Operation                | Name | Cost (%CPU)| Inst   |IN-OUT|
----------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |     0   (0)|        |      |
|   1 |  LOAD TABLE CONVENTIONAL | T3   |            |        |      |
|   2 |   REMOTE                 |      |            |   TEST | R->S |
----------------------------------------------------------------------

Remote SQL Information (identified by operation id):
----------------------------------------------------
   2 - EXPLAIN PLAN INTO PLAN_TABLE@! FOR SELECT
       "A1"."ID","A1"."SMALL_VC","A1"."PADDING" FROM "T1" "A1" WHERE
       "A1"."ID"<>ALL (SELECT "A2"."ID" FROM "T2" "A2") (accessing
       'TEST.LOCALDOMAIN@LOOPBACK' )

I’ve set up an SQL*Plus substitution variable to hold a database link name (and used a loopback qualifier so that I can pretend t1 and t2 are in a remote database). The execution plan (taken from an explain plan, but confirmed by running the query and calling dbms_xplan.display_cursor) shows that Oracle has executed the select part of the insert as a “fully remote” statement – which is nice.

Unfortunately I forgot to include a datestamp as I copied the data over. So let’s modify the query to do that:


insert into t3(
        d1,
        id, small_vc, padding
)
select
        sysdate,
        t1.id, t1.small_vc, t1.padding
from
        t1@&m_target    t1
where
        t1.id not in (
                select t2.id from t2@&m_target
        )
;

--------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |      1 |        |   123 (100)|      0 |00:00:27.42 |   10908 |
|   1 |  LOAD TABLE CONVENTIONAL |      |      1 |        |            |      0 |00:00:27.42 |   10908 |
|*  2 |   FILTER                 |      |      1 |        |            |  49500 |00:00:26.51 |       0 |
|   3 |    REMOTE                | T1   |      1 |  50000 |   113   (3)|  50000 |00:00:00.33 |       0 |
|   4 |    REMOTE                | T2   |  50000 |      1 |     0   (0)|    500 |00:00:25.29 |       0 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( NOT EXISTS (SELECT 0 FROM  "T2" WHERE "T2"."ID"=:B1))

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT /*+ OPAQUE_TRANSFORM */ "ID","SMALL_VC","PADDING" FROM "T1" "T1"
       (accessing 'TEST.LOCALDOMAIN@LOOPBACK' )

   4 - SELECT /*+ OPAQUE_TRANSFORM */ "ID" FROM "T2" "T2" WHERE "ID"=:1 (accessing
       'TEST.LOCALDOMAIN@LOOPBACK' )

Whoops, the plan just changed – it also took 27.4 seconds instead of the 1.1 seconds that it used to – that’s because of the 50,000 remote calls to execute a remote query for the subquery filter. The addition of sysdate (which is the local sysdate@!) to the select list has made the select statement distributed instead of fully remote, and the query for a CTAS or “insert/select” has to be driven from the local site if it’s a distributed query.

Okay, plan (b), don’t insert sysdate, add it to the table as a default:


alter table t3 modify d1 default sysdate;

This doesn’t help; even though the query doesn’t mention sysdate explicitly, it is still treated as a distributed query.

Okay, plan (c) – don’t insert sysdate, add a trigger to the table:

alter table t3 modify d1 default null;

create or replace trigger t3_bri
before insert on t3
for each row
begin
        :new.d1 := sysdate;
end;
/


And this works (in 11.2.0.4 and 12.1.0.2, at least, which are the versions I tested).

I could have made the trigger a little more sophisticated, of course, but the point of the post was simply to demonstrate a problem and a simple workaround.

There are probably several other commonly used features (various sys_context() calls, perhaps) that have the same effect.

 


Oracle Service Cloud Enables Brands to Transform the Customer Experience by Unlocking the Value of Internet of Things Data

Oracle Press Releases - Mon, 2016-11-14 07:00
Press Release
Oracle Service Cloud Enables Brands to Transform the Customer Experience by Unlocking the Value of Internet of Things Data New integration between Oracle Service Cloud and Oracle IoT Cloud helps brands deliver smart and proactive customer service experiences and accelerate IoT projects

Redwood Shores, Calif.—Nov 14, 2016

Oracle today announced an innovative new solution that enables brands to quickly and efficiently leverage insights from the Internet of Things (IoT) to power smart and connected customer service experiences. Powered by a packaged integration between Oracle Service Cloud and Oracle IoT Cloud, the new solution helps brands enhance the customer experience, increase operational efficiency and reduce costs by using IoT data to predict customer needs and proactively address customer service issues.

The explosive growth of the Internet of Things gives organizations the opportunity to deliver innovative new services faster and reduce risk by connecting, analyzing and integrating data-driven insights from connected “things” into business processes and applications. To help brands capitalize on this opportunity to drive next-generation customer service experiences, Oracle has introduced a new packaged integration between Oracle Service Cloud and Oracle IoT Cloud. The new IoT Accelerator is an open source integration that also includes implementation documentation to easily configure, extend and deploy.

“The Internet of Things is fundamentally changing the way consumers interact with brands and in the process, it is creating volumes of data that organizations can leverage to transform the customer experience,” said Meeten Bhavsar, senior vice president, Oracle Service Cloud. “By delivering a packaged integration between Oracle Service Cloud and Oracle IoT Cloud, we are able to accelerate the time to value, while lowering the complexity of IoT projects. For brands, this also means they can easily take advantage of IoT data and make it actionable across engagement channels to deliver exceptional customer service experiences.”

Oracle Service Cloud helps brands seamlessly integrate IoT device data into existing omni-channel operations. For example, Denon + Marantz, a leading provider of premium branded equipment, is leveraging customer insights from more than 200,000 connected devices globally to deliver personalized, positive and consistent customer experiences worldwide.

“Denon and Marantz products have always provided a high quality, immersive musical experience and with IoT data, we now have the opportunity to extend that first-class experience to our customer service team,” said Scott Strickland, CIO, Denon + Marantz. “Leveraging Oracle Service Cloud’s IoT integration capabilities, we have been able to improve our customers’ experience and increase our internal efficiency and knowledge base. In addition, we can leverage IoT information via the Oracle Marketing Cloud to target campaigns based on how a consumer actually uses the product and not how we think they use it.”

The Oracle Service Cloud is part of the Oracle Customer Experience (CX) Cloud Suite, one of the industry’s most complete CX solutions. Oracle CX Cloud Suite empowers organizations to improve experiences, enhance loyalty, differentiate their brands, and drive measurable results by creating consistent, connected, and personalized brand experiences across all channels and devices.

The Oracle IoT Cloud enables customers to connect to any device generating data, perform real-time and predictive analytics on device data and extend business processes within enterprise applications. With Oracle IoT Cloud, users can rapidly build IoT applications for preventive maintenance and asset tracking using pre-built integrations with Oracle Platform as a Service, Oracle cloud applications and third-party applications.

For more information, please read about the latest November 2016 Oracle Service Cloud release, including the Internet of Things (IoT) Accelerator, or check out the Modern Service Experience blog.

Contact Info
Simon Jones
PR for Oracle
+1.415.856.5155
sjones@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Simon Jones

  • +1.415.856.5155

University System of New Hampshire, The College of New Jersey, and University of Wyoming Turn to Oracle Cloud to Drive Institutional Excellence

Oracle Press Releases - Mon, 2016-11-14 07:00
Press Release
University System of New Hampshire, The College of New Jersey, and University of Wyoming Turn to Oracle Cloud to Drive Institutional Excellence The industry’s most complete and modern Cloud solutions for finance help higher education institutions drive student success through improved financial operations

Redwood Shores, Calif.—Nov 14, 2016

Oracle today announced that The College of New Jersey (TCNJ) and University of Wyoming have selected Oracle Enterprise Resource Planning (ERP) Cloud and University System of New Hampshire has selected Oracle Enterprise Performance Management (EPM) Cloud to advance educational opportunities by digitizing and modernizing their financial, planning, and budgeting systems. With predictive capabilities and the power to scale, Oracle Cloud solutions for finance will transform the way these institutions do business and support their future growth.

“Moving to Oracle ERP Cloud will enable TCNJ to react to rapid changes in higher education by proactively looking at our critical financial information in real time,” said Dr. Sharon Blanton, vice president and chief information officer at TCNJ. “The breadth and depth of Oracle’s Cloud solutions for finance offers us a cost-effective way to modernize our finance systems faster so we can direct more resources toward the advancement of our students, faculty, and staff.”

Oracle Cloud solutions for finance empower higher education institutions with better insights into financial, procurement, and grant information that help faculty and staff concentrate more resources on driving institutional innovation and excellence. The solutions enable higher education institutions to drive predictable performance, budget more accurately, and report with confidence.

“Oracle is working with higher education institutions to deliver innovative cloud solutions that offer more complete and impactful reporting to streamline operations and drive new opportunities in education,” said Hari Sankar, GVP of product management at Oracle. “We are proud to support academic success with Oracle Cloud solutions for finance, which frees up staff and faculty to focus on higher-value tasks in support of their students.”

Oracle delivers the industry’s broadest suite of enterprise-grade Cloud services, including Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Data as a Service (DaaS).

Contact Info
Joann Wardrip
Oracle
+1.650.607.1343
joann.wardrip@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Joann Wardrip

  • +1.650.607.1343

12 Tips for Finding an IT Job on Twitter

Complete IT Professional - Mon, 2016-11-14 05:00
Twitter is one of the most important and most popular social networking sites. It can also be used for finding a job in the IT industry. As of 2016, Twitter has 310 million monthly active users and an average of 500 million tweets per day. One great way you can use Twitter is to find a job […]
Categories: Development

Application Management Pack 13.1.1.1 Certified with Enterprise Manager 13.2

Steven Chan - Mon, 2016-11-14 02:05

In July 2016, we announced the general availability of Oracle Application Management Suite 13.1.1.1 for Oracle E-Business Suite as a plug-in for Enterprise Manager Cloud Control 13c. Application Management Suite 13.1.1.1 is now certified as a plug-in with Oracle Enterprise Manager Cloud Control 13c Release 2.

EM 13cR2 compatibility

We encourage all customers to upgrade to the latest release of Oracle Enterprise Manager 13.2 and Oracle Application Management Pack 13.1.1.1. If you are planning to manage Oracle E-Business Suite cloud environments, you must be on Oracle Enterprise Manager 13c and Oracle Application Management Pack Release 13.1.1.1.0 with latest patches listed in the Note below.

References

For release details and product documentation links, see:

Downloading this update

Oracle Application Management Suite for Oracle E-Business Suite can be downloaded and installed by:

Related Article


Categories: APPS Blogs

Links for 2016-11-13 [del.icio.us]

Categories: DBA Blogs

Validation procedure

Tom Kyte - Mon, 2016-11-14 01:46
How to write a validation procedure for three conditions which are dependent on each other . first condition will be for country zone in that there will be condition for branch for specific date range .
Categories: DBA Blogs

Chapter 12-user management

Tom Kyte - Mon, 2016-11-14 01:46
I am trying to create a role, but every time i type it in on sql plus, i get an error saying insufficient privileges. I was wondering what am i doing wrong. I have tried typing in CREATE ROLE PURCH_ROLE_1; and CREATE ROLE PURCH_ROLE_1 IDENTIFIED BY c...
Categories: DBA Blogs

Multi Table Insert with Differing Exists on Destination Tables

Tom Kyte - Mon, 2016-11-14 01:46
Is it possible to write a "Insert all into <table_1> ( col_1,col_2,col_3 ) values ( col_a,col_b,col_c) into <table_2> ( col_1,col_2,col_3 ) values ( col_a,col_b,col_c) into...
Categories: DBA Blogs

Data Loading with APEX - How to avoid upload of null values

Tom Kyte - Mon, 2016-11-14 01:46
Hi guys, I'm developing an application wich has a data loading wizard section. Users can upload several CSV files where some of the columns values are checked against a reference (I used Table Lookup for that). Problem is that I just realized tha...
Categories: DBA Blogs

Log File Sync Wait

Tom Kyte - Mon, 2016-11-14 01:46
Hi Tom, 1 ) As per my understanding, Log-File-Sync wait is the duration, after which I receive an acknowledgement that my commit is successful. In my database the only prominent issue I see is huge log-file-sync waits. 2 ) There are 32 cpu and ea...
Categories: DBA Blogs

50 Awesome Career Tips For IT Professionals

Complete IT Professional - Sun, 2016-11-13 21:00
The IT industry is a large field, and as a result, it can be hard to stand out from the rest of the people in the industry. There are some things you can do to improve your career, improve the way you do work, and make yourself look better for your company. Have a read […]
Categories: Development

12cR2 new index usage tracking

Yann Neuhaus - Sun, 2016-11-13 14:15

A common question is: how do I know which indexes are not used, so that I can drop them? If you have tried index monitoring, you have probably seen its limits, which make it difficult to use. It has been improved in 12.2, so let’s check whether it relieves the stress of a performance regression when we drop an index… or not.

I’ll check two views here. Here is what documentation says about them:

  • DBA_INDEX_USAGE displays cumulative statistics for each index.
  • V$INDEX_USAGE_INFO keeps track of index usage since the last flush. A flush occurs every 15 minutes.
    After each flush, ACTIVE_ELEM_COUNT is reset to 0 and LAST_FLUSH_TIME is updated to the current time.

The documentation about V$INDEX_USAGE_INFO shows a column INDEX_STATS_COLLECTION_TYPE, whose description explains that by default the statistics are collected based on sampling (only a few of the executions are considered when collecting the statistics). The type of collection that collects the statistics for each execution may have a performance overhead.

SAMPLED

I’ve found an undocumented parameter that controls this collection. It defaults to ‘SAMPLED’ and I’ll set it to ‘ALL’ to get a deterministic test case:
17:53:51 SQL> alter session set "_iut_stat_collection_type"=ALL;
Session altered.

So this is the first problem with how reliable index usage tracking is. If your boss runs a report once a month which needs an index, you may miss this execution, think the index is unused, and decide to drop it. And you will have a regression. Do you want to take that risk with sampled monitoring?

Execution using index

On the SCOTT schema I’m running a query that uses the index PK_DEPT

17:53:51 SQL> set autotrace on explain
Autotrace Enabled
Displays the execution plan only.
 
17:53:51 SQL> select * from emp join dept using(deptno) where ename like 'K%';
 
DEPTNO EMPNO ENAME JOB MGR HIREDATE SAL COMM DNAME LOC
10 7839 KING PRESIDENT 17-nov 00:00:00 5000 ACCOUNTING NEW YORK
 
Explain Plan
-----------------------------------------------------------
Plan hash value: 3625962092
 
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 117 | 3 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 117 | 3 (0)| 00:00:01 |
| 2 | NESTED LOOPS | | 1 | 117 | 3 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | EMP | 1 | 87 | 2 (0)| 00:00:01 |
|* 4 | INDEX UNIQUE SCAN | PK_DEPT | 1 | | 0 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 30 | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
3 - filter("EMP"."ENAME" LIKE 'K%')
4 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- this is an adaptive plan
 
17:53:52 SQL> set autotrace off
Autotrace Disabled

When I look at the index usage tracking views, I don’t see this usage; the reason is that the last flush happened before the execution:

17:53:52 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 2 3 30000 8 30790 13-NOV-16 05.48.12.218000000 PM 3
 
17:53:52 SQL> select * from dba_index_usage where owner='SCOTT';
 
no rows selected

The statistics are gathered in memory and are flushed to the dictionary every 15 minutes. For the moment, I’ve not found how to flush them manually, so I just wait 900 seconds:


17:53:52 SQL> host sleep 900
 
18:10:32 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 2 3 30000 9 45898 13-NOV-16 06.03.13.344000000 PM 3
 
18:10:32 SQL> select * from dba_index_usage where owner='SCOTT';
OBJECT_ID NAME OWNER TOTAL_ACCESS_COUNT TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED BUCKET_0_ACCESS_COUNT BUCKET_1_ACCESS_COUNT BUCKET_2_10_ACCESS_COUNT BUCKET_2_10_ROWS_RETURNED BUCKET_11_100_ACCESS_COUNT BUCKET_11_100_ROWS_RETURNED BUCKET_101_1000_ACCESS_COUNT BUCKET_101_1000_ROWS_RETURNED BUCKET_1000_PLUS_ACCESS_COUNT BUCKET_1000_PLUS_ROWS_RETURNED LAST_USED
73723 PK_DEPT SCOTT 1 1 1 0 1 0 0 0 0 0 0 0 0 13-nov 18:03:13

Here is my index usage recorded: one execution, one row returned from the index.
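Because the full select * output is very wide, a narrower query against the view is easier to read. A convenience sketch, selecting just the headline columns shown above:

```sql
select name, total_access_count, total_exec_count,
       total_rows_returned, last_used
from   dba_index_usage
where  owner = 'SCOTT'
order  by last_used;
```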

DBMS_STATS

One drawback of index monitoring was that statistics gathering set the monitoring flag to ‘YES’. Let’s see if it’s better in 12.2:


18:10:32 SQL> exec dbms_stats.gather_index_stats('SCOTT','PK_DEPT');
PL/SQL procedure successfully completed.

Again, waiting 15 minutes to get it flushed (and check LAST_FLUSH_TIME):


18:10:32 SQL> host sleep 900
 
18:27:12 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 1 3 30000 10 48136 13-NOV-16 06.18.13.748000000 PM 3
 
18:27:12 SQL> select * from dba_index_usage where owner='SCOTT';
OBJECT_ID NAME OWNER TOTAL_ACCESS_COUNT TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED BUCKET_0_ACCESS_COUNT BUCKET_1_ACCESS_COUNT BUCKET_2_10_ACCESS_COUNT BUCKET_2_10_ROWS_RETURNED BUCKET_11_100_ACCESS_COUNT BUCKET_11_100_ROWS_RETURNED BUCKET_101_1000_ACCESS_COUNT BUCKET_101_1000_ROWS_RETURNED BUCKET_1000_PLUS_ACCESS_COUNT BUCKET_1000_PLUS_ROWS_RETURNED LAST_USED
73723 PK_DEPT SCOTT 2 2 5 0 1 1 4 0 0 0 0 0 0 13-nov 18:18:13

It seems that the index usage tracking has been incremented here: total rows returned went up by 4, which is the number of rows in DEPT read by dbms_stats.
This makes unused-index detection very difficult, because we can expect statistics gathering to touch even the indexes that no application query uses.

Index on Foreign Key to avoid table locks

There’s another risk when we drop an index: it may not be used for access, but it may prevent a TM Share lock on a child table when deleting rows from the referenced table. This, again, is something that was not monitored. When the parent table has few rows, like some lookup tables, the index on the foreign key will probably not be used to access the child rows, or to check that there are no child rows when you delete a parent one – a full scan will be faster. But an index on the foreign key is still required to avoid locking the whole child table when we delete rows from the parent.
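As a side note, unindexed foreign keys can be listed with a query along these lines – a simplified sketch that matches each foreign key column position against existing index columns, and may report false positives for some multi-column combinations:

```sql
select c.table_name, cc.column_name, cc.position
from   user_constraints  c
join   user_cons_columns cc
       on cc.constraint_name = c.constraint_name
where  c.constraint_type = 'R'          -- referential (foreign key) constraints
and    not exists (
         select null
         from   user_ind_columns ic
         where  ic.table_name      = cc.table_name
         and    ic.column_name     = cc.column_name
         and    ic.column_position = cc.position
       )
order  by c.table_name, cc.position;
```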

Let’s create such an index.


18:27:12 SQL> create index FK_EMP on EMP(DEPTNO);
Index FK_EMP created.

I’ll delete DEPTNO=50, and I can verify that checking for child rows is done without needing the index:


SQL_ID 1v3zkdftt0vv7, child number 0
-------------------------------------
select * from emp where deptno=50
 
Plan hash value: 3956160932
 
------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |00:00:00.01 | 3 |
|* 1 | TABLE ACCESS FULL| EMP | 1 | 1 | 0 |00:00:00.01 | 3 |
------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter("DEPTNO"=50)

Let’s delete the parent row and see if the index is used or not.


19:19:47 SQL> delete from DEPT where deptno='50';
0 rows deleted.
19:19:47 SQL> commit;
Commit complete.

This does not lock the EMP table, thanks to the presence of the index FK_EMP. Without the index, a TM Share lock would have been acquired, which would prevent concurrent DML on the EMP table (at least).


19:19:48 SQL> host sleep 900
 
19:34:48 SQL> select * from v$index_usage_info;
INDEX_STATS_ENABLED INDEX_STATS_COLLECTION_TYPE ACTIVE_ELEM_COUNT ALLOC_ELEM_COUNT MAX_ELEM_COUNT FLUSH_COUNT TOTAL_FLUSH_DURATION LAST_FLUSH_TIME STATUS_MSG CON_ID
1 0 0 3 30000 12 48152 13-NOV-16 07.24.11.086000000 PM 3
 
19:34:48 SQL> select * from dba_index_usage where owner='SCOTT';
OBJECT_ID NAME OWNER TOTAL_ACCESS_COUNT TOTAL_EXEC_COUNT TOTAL_ROWS_RETURNED BUCKET_0_ACCESS_COUNT BUCKET_1_ACCESS_COUNT BUCKET_2_10_ACCESS_COUNT BUCKET_2_10_ROWS_RETURNED BUCKET_11_100_ACCESS_COUNT BUCKET_11_100_ROWS_RETURNED BUCKET_101_1000_ACCESS_COUNT BUCKET_101_1000_ROWS_RETURNED BUCKET_1000_PLUS_ACCESS_COUNT BUCKET_1000_PLUS_ROWS_RETURNED LAST_USED
73723 PK_DEPT SCOTT 2 2 5 0 1 1 4 0 0 0 0 0 0 13-nov 18:18:13

No additional index usage has been detected. Do you take the risk of dropping the index? Probably not. Even making the index invisible does not lower the risk. You may check DBA_TAB_MODIFICATIONS to know whether the parent table is subject to deletes, but what if some transactions update the referenced key? That is also a case of TM Share lock, and it happens more often than we think (for example when Hibernate updates all columns, even those that do not change).
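Checking DBA_TAB_MODIFICATIONS for DML on the parent table might look like this – just a sketch; remember the view is populated asynchronously, so flushing the monitoring info first makes it current:

```sql
-- Push the in-memory DML monitoring counters to the dictionary first
exec dbms_stats.flush_database_monitoring_info;

select table_name, inserts, updates, deletes, timestamp
from   dba_tab_modifications
where  table_owner = 'SCOTT'
and    table_name  = 'DEPT';
```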

So what?

The new index usage tracking in 12.2 is very nice for getting statistics on index usage, better than the simple ‘YES/NO’ flag we had before. But detecting which indexes are unused and can be safely dropped is still complex: it requires application knowledge and comprehensive non-regression testing.
There is nothing yet that can tell you that all would have been the same if an index were not there.

 

Cet article 12cR2 new index usage tracking est apparu en premier sur Blog dbi services.
