Feed aggregator

Update a large amount of rows in the table

Tom Kyte - Sun, 2017-07-02 15:06
Hi, I have 10 million records in my table but I need to update 5 million of them. I checked Tom's earlier solutions but didn't find complete code. The approach was: create table tblname as select (the updated values) from the table, and after that rename the old...
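For reference, the rebuild approach the question alludes to looks roughly like this (a minimal sketch; the table, column and predicate names are hypothetical, and indexes, constraints, grants and statistics must be re-created afterwards):

create table tblname_new as
select id,
       -- apply the "update" as a transformation in the SELECT
       case when status = 'OLD' then 'NEW' else status end as status
from   tblname;

drop table tblname;
rename tblname_new to tblname;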
Categories: DBA Blogs

12c MultiTenant Posts -- 5 : Flashback a PDB

Hemant K Chitale - Sun, 2017-07-02 10:31
12.2 allows FLASHBACK DATABASE for a Pluggable Database.

Note that Flashback is not enabled by default in a newly created database -- you must issue ALTER DATABASE FLASHBACK ON at the CDB level to enable Flashback for all of the Pluggable Databases.
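For reference, enabling it looks like this (a minimal sketch, assuming the CDB is already in ARCHIVELOG mode with a Fast Recovery Area configured):

SQL> connect / as sysdba
Connected.
SQL> alter database flashback on;

Database altered.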

SQL> connect hemant/hemant@NEWPDB
Connected.
SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------------------------------------------
OBJ_LIST
HKC_STORE_FILE
T

SQL> select count(*) from obj_list;

COUNT(*)
----------
145282

SQL> select count(*) from hkc_store_file;

COUNT(*)
----------
1

SQL> truncate table obj_list;

Table truncated.

SQL> drop table hkc_store_file;

Table dropped.

SQL>


Now, let me Flashback the PDB.

SQL> connect / as sysdba
Connected.
SQL> alter pluggable database newpdb close;

Pluggable database altered.

SQL> flashback pluggable database newpdb to timestamp sysdate-3/1440;

Flashback complete.

SQL> alter pluggable database newpdb open;
alter pluggable database newpdb open
*
ERROR at line 1:
ORA-01113: file 19 needs media recovery
ORA-01110: data file 19:
'/u03/oradata/NEWPDB/data_D-ORCL12C_I-768045447_TS-MYDATA_FNO-19_0cs7a8e4'


SQL> alter pluggable database newpdb open resetlogs;

Pluggable database altered.

SQL>


Let me test the data.

SQL> connect hemant/hemant@NEWPDB
Connected.
SQL> select count(*) from obj_list;

COUNT(*)
----------
145282

SQL> select count(*) from hkc_store_file;

COUNT(*)
----------
1

SQL>


Yes, the FLASHBACK PLUGGABLE DATABASE was successful.

What are the prerequisites?

1.  ALTER DATABASE FLASHBACK ON  at the CDB
2.  ARCHIVELOG at the CDB
3.  LOCAL UNDO enabled -- highly recommended else a subsequent Point In Time Recovery of the CDB may prevent OPENing the PDB
4.  OPEN RESETLOGS for the PDB
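The first two prerequisites can be verified quickly at the CDB level (a quick check; the query should report ARCHIVELOG and YES):

SQL> select log_mode, flashback_on from v$database;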


Categories: DBA Blogs

The APEX Community & ODTUG Kscope

Joel Kallman - Sun, 2017-07-02 10:28
Another successful Oracle Development Tools User Group (ODTUG) Kscope conference is in the books.  And like every year, the global APEX community convenes at Kscope. It is really an amazing collection of experts and first-time attendees from around the globe - from the USA, Germany, England, Australia, Croatia, Brazil, Estonia, India, Austria, Belgium, Netherlands, Canada, Poland, Finland, New Zealand and more!

Make no mistake - I appreciate and fully support many other conferences around the globe where APEX has a presence, including:

  • OUG Ireland
  • OUG Scotland
  • APEX World
  • APEX Connect
  • Great Lakes Oracle Conference
  • Oracle OpenWorld
  • Slovenian Oracle User Group
  • Croatian Oracle User Group
  • Oracle Developer Tour Latin America
  • DOAG Konferenz + Ausstellung
  • UKOUG Technology Conference and Exhibition
  • High Five Polish Oracle User Group

But if you're in the APEX Community, or you want to join the passionate and growing global APEX community, please plan on attending an ODTUG Kscope conference.  Just once.  You'll thank me later.

Registration for Kscope18 is already open!


Green means GO! APEX sessions at Kscope.

APEX Sunday Symposium

APEX Open Mic Night

#LetsWreckThisTogether APEX Talks

(bash): The most useless commands (7)

Dietrich Schroff - Sat, 2017-07-01 03:14

After talking about
  1. rev
  2. sl
  3. cowsay
  4. cmatrix
here is another pretty useless command: cal

Today, with every smartphone having its own calendar app and every e-mail program including an integrated time-planning function, this command is no longer needed. The manpage says:

NAME
     cal, ncal — displays a calendar and the date of Easter

And here we go:
$ cal 2000
                            2000
       Januar               Februar                 März        
So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa
                   1         1  2  3  4  5            1  2  3  4
 2  3  4  5  6  7  8   6  7  8  9 10 11 12   5  6  7  8  9 10 11
 9 10 11 12 13 14 15  13 14 15 16 17 18 19  12 13 14 15 16 17 18
16 17 18 19 20 21 22  20 21 22 23 24 25 26  19 20 21 22 23 24 25
23 24 25 26 27 28 29  27 28 29              26 27 28 29 30 31  
30 31                                                          
       April                  Mai                   Juni        
So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa
                   1      1  2  3  4  5  6               1  2  3
 2  3  4  5  6  7  8   7  8  9 10 11 12 13   4  5  6  7  8  9 10
 9 10 11 12 13 14 15  14 15 16 17 18 19 20  11 12 13 14 15 16 17
16 17 18 19 20 21 22  21 22 23 24 25 26 27  18 19 20 21 22 23 24
23 24 25 26 27 28 29  28 29 30 31           25 26 27 28 29 30  
30                                                              
        Juli                 August              September      
So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa
                   1         1  2  3  4  5                  1  2
 2  3  4  5  6  7  8   6  7  8  9 10 11 12   3  4  5  6  7  8  9
 9 10 11 12 13 14 15  13 14 15 16 17 18 19  10 11 12 13 14 15 16
16 17 18 19 20 21 22  20 21 22 23 24 25 26  17 18 19 20 21 22 23
23 24 25 26 27 28 29  27 28 29 30 31        24 25 26 27 28 29 30
30 31                                                          
      Oktober               November              Dezember      
So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa  So Mo Di Mi Do Fr Sa
 1  2  3  4  5  6  7            1  2  3  4                  1  2
 8  9 10 11 12 13 14   5  6  7  8  9 10 11   3  4  5  6  7  8  9
15 16 17 18 19 20 21  12 13 14 15 16 17 18  10 11 12 13 14 15 16
22 23 24 25 26 27 28  19 20 21 22 23 24 25  17 18 19 20 21 22 23
29 30 31              26 27 28 29 30        24 25 26 27 28 29 30
                                            31                    
 Or "the date of Easter":
$ ncal -e 2018
 1 April 2018

Generate XML output of complete table

Tom Kyte - Sat, 2017-07-01 02:26
Hi, My requirement is to generate XML output for different tables which can have different numbers of columns, and for this I am looking for a generic solution where I'll just have to supply the table name and the XML output of the table can be ge...
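One generic building block for this kind of requirement is DBMS_XMLGEN, which accepts an arbitrary query string (a sketch, not necessarily the answer given in the thread; DBMS_ASSERT guards the concatenated table name against injection):

select dbms_xmlgen.getxml(
         'select * from ' || dbms_assert.sql_object_name(:table_name)) as xml_doc
from   dual;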
Categories: DBA Blogs

Difference between varchar and varchar2

Tom Kyte - Sat, 2017-07-01 02:26
Dear Tom, happy new year. I want to know: is there any difference between VARCHAR and VARCHAR2? If not, then why does Oracle have these two identical datatypes? Thanks. Regards, Shankar Sen
Categories: DBA Blogs

Oracle JET Router State Control

Andrejus Baranovski - Sat, 2017-07-01 02:25
In my use case there is a requirement to navigate to a JET module without displaying it in the menu structure. Navigation itself is a simple task; what is more complex is making sure the current menu item is unselected after navigating to the invisible module (so that we are able to re-open it later).

There is an accounts module in the JET sample app (download it from GitHub - JETModularArchitecture):


The accounts module is included in the router, but not in the array of visible menu items:


On button click (in the dashboard module) we call a function where router navigation to the accounts module happens:


Let's see how this functionality works. I click the Accounts button to call the function and force router navigation:


The accounts module will be opened (through router navigation), but the currently highlighted menu item will stay as it was (Dashboard). This makes it impossible to navigate back to Dashboard (unless we navigate to another module first and then navigate to Dashboard):


The solution to this problem is to change the selection property of the ojNavigationList component in the HTML:


Instead of pointing to the router stateId directly, we should point to a computed function. This helps to control the currently selected menu item. But we still need to update the router state when a menu item is selected. For that reason I'm using the beforeSelect property, which points to a function where the router state is changed:


The selection property value is calculated by a function which returns NULL if accounts is the currently selected module. This allows the focus to be removed from the Dashboard menu item after navigation to accounts. Once the focus is removed, we can click back on Dashboard and navigate. For all other modules, the current router state ID is returned. The select handler calls the router API to navigate to the module:


Navigating to the accounts module from the dashboard. The Dashboard menu item no longer remains focused:


Navigation back to the dashboard works too:

How Do I Start Learning Oracle ADF - The 12c Edition

Shay Shmeltzer - Fri, 2017-06-30 17:42

The most popular blog entry on my blog has been the "How do I start Learning ADF" entry for years now. That entry however was last updated in 2012 (and written in 2010) - so I figured it is time to give it another update, point to more recent resources, fix broken links, and cover additional resources that appeared over the years.

So here is the ADF 12c version of that blog entry updated for 2017:

Get started with Oracle ADF in 6 steps

Step 1 - Learn Basic Java

Oracle ADF aims to reduce the amount of coding you need to do for a lot of the tasks you'll need for building an application, and if you follow some of the tutorials mentioned later you'll see how you can build advanced apps without coding. But, at the end of the day, you will write code when developing with ADF - and that code would be written in Java. You don't have to be a Java ninja to work in ADF, but you should be familiar with basic language concepts and constructs.

There are lots of resources out there that will teach you the language (by the way if you are on ADF 12.2.* you should learn the Java/JDK 8 syntax), one option is the Oracle Java Tutorials path. Searching online you'll be able to find many other resources for this task. Since Oracle ADF is based on Java EE architecture - you might want to also get a bit of understanding of that architecture - but don't worry about learning all of Java EE in details - ADF will make it much simpler for you.

While learning the language you should be practicing it with the development tool that you are going to use. If you are going to develop Oracle ADF applications, that tool will be Oracle JDeveloper. Get yourself familiar with the basic IDE features for coders by running through this IDE tutorial.

Step 2 - Get started with Oracle ADF

Now that you know the basics of the Java language (and maybe some Java EE concepts), it's time to start using the framework that will simplify your life. Start by reading the data sheet and technical paper to understand what ADF is all about.

Now get your hands dirty by completing the Overview tutorial for Oracle ADF - this will take you a couple of hours but by the end of it you'll have built a full blown application, and you will touch on most of the parts of the Oracle ADF architecture.

Two other tutorials you should do next will deepen your knowledge about the Oracle ADF Controller layer and taskflows, and the Oracle ADF Faces UI layer. If you have more time, run through the other tutorials on our site.

Step 3 - Getting Educated

Now that you have hands-on experience with Oracle ADF, this is a good point to go and get some deeper knowledge about how the framework works. You can leverage the collection of free online lessons we recorded in the ADF Essentials channel. You don't have to watch all the videos, but I would definitely recommend that at a minimum you watch the overview, ADF Business Components, ADF Controller (both parts) and ADF Faces videos. And then you must watch the ADF bindings internals seminar videos (2 parts) - these are critical for understanding the inner workings of the ADF "magic layer" that makes development so simple.

By the way, if you prefer to get knowledge through live or online instructor-led courses or by reading books - we have those too - see the list here.

Step 4 - RTFM

Ok, now you have a good grasp of the framework and how it works, so it might be a good time to read the manual for Oracle ADF - "Developing Fusion Web Applications with Oracle Application Development Framework". This is the complete guide and you should read it to get more insight into the framework, best practices, and general guidelines. Note that the ADF documentation library has additional books about ADF Faces, ADF Desktop Integration, administration guides and more.

Step 5 - Become an ADF Architect

Now that you know how to build ADF apps, it's time to learn how to architect more complex projects and work in a team environment. The resource to learn from is the ADF Architecture Square - where we discuss best practices, development guidelines, and most importantly how to architect a complete complex application. Here you can find docs and also a link to a set of videos on the ADF Architecture Square YouTube Channel. If you only have time to watch one video from that channel - go for the "Angels in the ADF Architecture". By the way, if you are looking for a platform for your team to collaborate on while building Oracle ADF applications - check out the Oracle Developer Cloud Service and the integration it provides with JDeveloper.

Step 6 - Join the Community

As you continue on your development road, there will be times when you'll want to know "How do I do X?" or "Why do I get this error?". The nice thing is that since many other developers are working with ADF, you can leverage their collective knowledge. Got a question - type it into Google and it is likely that you'll find blog entries and YouTube videos that explain how to solve your issue.

A great place to search for answers is the indexed collection of ADF and JDeveloper blog articles. Search by keywords or topics and you'll likely get great samples to help you achieve your task.

Still can't find the answer? Ask your question on our ADF community forum, just don't forget to follow the basic rules of asking questions on the forum.

Things keep evolving in the world of Oracle ADF, so to keep up to speed you should follow JDeveloper on Twitter for the latest news.

Over the years Oracle ADF has proven itself to be a great framework for enterprise applications, and each new release introduced further capabilities and simplifications - if you are just now joining the world of Oracle ADF you are in for a great ride. Have fun.

Categories: Development

PostgreSQL 10 Beta 1 : pg_hba_file_rules view

Yann Neuhaus - Fri, 2017-06-30 10:12

This small blog post shares something helpful about the pg_hba.conf file:

The pg_hba.conf file is the central configuration file to control client authentication.
It is located in the database cluster’s data directory:
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1] ll pg_hba.conf
lrwxrwxrwx. 1 postgres postgres 59 Jun 30 10:19 pg_hba.conf
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1]

When you add or modify an entry in this file, you have to reload the cluster to take the changes into account:

postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1] echo "host all efm 192.168.22.38/35 md5" >> /u02/pgdata/PG10BETA1/pg_hba.conf
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1]
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1] pg_ctl -D "/u02/pgdata/PG10BETA1/" reload
server signaled
postgres@ppas01:/u02/pgdata/PG10BETA1/ [PG10BETA1]

Oops… Have you seen my mistake? Probably not. The system didn’t see it either.
That was in Postgres 9.6 and older versions.

In Postgres 10 there is now a new view called “pg_hba_file_rules” which returns the current content of the pg_hba.conf file entries AND reports any errors:
postgres=# select version();
version
-------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 10beta1 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
(1 row)

postgres=# select line_number, error, type, database, user_name, address, netmask from pg_hba_file_rules;
line_number | error | type | database | user_name | address | netmask
-------------+-------------------------------------------------+-------+---------------+-----------+-----------+-----------------------------------------
80 | | local | {all} | {all} | |
82 | | host | {all} | {all} | 127.0.0.1 | 255.255.255.255
84 | | host | {all} | {all} | ::1 | ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
87 | | local | {replication} | {all} | |
88 | | host | {replication} | {all} | 127.0.0.1 | 255.255.255.255
89 | | host | {replication} | {all} | ::1 | ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
90 | invalid CIDR mask in address "192.168.22.38/35" | | | | |
(7 rows)

This allows you to get a quick look at what is wrong in pg_hba.conf and fix it as necessary.
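For example, once the offending line is corrected in the file (here the /35 mask would presumably become /32), the check and the reload can both be done from within psql (a sketch; the first query should return no rows once the file is clean):

postgres=# select line_number, error from pg_hba_file_rules where error is not null;
postgres=# select pg_reload_conf();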

 

The post PostgreSQL 10 Beta 1 : pg_hba_file_rules view appeared first on Blog dbi services.

OAC: Essbase – Loading Data

Rittman Mead Consulting - Fri, 2017-06-30 09:33

After my initial quick pass through Essbase under OAC here, this post looks at the data loading options available in more detail. I used the provided sample database ASOSamp.Basic, which first had to be created, as a working example.

Creating ASOSamp

Under the time-honoured on-prem install of Essbase, the sample applications were available as an install option – supplied data has to be loaded separately, but the applications / cubes themselves are installed as part of the process if the option is selected. This is not quite the same under OAC – some are provided in an easily installable format, but they are not immediately available out-of-the-box.

One of the main methods of cube creation in Essbase under OAC is via the Import of a specifically formatted Excel spreadsheet, and it is via the provision of downloadable pre-built ‘template’ spreadsheets that the sample applications are installed in this version.

After accessing the homepage of Essbase on OAC, download the provided cube creation template – this can be found under the ‘Templates’ button on the home page:

Note that in the case of the ASOSamp.Basic sample database, the data is not in the main template file – it is held in a separate file. This is different to other examples, such as Sample.Basic, where the data provided is held in a dedicated tab in the main spreadsheet. Download both Aggregate Storage Sample and Aggregate Storage Sample Data:

Return to the home page, and click Import. Choose the spreadsheet downloaded as Aggregate Storage Sample (ASO_Sample.xlsx) and click Deploy and Close.

This will action all of the detail in the spreadsheet – create the application, create the cube, add dimensions / attribute dimensions and members to the outline, etc:

Loading ASOSamp.Basic

Because the data file is separate from the spreadsheet, the next step is to upload this to OAC so that it is available for loading: back on the home page, select the newly-created ASOSamp.Basic (note: not ASOSamp.Sample as with on-prem), and click Files:

In the right-hand window, select the downloaded data file ASOSampleData.txt and click the Upload button:

This will upload the file:

Once the file upload is complete, return to the home page. With the newly-created ASOSamp.Basic still selected, click Jobs:

Choose Data Load as the Job Type, and highlight the required Data File:

Click Execute.

A new line will be added to the Job Monitor:

The current status of the job is shown – in this case, ‘in progress’ – and the screen can be refreshed.

Once complete, the Status field will show the completion state of the job, whilst the Job Details icon on the right-hand side provides more detail – in this case, confirming that 311,795 records were successfully loaded, and 0 rejected:

The success of the load is confirmed by a quick look in Smartview:

Note that a rules file was not selected as part of the job – this makes sense when we look at the data file…

...which is familiar-looking: just what we would expect from an EAS export (MAXL: export database), which can of course be loaded in a similar no-rules-file way on-prem.

Incidentally, this is different to the on-prem approach to ASOSamp.Sample where a ‘flat’, tab-delimited data file is provided for the sample data, along with a rules file that is required for the load:

...although the end-results are the same:

This ‘standard’ load works in overwrite mode – any new values in the file will be added, but any that already exist will be overwritten: running the load again and refreshing the Smartview report results in the same numbers, confirming this.

This can be verified further by running with a changed data file: taking a particular node of data for the Units measure…

One of the constituent data values can be changed in a copy of the data file – in this example, one record (it doesn’t matter which for this purpose) can be increased – in this case, ‘1’ has been increased to ‘103’:

The amended file needs to be saved and uploaded to OAC as outlined above, and the load process repeated, this time using the amended file. After a successful load, the aggregated value on the test Smartview report has increased by the same 102:

Loading flat files

So, how might we load the same sort of flat, tab-delimited file like the one supplied as the on-prem ASOSamp.Sample data file?

As above, files can be uploaded to OAC, so putting the dataload.txt data file from the on-prem release into OAC is straightforward. However, as you’d expect, attempting to run this as a load job without a rules file results in an error.

However, it is possible to run an OAC load with a rules file created in an on-prem version: firstly, upload the rules file (in this case, dataload.rul) in the same way as the data file. When setting up the load job, select the data file as normal, but under Scripts select the rules file required:

The job runs successfully, with the ‘Details’ overlay confirming the successful record count.

As with rules files generated by the Import facility, uploaded rules files can also be edited in text mode:

It would seem logical that changing the dataLoadOptions value at line 215 to a value other than OVERWRITE (eg ADD) might be a quick behavioural change for the load that would be easy to effect. However, making this change resulted in verification errors. Noting that the errors related to invalid dimension names, an attempt was made to verify the actual, unchanged rules file as uploaded…which also resulted in the same verification errors. So somewhat curiously, the uploaded on-prem rules file can be successfully used to load a corresponding data file, but (effectively) can’t be edited or amended.

Loading from Spreadsheet Template

The template spreadsheets used to build applications can also contain one or more data tabs. Unlike the OAC Jobs method or EssCS Dataload, the spreadsheet method gives you the option of a rules file AND the ability to Add (rather than overwrite) data:

Within OAC, this is actioned via the ‘Import’ function on the home page:

Note that we are retaining all data, and have the Load Data box checked. Checks confirm the values in the file are added to those already in the cube.

The data can also be uploaded via the Cube Designer in Excel under Cube Designer / Load Data:

Note that unlike running this method under OAC, the rules file (which was created by the initial import as the Data tab existed in the spreadsheet at that point) has to be selected manually.

Once complete, an offer is made to view the Job Status Viewer (which can also be accessed from Cube Designer / View Jobs):

With further detail for each job also being available:

Use facilities to upload files

Given the ability to upload and run both data and rules files, the next logical step would be to script this for automated running. OAC contains a downloadable utility, the Command Line Tool (aka CLI, EssCS), which provides a number of interface tools that can be run locally against an OAC instance of Essbase:

  • Login / Logout
  • Calc
  • Dataload
  • Dimbuild
  • Clear
  • Version
  • Listfiles
  • Download
  • Upload
  • LcmExport
  • LcmImport

Running locally, a successful EssCS login effectively starts a session that then remains open for other EssCS commands until the session is closed with a logout command.

The login syntax suggests the inclusion of the port number in the URL, but I had no success with this…although it worked without the port reference:

As above, the connection is made and is verified by the successful running of another command (eg version), but the logout command produced an error. Despite this, the logout appeared successful – no other EssCS commands worked until a login was re-issued.

With EssCS installed and working, the Listfiles and Upload facilities become available. The function of these tools is pretty obvious from the name. Listfiles should be issued with at least arguments for the application and cube name:

The file type (csc, rul, txt, msh, xls, xlsx, xlsm, xml, zip, csv) can be included as an additional argument…

…although the file type list is fixed – for example, you don’t seem to be able to use a wild card to pick up all spreadsheet files.

Whilst there is an Upload (and Download) facility, there does not seem to be the means to delete a remote file…which is a bit of an inconvenience, because using Upload to upload a file that already exists results in an error, and there is no overwrite option. The dataload.txt and dataload.rul files previously uploaded via the OAC front end were therefore manually deleted via OAC, and verified using Listfiles.

The files were then uploaded back to OAC using the Upload option of EssCS:

As you would expect, the files will then appear both in a Listfiles command and via OAC:

Note that the file list in OAC does not refresh with a browser page refresh or any ‘sort’ operation: use Refresh under Actions as above.

With the files now re-uploaded, the data can be loaded. EssCS also contains a DataLoad command, but unfortunately there appears to be no means to specify a rules file – meaning it would seem to be confined to overwrite, ‘export data’ style imports only:

A good point here is that a DataLoad EssCS command makes an entry in the Jobs table, so success / record counts can be confirmed:

Summary

The post details three methods of loading data to Essbase under OAC:

  • Via the formatted template spreadsheet (on import or from Cube Designer)
  • Via the Command Line Interface
  • Via the Jobs facility of OAC

There are some minor differences between them, which may affect which one you choose for any particular scenario.

Arguably, given the availability of MAXL, there is a further custom method available as the actual data load can be effected that way too. This will be explored further in the next post that will start to consider how these tools might be used for real scenarios.

Categories: BI & Warehousing

my oracle database load is particularly high

Tom Kyte - Fri, 2017-06-30 08:06
Dear Mr Connor, my Oracle database load is particularly high and I can't connect to it; finally the application developers rebooted their application server to solve it! I exported an AWR report for failure diagnosis for the period in question. <code> WORKLOAD REPOSIT...
Categories: DBA Blogs

Delete a record after it is inserted

Tom Kyte - Fri, 2017-06-30 08:06
1) I want to delete the inserted record automatically 10 minutes after insertion by using a trigger. Is it possible? If not, please suggest another way to do it. Thank you.
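A common pattern for this requirement is to have the trigger submit a database job that fires ten minutes later, rather than deleting from within the trigger itself (a sketch with a hypothetical table T and numeric primary key ID; DBMS_JOB is used because its submission is transactional, whereas DBMS_SCHEDULER.CREATE_JOB commits, which a trigger cannot do):

create or replace trigger t_ai
after insert on t
for each row
declare
  l_job binary_integer;
begin
  -- the job is queued as part of the inserting transaction
  dbms_job.submit(
    job       => l_job,
    what      => 'delete from t where id = ' || :new.id || ';',
    next_date => sysdate + 10/1440 );  -- run 10 minutes from now
end;
/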
Categories: DBA Blogs

Get DDL table without segment attributes but with tablespace

Tom Kyte - Fri, 2017-06-30 08:06
Hi all, I would like to get DDL for a table with just tablespace info (without the remaining "Segment attributes" info). For example, <code>CREATE TABLE "SQL_ZRRMMMSKTDVROYPXNSHFKJXCB"."MY_TABLE" ( "COLUMN1" VARCHAR2(1), "COLUMN2" NUMB...
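The usual approach here is DBMS_METADATA's transform parameters: turn segment attributes off as a whole, then switch the tablespace clause back on (a sketch):

begin
  dbms_metadata.set_transform_param(
    dbms_metadata.session_transform, 'SEGMENT_ATTRIBUTES', false);
  dbms_metadata.set_transform_param(
    dbms_metadata.session_transform, 'TABLESPACE', true);
end;
/
select dbms_metadata.get_ddl('TABLE', 'MY_TABLE') from dual;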
Categories: DBA Blogs

Bulk insert with unique sequence number Country wise

Tom Kyte - Fri, 2017-06-30 08:06
Hi Connor/Chris, I have stored procedure logic written which first inserts data into a table using BULK LIMIT; then for the sequence number we are using ROWNUM to get the uniform sequence: n_card_rowcount := 0; LOOP ...
Categories: DBA Blogs

Activities after Weekly Data Inserts on Partition Table

Tom Kyte - Fri, 2017-06-30 08:06
Hi, We have an Archival DB which holds historical application data. On a weekly basis, data from the Primary DB gets copied to the Archival DB partition tables. Do I need to perform any activities post data movement in the partition tables, like Gather Table/Index S...
Categories: DBA Blogs

how to disable auto login wallet

Tom Kyte - Fri, 2017-06-30 08:06
Hi, In a RAC database, I have a open keystore (wallet) with AUTOLOGIN. I would like to disable the AUTO LOGIN, but I am not sure of the correct procedure for that. select * from gv$encryption_wallet; WRL_TYPE WRL_PARAMETER ...
Categories: DBA Blogs

ORD_IMAGE vs ORDImage

Tom Kyte - Fri, 2017-06-30 08:06
Hi Oracle Team, After reading through the docs, I still cannot figure out how the two are different: 1. Multimedia ORD_IMAGE PL/SQL Package (ORD_IMAGE with an underscore in the name) 2. Multimedia ORDImage Object Type (ORDImage) Functionally...
Categories: DBA Blogs

PeopleSoft and Adaptive Query Optimization in Oracle 12c

David Kurtz - Fri, 2017-06-30 07:44
Adaptive Query Optimization is a significant feature in Oracle 12c. Oracle has made lots of information available on the subject (see https://blogs.oracle.com/optimizer/oracle-database-12c-is-here).

"Adaptive Query Optimization is a set of capabilities that enable the optimizer to make run-time adjustments to execution plans and discover additional information that can lead to better statistics… There are two distinct aspects in Adaptive Query Optimization: adaptive plans, which focuses on improving the execution of a query and adaptive statistics, which uses additional information to improve query execution plans."
- from Optimizer with Oracle Database 12c Release 2, White Paper, June 2017 (see http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-optimizer-with-oracledb-12c-1963236.pdf)

Adaptive Statistics include:
  • SQL plan directives: created and used to adapt SQL execution plans.
  • Statistics feedback: cardinality from table joins is used to improve SQL execution plans.
  • Performance feedback: improves the degree of parallelism chosen when PARALLEL_DEGREE_POLICY is set to ADAPTIVE
  • Adaptive dynamic sampling for parallel execution: dynamic statistics are gathered for certain parallel queries to improve cardinality estimates. The sample size is determined automatically.
(Parallelism is not used by default within PeopleSoft because all objects are marked NOPARALLEL by Application Designer).
This additional information should help the optimizer make better decisions, but it comes at the price of making the database do more work during SQL parse. Unfortunately, PeopleSoft makes extensive use of dynamically generated SQL, often with literal values, leading to large amounts of parse. Even a small additional overhead during SQL parse can result in a significant overhead for the entire system.

Adaptive Plans include:
  • Nested loop join/Hash join selection: the optimizer chooses between nested loops or hash joins at query runtime.
  • Adaptive parallel distribution method: the parallel distribution method is determined at runtime.
  • Star transformation bitmap pruning: certain bitmap indexes may be removed from the SQL execution plan at runtime if selectivity is significantly poorer than the estimate.
When joining tables, making the best choice between nested loops and hash joins is often critical for good performance. This feature could be of significant benefit to many applications, not just PeopleSoft. Let's take, as an example, this SQL generated by nVision on a 12c release 1 database. Due to the literal values it is very unlikely that the statement will be executed a second time.
SELECT SUM(A.POSTED_TOTAL_AMT) FROM sysadm.PS_LEDGER A WHERE A.LEDGER ='XXXXXXXXXX' AND A.FISCAL_YEAR=2015 AND (A.ACCOUNTING_PERIOD 
BETWEEN 1 AND 12 OR A.ACCOUNTING_PERIOD BETWEEN 998 AND 999) AND A.ACCOUNT IN ('40000001','40000002','40000003','40000004') AND
(A.DEPTID ='001A' OR A.DEPTID ='002A' OR A.DEPTID ='003A' OR A.DEPTID ='004A' OR A.DEPTID ='005B' OR A.DEPTID ='006A' OR A.DEPTID
='007A') AND A.CURRENCY_CD ='USD' AND A.STATISTICS_CODE=' '
This recursive SQL was generated by dynamic statistics processing during SQL parse. Such SQL is easily identifiable by the /* DS_SVC */ comment.
SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring
optimizer_features_enable(default) no_parallel result_cache(snapshot=3600) */
SUM(C1)
FROM (
SELECT /*+ qb_name("innerQuery") NO_INDEX_FFS( "A") */ 1 AS C1
FROM "SYSADM"."PS_LEDGER" SAMPLE BLOCK(82.1847, 8) SEED(12) "A"
WHERE ("A"."FISCAL_YEAR"=2015) AND ("A"."CURRENCY_CD"='USD') AND
("A"."ACCOUNT"='40000001' OR "A"."ACCOUNT"='40000002' OR
"A"."ACCOUNT"='40000003' OR "A"."ACCOUNT"='40000004') AND
("A"."ACCOUNTING_PERIOD"<=170 AND "A"."ACCOUNTING_PERIOD">=1 OR
"A"."ACCOUNTING_PERIOD">=901 AND "A"."ACCOUNTING_PERIOD"<=905) AND
("A"."DEPTID"='001A' OR "A"."DEPTID"='002A' OR "A"."DEPTID"='003A' OR
"A"."DEPTID"='004A' OR "A"."DEPTID"='005B' OR "A"."DEPTID"='006A' OR
"A"."DEPTID"='007A') AND ("A"."LEDGER"='XXXXXXXXXX') AND
("A"."STATISTICS_CODE"=' ')) innerQuery
It is easy to see that you wouldn't need too many additional queries like this to have a significant impact on system performance. In the early days of Oracle 12c many PeopleSoft customers reported just such an experience. It is no surprise, therefore, that Oracle recommends disabling adaptive optimization in 12c.
"optimizer_adaptive_features = FALSE - After upgrading to Oracle Database 12c, many PeopleSoft customers have noticed overall performance degradation which is related to Oracle 12c Optimizer Adaptive Feature (OAF). It is recommended that this value be disabled in all Oracle Databases running PeopleSoft Applications."
- from E-ORA Advice for the PeopleSoft Oracle DBA (Doc ID 1445965.1) (see https://support.oracle.com/epmos/faces/DocumentDisplay?id=1445965.1)

In Oracle 12c release 1, the single parameter OPTIMIZER_ADAPTIVE_FEATURES controls all the adaptive optimization features. In Oracle 12c release 2, however, it is replaced by two new parameters: OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS. Significantly, this enhancement is also available in 12c release 1, where two patches are available:
  • patch for bug# 22652097 introduces the two parameters OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS and removes the parameter OPTIMIZER_ADAPTIVE_FEATURES.
  • patch for bug# 21171382 disables the automatic creation of extended statistics unless the optimizer preference AUTO_STATS_EXTENSIONS is set to ON.
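With the patches applied (or on 12c release 2), the combination the article argues for - adaptive plans on, adaptive statistics off - can be set explicitly (a sketch; on 12.2 these are already the default values):

SQL> alter system set optimizer_adaptive_plans = true scope=both;
SQL> alter system set optimizer_adaptive_statistics = false scope=both;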
Here is an example of another query from nVision, this time on a 12.1 system where the two patches have been applied. The optimizer has switched from a nested loop to a hash join in an adaptive plan. In this particular case, this was a good decision and the hash join outperformed the nested loop.
SELECT A.AFFILIATE,SUM(A.POSTED_TOTAL_AMT) FROM PS_XX_XXXXXXX_VW A,
PSTREESELECT10 L1 WHERE A.LEDGER='XXXXXXX' AND A.FISCAL_YEAR=2016 AND
A.ACCOUNTING_PERIOD BETWEEN 0 AND 12 AND L1.SELECTOR_NUM=63382 AND
A.ACCOUNT=L1.RANGE_FROM_10 AND A.BUSINESS_UNIT='XXXXXX' AND
A.CURRENCY_CD='USD' AND A.STATISTICS_CODE=' ' GROUP BY A.AFFILIATE

Plan hash value: 4041461164

--------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 86 (100)| | | |
| 1 | HASH GROUP BY | | 1 | 67 | 86 (3)| 00:00:01 | | |
| 2 | HASH JOIN | | 9 | 603 | 85 (2)| 00:00:01 | | |
|- 3 | NESTED LOOPS | | 9 | 603 | 85 (2)| 00:00:01 | | |
|- 4 | STATISTICS COLLECTOR | | | | | | | |
| 5 | PARTITION RANGE ITERATOR | | 9 | 459 | 84 (2)| 00:00:01 | 16 | 28 |
| 6 | INLIST ITERATOR | | | | | | | |
| 7 | PARTITION LIST ITERATOR | | 9 | 459 | 84 (2)| 00:00:01 |KEY(I) |KEY(I) |
| 8 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PS_LEDGER | 9 | 459 | 84 (2)| 00:00:01 | KEY | KEY |
| 9 | INDEX RANGE SCAN | PSCLEDGER | 117 | | 52 (2)| 00:00:01 | KEY | KEY |
|- 10 | PARTITION RANGE SINGLE | | 1 | 16 | 1 (0)| 00:00:01 | 63382 | 63382 |
|- 11 | INDEX RANGE SCAN | PSAPSTREESELECT10 | 1 | 16 | 1 (0)| 00:00:01 | 63382 | 63382 |
| 12 | PARTITION RANGE SINGLE | | 23 | 368 | 1 (0)| 00:00:01 | 63382 | 63382 |
| 13 | INDEX RANGE SCAN | PSAPSTREESELECT10 | 23 | 368 | 1 (0)| 00:00:01 | 63382 | 63382 |
--------------------------------------------------------------------------------------------------------------------------------------

Note
-----
- this is an adaptive plan (rows marked '-' are inactive)
Conclusion

Adaptive Optimization is a great feature that does deliver real benefits but, like many Oracle features, it works best on an application that has been developed in the way Oracle would recommend applications be developed. Since the default value for OPTIMIZER_ADAPTIVE_STATISTICS is FALSE, PeopleSoft is clearly not alone in not being such an application.
Given PeopleSoft's propensity to generate non-sharable SQL, adaptive statistics are not a viable option, and should not be enabled, and in 12c release 1 that has meant sacrificing the benefits of adaptive plans. However, patch 22652097 means that we don't have to wait for release 2, we can now choose which feature to use.
Every system is different; even every PeopleSoft system is different. However, my experience so far with PeopleSoft is that I have not seen the optimizer's switching of join method cause a problem. It certainly isn't a panacea either. When the join method has changed it hasn't always made a big difference, but it has been net beneficial. I would certainly recommend applying these patches.

Video: How Do You Get from a Tandy 1000 to the Moon?

OTN TechBlog - Fri, 2017-06-30 07:00

During my first interview with Sean Phillips, recorded at Oracle Code in Washington DC, he went into technical detail about the role JavaFX played in his work for NASA on an application that plots space flight trajectories. Cool gig, right? So when I had another chance to talk to Sean, this time at last week's  Oracle Code event in Atlanta, I wanted to focus on his career, tracking his own trajectory from his first experiments with a Tandy 1000 computer to his current work helping spacecraft to get to the farthest reaches of our solar system. That's what you'll get in this video. After a brief recap of his Oracle Code Atlanta keynote session, Sean describes when he first caught the software development bug, and brings us up to the present and the NASA projects in which he is involved. Watch the video!

Additional Resources

after 27+ years in Oracle land I am forced to patch sqlplus

Amis Blog - Fri, 2017-06-30 06:25

When starting to use sqlplus 12.2 I noticed that my SQL prompt was not changing to what login.sql told it to be. This did not happen in sqlplus 12.1 or lower versions.
Maybe this is a bug, maybe a new feature, I thought. The behaviour of sqlplus in 12.2 has indeed changed according to the documentation: sqlplus 12.2 no longer looks in the local directory (i.e. where you start sqlplus) for a login.sql file to run, but only looks for .sql files in directories indicated by environment variables (SQLPATH for Windows and ORACLE_PATH on Linux). However, even when setting these environment variables to the proper values, sqlplus still did not run my login.sql automatically. Ok, then I'll create an SR with Oracle Support. They confirmed that this odd behaviour indeed is a bug and that a patch is available for sqlplus: PATCH:25804573. So now, finally, I have a reason to patch sqlplus!
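For reference, a minimal login.sql that changes the prompt, using SQL*Plus predefined substitution variables, looks like this (a sketch; with the patch applied, the file must live in a directory listed in ORACLE_PATH on Linux, or SQLPATH on Windows):

-- login.sql
set sqlprompt "_user'@'_connect_identifier > "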

The post after 27+ years in Oracle land I am forced to patch sqlplus appeared first on AMIS Oracle and Java Blog.
