Feed aggregator

I Am Speaking at OTN Yathra 2017

Oracle in Action - Wed, 2017-06-14 00:42


The Oracle ACE Directors and Oracle volunteers in the region are organizing their third evangelist event, ‘OTNYathra 2017’, over a period of three weeks during June 2017. The yathra will travel across the country, covering six major IT-focused cities (Chennai, Bangalore, Hyderabad, Pune, Mumbai and Gurgaon). It aims to bring the Oracle community together, raise awareness, improve the level of knowledge, and increase networking opportunities in the region.

I will be speaking at this year’s OTNYathra about “RAC Server Pools: What’s New in Oracle 12c?”.

Oracle Clusterware 11g Release 2 (11.2) laid the foundation of policy-based cluster management by introducing server pools as a means of specifying resource placement and administering server allocation and access. Originally, however, server pools were restricted to a set of basic attributes characterizing servers as belonging to a given pool. There was no way to distinguish between types of servers; all servers were considered equal in relation to their processors, physical memory, and other characteristics. This could lead to sub-optimal performance for some applications if the servers assigned to the pools hosting those applications did not meet the applications’ requirements.

Oracle Grid Infrastructure 12c enhances the use of server pools by introducing server attributes (e.g. memory, CPU count) that can be associated with each server. Server pools can be configured so that their members belong to a category of servers sharing a particular set of attributes. Moreover, administrators can maintain a library of policies and switch between them as required, rather than manually reallocating servers to server pools based on workload.

My presentation discusses in detail the new features of RAC server pools in 12c.

My session will be held on Sunday, 25th June 2017, from 1.30 pm to 2.30 pm in Room 1, Fidelity International, Unitech Cyber Park, Tower D, Sector 39, Durga Colony, Gurugram, Haryana 122001, India.
Hope to meet you there!




Copyright © ORACLE IN ACTION [I Am Speaking at OTN Yathra 2017], All Rights Reserved. 2017.

The post I Am Speaking at OTN Yathra 2017 appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Swagger UI with Spring Boot 1.5.x

Pas Apicella - Tue, 2017-06-13 23:00
I recently created this demo / blog entry on using HTTPie with Spring Boot REST Repositories, as shown below.

http://theblasfrompas.blogspot.com.au/2017/05/using-httpie-with-spring-boot-rest.html

I decided to take that same example and add Swagger UI to the RESTful endpoints. The full source code is here.

https://github.com/papicella/httpie-springboot

In short, what you need are the following Maven dependencies, which will add everything required. I found it works more cleanly if you use the same version for both of these dependencies.
  
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.6.1</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.6.1</version>
</dependency>

Finally, a class describing the configuration and enabling Swagger is required, as follows:
  
package pivotal.io.boot.httpie.demo;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.service.ApiInfo;
import springfox.documentation.service.Contact;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

import static springfox.documentation.builders.PathSelectors.regex;

@Configuration
@EnableSwagger2
public class SwaggerConfig
{
    @Bean
    public Docket swaggerSpringMvcPlugin() {
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                .apis(RequestHandlerSelectors.basePackage("pivotal.io.boot.httpie.demo"))
                .paths(regex("/api/employee/emps.*"))
                .build()
                .apiInfo(metaData());
    }

    private ApiInfo metaData() {
        ApiInfo apiInfo = new ApiInfo(
                "Spring Boot Employee REST API",
                "Spring Boot Employee REST API",
                "1.0",
                "Terms of service",
                new Contact("Pas Apicella", "https://www.blogger.com/profile/09389663166398991762", "papicella@pivotal.io"),
                "Apache License Version 2.0",
                "https://www.apache.org/licenses/LICENSE-2.0");
        return apiInfo;
    }
}

The GitHub repo also includes a manifest.yml file to make it easy to deploy to Pivotal Cloud Foundry. The example uses a static hostname, but this can easily be changed to use a random route or a different hostname.

applications:
- name: pas-swagger-demo
  memory: 1G
  instances: 1
  hostname: pas-swagger-demo
  path: ./target/httpie-springboot-0.0.1-SNAPSHOT.jar
  env:
    JAVA_OPTS: -Djava.security.egd=file:///dev/urandom

Then it's a simple "cf push":

pasapicella@pas-macbook:~/piv-projects/httpie-springboot$ cf push
Using manifest file /Users/pasapicella/piv-projects/httpie-springboot/manifest.yml

Creating app pas-swagger-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

Creating route pas-swagger-demo.cfapps.io...
OK

..

Showing health and status for app pas-swagger-demo in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 1G x 1 instances
urls: pas-swagger-demo.cfapps.io
last uploaded: Wed Jun 14 03:32:31 UTC 2017
stack: cflinuxfs2
buildpack: container-certificate-trust-store=2.0.0_RELEASE java-buildpack=v3.15-offline-https://github.com/cloudfoundry/java-buildpack.git#a3a9e61 java-main java-opts open-jdk-like-jre=1.8.0_121 open-jdk-like-memory-calculator=2.0.2_RELEASE spring-auto-reconfigur...

     state     since                    cpu      memory         disk           details
#0   running   2017-06-14 01:33:40 PM   291.5%   510.9M of 1G   154.9M of 1G


The application is running on Pivotal Web Services as follows:

http://pas-swagger-demo.cfapps.io/swagger-ui.html



Categories: Fusion Middleware

Complete refresh single partition of MV over DB link?

Tom Kyte - Tue, 2017-06-13 13:06
We use materialized views to replicate tables between instances. We'd like to increase performance of loads on the master side by TRUNCATEing/INSERTing data to specific partitions as needed. On the remote side how can we limit the complete refresh to...
Categories: DBA Blogs

Calling a web service (WSDL) through Oracle Procedure/Function

Tom Kyte - Tue, 2017-06-13 13:06
Hello Experts, We have a requirement to call a web service through an Oracle stored procedure/function. Can we get any example related to that? We have to call web service : http://10.98.171.13:7003/soa-infra/services/default/ManageQuoteTaskProc...
Categories: DBA Blogs

where to download personal edition for windows 12.1.0.2

Tom Kyte - Tue, 2017-06-13 13:06
I have downloaded the latest available 12c 12.1.0.2 Enterprise Edition for Windows. I get to the screen to select which Edition of Oracle to install and the installer GUI does not allow any changes ( it remains stuck on Enterprise Edition ). How does one ...
Categories: DBA Blogs

Removing deadlock

Tom Kyte - Tue, 2017-06-13 13:06
Hi Tom, How can I clear a deadlock without killing a session? Thanks in advance. john
Categories: DBA Blogs

Whose Deck is it Anyways?

Scott Spendolini - Tue, 2017-06-13 07:11
This year at KScope, we're going to try something new.  And fun.  And funny to watch - we hope.  It's called "Whose Deck is it Anyways?", and will occur on Sunday at 8:30pm.  It's only 30 minutes, but it will likely be the best 30 minutes of the conference.  Or at least the most embarrassing.

Here's what we're going to do: there will be four 5-minute presentations - one on each of the following: BI, EPM, Database & APEX.

Sound interesting?  Probably not.  We get that, too.  So here's what we did.

Each 5-minute session will be presented by a non-expert.  For example, it's highly likely that I'll be presenting on BI or EPM.

To make it even better, each slide deck will be prepared by the corresponding expert.  So again, it's highly likely that my slide deck's creator will be either Stewart Bryson or Edward Roske.  If nothing else, this session will be a crash course in how not to make cohesive, easy to read slides.

Interested now?  Ya, I thought so.  Here are some more details on the KScope site.

Implementing OAuth in Oracle JET Applications

OAuth is a security protocol that allows applications (clients) to obtain authorization for secure API access. Your application registers itself with an API and gets some information describing the...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle 12.2 – How to run a consistent full database export with dbms_datapump and parallel degree of 8

Yann Neuhaus - Tue, 2017-06-13 03:58

Nowadays, since the cloud is becoming more and more important, the PL/SQL APIs are becoming more and more important too. Fortunately, Oracle has quite a lot of them. For example, how do you run a Data Pump export if you have no ssh connectivity to the server? You could use the old exp tool, which is still available even with Oracle 12.2, or you can use DBMS_DATAPUMP. The Data Pump API is quite well documented in the following books:

Database Utilities
https://docs.oracle.com/database/122/SUTIL/using-ORACLE_DATAPUMP-api.htm#SUTIL600

Database PL/SQL Packages and Types Reference
https://docs.oracle.com/database/122/ARPLS/DBMS_DATAPUMP.htm#ARPLS66050

But you might find some useful stuff in $ORACLE_HOME/rdbms/admin/dbmsdp.sql as well.

In this little how-to, I would like to show how to create a consistent full database export (with a parallel degree of 8) using the Data Pump API.

There are only a few steps involved to get the job done.

1. Create a directory and grant the necessary privileges to user HR
2. Grant the DATAPUMP_EXP_FULL_DATABASE role to user HR
3. Execute the Data Pump job with DBMS_DATAPUMP
4. Monitor the Data Pump job
5. Optionally, do some cleanup


1.) Let’s start with the directory.

SQL> CREATE OR REPLACE DIRECTORY DATAPUMP_DIR AS '/u01/app/oracle/admin/DBIT122/dpdump';

Directory created.

SQL> GRANT READ, WRITE ON DIRECTORY DATAPUMP_DIR TO HR;

Grant succeeded.

2.) Now we grant the DATAPUMP_EXP_FULL_DATABASE role to the HR user

SQL> GRANT DATAPUMP_EXP_FULL_DATABASE TO HR;

Grant succeeded.

Please be aware that the DATAPUMP_EXP_FULL_DATABASE role affects only export operations. It allows the user HR to:

  • Perform operations outside the scope of their own schema
  • Monitor jobs that were initiated by another user
  • Export objects (for example, tablespace definitions) that unprivileged users cannot reference

Without this role, you might run into the following error when doing a full export:

ERROR at line 1:
ORA-31631: privileges are required
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4932
ORA-06512: at "SYS.DBMS_DATAPUMP", line 6844
ORA-06512: at line 6
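
As a quick sanity check, you can verify that the grant is in place by querying DBA_ROLE_PRIVS (a simple query, assuming you have access to that view):

select grantee, granted_role
  from dba_role_privs
 where granted_role = 'DATAPUMP_EXP_FULL_DATABASE'
   and grantee = 'HR';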


3.) Now it’s time to run the Data Pump job. Be aware that, for a consistent export, you need to specify either FLASHBACK_TIME or FLASHBACK_SCN. In my case, I use FLASHBACK_TIME and set it to the current SYSTIMESTAMP.

Ok. Let’s give it a try.

SQL> connect hr/hr
Connected.

SQL> @exp_datapump.sql
SQL> declare
  2        l_datapump_handle    NUMBER;  -- Data Pump job handle
  3        l_datapump_dir       VARCHAR2(20) := 'DATAPUMP_DIR';  -- Data Pump Directory
  4        l_status             varchar2(200); -- Data Pump Status
  5    BEGIN
  6        l_datapump_handle := dbms_datapump.open(operation => 'EXPORT',  -- operation = EXPORT, IMPORT, SQL_FILE
  7                                                job_mode =>'FULL',  -- job_mode = FULL, SCHEMA, TABLE, TABLESPACE, TRANSPORTABLE
  8                                                job_name => 'DBIT122 EXPORT JOB RUN 003',  -- job_name = NULL (default) or: job name (max 30 chars)
  9                                                version => '12'); -- version = COMPATIBLE (default), LATEST (dbversion), a value (11.0.0 or 12)
 10
 11            dbms_datapump.add_file(handle    => l_datapump_handle,
 12                               filename  => 'exp_DBIT122_%U.dmp',
 13                               directory => l_datapump_dir);
 14
 15        dbms_datapump.add_file(handle    => l_datapump_handle,
 16                               filename  => 'exp_DBIT122.log' ,
 17                               directory => l_datapump_dir ,
 18                               filetype  => DBMS_DATAPUMP.ku$_file_type_log_file);
 19
 20        dbms_datapump.set_parameter(l_datapump_handle,'CLIENT_COMMAND','Full Consistent Data Pump Export of DBIT122 with PARALLEL 8');
 21
 22            dbms_datapump.set_parameter(l_datapump_handle,'FLASHBACK_TIME','SYSTIMESTAMP');
 23
 24        dbms_datapump.set_parallel(l_datapump_handle,8);
 25
 26        dbms_datapump.start_job(handle => l_datapump_handle);
 27
 28        dbms_datapump.wait_for_job(handle => l_datapump_handle,
 29                                   job_state => l_status );
 30
 31        dbms_output.put_line( l_status );
 32
 33        end;
 34  /

PL/SQL procedure successfully completed.

SQL>

4.) In another window, you might want to monitor the status of your export job.

SQL> r
  1  select owner_name, job_name, rtrim(operation) "OPERATION",
  2         rtrim(job_mode) "JOB_MODE", state, attached_sessions
  3    from dba_datapump_jobs
  4   where job_name not like 'BIN$%'
  5*  order by 1,2

OWNER_NAME JOB_NAME                         OPERATION    JOB_MODE     STATE        ATTACHED_SESSIONS
---------- -------------------------------- ------------ ------------ ------------ -----------------
HR         DBIT122 EXPORT JOB RUN 003       EXPORT       FULL         EXECUTING                    1
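
You can also attach to the running job from a second session and query its state through the API itself. Below is a minimal sketch using DBMS_DATAPUMP.ATTACH and GET_STATUS; the job name and owner must match the ones used above, and the mask/timeout values are just one reasonable choice:

set serveroutput on
declare
  l_handle    number;
  l_job_state varchar2(30);
  l_status    ku$_Status;
begin
  -- attach to the job started earlier (name and owner must match)
  l_handle := dbms_datapump.attach(job_name  => 'DBIT122 EXPORT JOB RUN 003',
                                   job_owner => 'HR');
  -- request only the job status part of the status information
  dbms_datapump.get_status(handle    => l_handle,
                           mask      => dbms_datapump.ku$_status_job_status,
                           timeout   => 0,
                           job_state => l_job_state,
                           status    => l_status);
  dbms_output.put_line('Job state: ' || l_job_state);
  -- detach from the job without stopping it
  dbms_datapump.detach(handle => l_handle);
end;
/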

Cool. If the job finishes successfully, you will see eight dump files (because we specified exp_DBIT122_%U.dmp as the file name) and one log file.

oracle@dbidg01:/u01/app/oracle/admin/DBIT122/dpdump/ [DBIT122] ls -l
total 4752
-rw-r----- 1 oracle oinstall  630784 Jun 13 10:29 exp_DBIT122_01.dmp
-rw-r----- 1 oracle oinstall 3321856 Jun 13 10:29 exp_DBIT122_02.dmp
-rw-r----- 1 oracle oinstall  180224 Jun 13 10:29 exp_DBIT122_03.dmp
-rw-r----- 1 oracle oinstall   57344 Jun 13 10:29 exp_DBIT122_04.dmp
-rw-r----- 1 oracle oinstall  430080 Jun 13 10:28 exp_DBIT122_05.dmp
-rw-r----- 1 oracle oinstall   20480 Jun 13 10:29 exp_DBIT122_06.dmp
-rw-r----- 1 oracle oinstall   28672 Jun 13 10:29 exp_DBIT122_07.dmp
-rw-r----- 1 oracle oinstall  176128 Jun 13 10:28 exp_DBIT122_08.dmp
-rw-r--r-- 1 oracle oinstall   11966 Jun 13 10:29 exp_DBIT122.log

5.) Finally, you might want to do some cleanup, in case you don’t need the dump files and the log file anymore. Alternatively, you can start your export job with the REUSE_DUMPFILES=YES option, which overwrites the destination dump files if they already exist. In case you want to do the cleanup manually, you can use the UTL_FILE package.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122.log' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_01.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_02.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_03.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_04.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_05.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_06.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_07.dmp' );

PL/SQL procedure successfully completed.

SQL> exec utl_file.fremove( 'DATAPUMP_DIR', 'exp_DBIT122_08.dmp' );

PL/SQL procedure successfully completed.

SQL>
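
By the way, the nine calls above can be collapsed into a short PL/SQL loop. A minimal sketch, assuming the same directory and the file names produced by the %U substitution:

begin
  -- remove the eight dump files (exp_DBIT122_01.dmp .. exp_DBIT122_08.dmp)
  for i in 1 .. 8 loop
    utl_file.fremove('DATAPUMP_DIR', 'exp_DBIT122_0' || i || '.dmp');
  end loop;
  -- and the log file
  utl_file.fremove('DATAPUMP_DIR', 'exp_DBIT122.log');
end;
/
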
Conclusion

The PL/SQL APIs are becoming more and more important, especially in cloud environments. From my point of view, it makes a lot of sense to take a closer look at one or the other of them. DBMS_DATAPUMP in particular is an important one for moving data around.

The post Oracle 12.2 – How to run a consistent full database export with dbms_datapump and parallel degree of 8 appeared first on Blog dbi services.

Rows to Columns Concatenating Values

Tom Kyte - Mon, 2017-06-12 18:46
Hi, Seq_num ITEM_CODE ATTRIBUTE_NAME ATTRIBUTE_VALUE LAST_MODIFIED ==================================================================================================================== 25521604 "HTC-UA-BUNDLE" "Alkaline_Battery_Count" 4...
Categories: DBA Blogs

Your actual view on IT Architecture for where to put the logics: inside or outside the database

Tom Kyte - Mon, 2017-06-12 18:46
Hi Tom and team This is the first question ever I am posting here, until now I was using Asktom as a silent reader only - and I want to say thank You a thousand times to keep this site up for so long, helping many people to find explanations and s...
Categories: DBA Blogs

Reverse Key

Tom Kyte - Mon, 2017-06-12 18:46
Tom, I have many primary keys that are sequences. Thinking of using reverse keys. How do I change the index created by the primary key? Rebuild as a reverse key? Thanks. Keith Cutler More info: Reading through the books I received wi...
Categories: DBA Blogs

data retention from 68 table and show it

Tom Kyte - Mon, 2017-06-12 18:46
Hi Tom, I have one requirement, please let me know the solution. Requirement: ============ I have 68 table names that I get from user_tab_columns using some conditions, and I achieve it easily. But the challenge is I want to show all the co...
Categories: DBA Blogs

MView fast refresh - snapshot too old

Tom Kyte - Mon, 2017-06-12 18:46
Hi, When executing a fast refresh over a DbLink on an MView, a "snapshot too old" error is returned. The refresh is executed at database DB_MV: dbms_mview.refresh( list => 'MV_TABLE_NAME'); The master table is located on DB_MASTER. At execut...
Categories: DBA Blogs

Table creation script on a Select statement take longer than Select Statement time

Tom Kyte - Mon, 2017-06-12 18:46
Hi, I have a Select statement completing in 20 minutes, but when I try to create a table with the Select statement (more than 10M records) it takes more than 2 hours to complete. I have to use the table in a PL/SQL program for data manipulation. I ...
Categories: DBA Blogs

Oracle Releases PeopleSoft Cloud Manager Image 04

PeopleSoft Technology Blog - Mon, 2017-06-12 15:35

PeopleSoft Cloud Manager Image 04 is now available in Oracle Cloud Marketplace.  This new version brings in a number of new enhancements and fixes that customers were asking for.  We’re listening to feedback and working hard to incorporate your ideas into the product as quickly as possible. Here are some of the new features to try out in image 04.  Many thanks to Nagendra Krishnappa for putting this together.

Multiple Middle-Tier Provisioning

The prior images of Cloud Manager had a limitation on the number of middle-tiers that could be provisioned.  With this new image, you can create multiple middle-tiers and connect them all to the same database. In the topology definition, you can now add multiple middle-tier nodes for a topology.  When you add/use this topology in a template, you can configure attributes for each middle-tier individually.

Elastic Search deployment and Configuration

In image 03, Cloud Manager deployed the Elastic Search server, but users had to manually configure and integrate it with the environment.  In image 04, Cloud Manager can now automatically integrate the Elastic Search server - one less administrative task for you. 

Support Lift of Campus Solutions On-Premise Environments

If you are using Campus Solutions, we now have the capability to migrate (“Lift and Shift”) your on-premise environment to Oracle Cloud.  Cloud Manager’s “Lift” utility will pack up your on-premise environment and “Shift” it to Oracle Cloud automatically. 

Configurable Persistent Boot Volume for Deployed Instances 

Boot volumes of PeopleSoft VMs were not configurable, thereby limiting the size of boot volumes to a standard image size.  You can now modify this setting to specify the larger sizes that you may need.

Security Updates

A couple of security fixes have been addressed in this release, related to masking of sensitive data and hardening rules to control network access between provisioned VMs.

Bug Fixes

Improvements and fixes in areas of DBCS provisioning and environment management were addressed in this image. 

New Oracle Linux image for Cloud Manager

Along with the latest Cloud Manager image, we now have a reference Oracle Linux image to get you started quickly.  You can find it in Marketplace: PeopleSoft Linux Image for Cloud Manager.  Configure this image as the Linux image in Cloud Manager Settings and it will be used for all instance provisioning.  

You may also create your own custom image starting with Cloud Manager image 04.  To do that, first deploy an instance of the reference Oracle Linux image.  Next, connect to it and install/upgrade required packages, modify kernel parameters and customize configurations to suit your needs.  When you are done with customizations, create a new image from it and use it with Cloud Manager.  You can find more details about creating a custom Linux image for PeopleSoft Cloud Manager in the Creating a Custom Linux Image for PeopleSoft Cloud Manager tutorial.

Go ahead and enjoy the latest image, and continue to let us know what additional capabilities you would like to have in Cloud Manager!


12cR2 PDB refresh as a poor-man standby?

Yann Neuhaus - Mon, 2017-06-12 15:04
Disclaimer

My goal here is only to show that the Refreshable PDB feature works by shipping and applying redo, and can therefore keep a copy of the datafiles synchronized. I do not recommend using it for disaster recovery in any production environment yet. Even though I’m using only supported features, they were not designed for this usage, and they are quite new and not yet stable. Disaster recovery must use safe and proven technologies, and this is why I’ll stick with Dbvisit standby for disaster recovery in Standard Edition.

This post explains what I had in mind with the following tweet:

Primary PRDPDB

On my primary server, I have a CDB1 container database in Standard Edition with one pluggable database, PRDPDB:

21:36:45 SQL> connect sys/oracle@//192.168.78.105/CDB1 as sysdba
Connected.
 
21:36:46 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PRDPDB READ WRITE NO

I need a user there to be able to remote clone from it:

21:36:46 SQL> grant create session, sysoper, dba to C##DBA identified by oracle container=all;
Grant succeeded.

Standby server

On my standby server, I have a CDB1 container database in Standard Edition, where I create a database link to the production CDB using the user created above to connect to it:

21:36:46 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:36:46 SQL> create database link CDB1A connect to C##DBA identified by oracle using '//192.168.78.105/CDB1A';
Database link created.

My standby server runs Grid Infrastructure and has its database created on /acfs, which is an ACFS filesystem. We will see the reason for this later, when we need to create a PDB snapshot copy. Any filesystem where you can use PDB snapshot copy would be fine.

Standby SBYPDB

The creation of the ‘standby’ pluggable database is done with a simple remote clone command, which in 12cR2 can be run with the source PRDPDB still open read write:


21:36:46 SQL> create pluggable database SBYPDB from PRDPDB@CDB1A
21:36:46 2 file_name_convert=('/u01/oradata/CDB1A/PRDPDB','/acfs/oradata/CDB1/SBYPDB')
21:36:46 3 refresh mode every 1 minutes;
 
Pluggable database created.

The REFRESH MODE clause is a 12cR2 feature whose primary goal is to maintain and refresh a master clone for further provisioning of thin clones. This clone is refreshed every minute, which means I expect at most a one-minute gap between PRDPDB and SBYPDB data, plus the additional time to apply that minute of redo, of course.
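
To check that the refresh is configured as intended, you can query DBA_PDBS on the standby CDB (a quick sanity check; I am assuming here the refresh-related columns exposed by 12.2):

select pdb_name, refresh_mode, refresh_interval, last_refresh_scn
  from dba_pdbs
 where pdb_name = 'SBYPDB';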

Activity on the source

I will simulate a crash of the primary server and a failover to the standby while transactions are running. I’ll run this activity on the SCOTT.EMP table:

21:39:03 SQL> connect scott/tiger@//192.168.78.105/PRDPDB;
Connected.
 
21:39:04 SQL> select * from emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 17-nov-1981 00:00:00 5000

I’m now updating the hire date and incrementing the salary every second.

21:39:09 SQL> exec for i in 1..150 loop update emp set hiredate=sysdate, sal=sal+1; dbms_lock.sleep(1); commit; end loop
 
PL/SQL procedure successfully completed.

Here is the latest data on the primary server:

21:41:39 SQL> select * from emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:38 5150

Crash the primary

The primary server is not supposed to be accessible in case of Disaster Recovery, so I’m crashing it:

21:41:39 SQL> disconnect
Disconnected from Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
21:41:39 SQL> connect / as sysdba
Connected.
21:41:39 SQL> shutdown abort
ORACLE instance shut down.

Activate the standby

The datafiles are up to date, with at most a one-minute gap, and all I want is to open the PDB and have the application reconnect to it. However, a refreshable clone can be opened only read-only. This makes sense: you cannot apply more redo from the source once it is opened read-write. So my first idea was to stop the refresh mode:

21:41:45 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:41:45 SQL> alter session set container=SBYPDB;
Session altered.
 
21:41:45 SQL> alter pluggable database SBYPDB refresh mode none;
alter pluggable database SBYPDB refresh mode none
*
ERROR at line 1:
ORA-17627: ORA-12514: TNS:listener does not currently know of service requested
in connect descriptor
ORA-17629: Cannot connect to the remote database server

It seems that Oracle tries to do one last refresh when you stop the refresh mode, but this fails here because the source is not accessible. I think that it should be possible to open read-write without applying more redo. However, these refreshable clones were not designed for failover.

I hope that one day we will just be able to end refresh mode without connecting to the source, accepting the loss of the latest transactions.

Open Read Only

Without access to the source, I stay in refresh mode and can only open the PDB read-only:
21:41:45 SQL> alter pluggable database SBYPDB open read only;
Pluggable database altered.
 
21:41:47 SQL> alter session set container=SBYPDB;
Session altered.
21:41:47 SQL> alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
Session altered.
 
21:41:47 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:01 5113

My data is there, with my less than one minute gap, but that’s not sufficient for me. I want to run my application on it.

Snapshot Clone

My first idea for getting the PDB read write on the standby server is to clone it. Of course, the failover time should not depend on the size of the database, so my idea is to do a snapshot copy, and this is why I’ve set up my standby CDB on ACFS. Here I’m cloning SBYPDB to the same name as the primary, PRDPDB:

21:41:47 SQL> alter session set container=CDB$ROOT;
Session altered.
 
21:41:47 SQL> create pluggable database PRDPDB from SBYPDB file_name_convert=('SBYPDB','PRDPDB') snapshot copy;
Pluggable database created.
 
21:42:03 SQL> alter pluggable database PRDPDB open;
Pluggable database altered.

I now have my new PRDPDB opened read write with the latest refreshed data:

21:42:26 SQL> alter session set container=PRDPDB;
Session altered.
 
21:42:26 SQL> alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
Session altered.
 
21:42:26 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:01 5113

I’m running on a snapshot here. I can stay like that, or plan to move out of the snapshot in the future. There is no online datafile move in Standard Edition, but there is online pluggable database relocation. Anyway, running the database on a snapshot is sufficient to run production after a disaster recovery, and I can remove SBYPDB so that there is no need to copy the ACFS extents on future writes.
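
For reference, such a relocation is a one-liner in 12.2. This is only a sketch: it assumes a database link (here called CDB1B) pointing at the CDB that currently hosts the PDB, and it moves the PDB to the CDB where the statement is run:

create pluggable database PRDPDB from PRDPDB@CDB1B relocate;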

Keep the snapshot

At that point, you might tell me that I cannot snapshot copy a PDB within the same CDB here, because I’m in Standard Edition. And that’s right: you can create only one user PDB there, and you are supposed to get a ‘feature not enabled’ error. But I was able to do it here in my lab, with a small trick to inverse the CON_ID sequence:

Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PRDPDB READ WRITE NO
4 SBYPDB MOUNTED

Remote snapshot clone should be possible as well. But there’s another licensing issue here: using ACFS snapshots for the database is not allowed in Standard Edition. This means that this solution probably requires a snapshot technology other than the one I’m using here in my lab.

If you don’t fear violating the single-tenant rules, you may prefer to keep SBYPDB for a while. Imagine that you are able to restart the crashed server for a few minutes; then you can do one last refresh of SBYPDB to have a look at the transactions that were lost in the one-minute window.

I re-start the crashed CDB:

21:42:26 SQL> connect / as sysdba
Connected to an idle instance.
21:42:27 SQL> startup
ORACLE instance started.
 
Total System Global Area 859832320 bytes
Fixed Size 8798552 bytes
Variable Size 356519592 bytes
Database Buffers 486539264 bytes
Redo Buffers 7974912 bytes
Database mounted.
Database opened.

and now, on my standby server, I can finally stop the refresh mode:

21:42:51 SQL> connect sys/oracle@//192.168.78.106:1522/CDB1 as sysdba
Connected.
21:42:52 SQL> alter pluggable database SBYPDB close;
Pluggable database altered.
 
21:42:52 SQL> alter session set container=SBYPDB;
Session altered.
 
21:42:52 SQL> alter pluggable database SBYPDB refresh mode none;
Pluggable database altered.

Be careful not to have jobs or services starting here because your production is now on the snapshot clone PRDPDB running on the same server. Let’s open it:

21:43:02 SQL> alter pluggable database SBYPDB open restricted;
Pluggable database altered.
 
21:43:24 SQL> select * from scott.emp where ename='KING';
 
EMPNO ENAME JOB MGR HIREDATE SAL
---------- ---------- --------- ---------- -------------------- ----------
7839 KING PRESIDENT 11-jun-2017 21:41:38 5150

And here we are with the data at the moment of the crash. Then, the application owner can manually check what was missed between the last refresh (which made its way to PRDPDB) and the crash (visible in SBYPDB).

Unplug/Plug

I was not very satisfied with the snapshot clone because of the limitations in Standard Edition, which is exactly where this solution may be interesting. I have the datafiles, but cannot open SBYPDB read write. I tried to unplug the PDB, but cannot because of the refresh mode:

SQL> alter pluggable database SBYPDB unplug into '/tmp/tmp.xml';
 
Error starting at line : 1 in command -
alter pluggable database SBYPDB unplug into '/tmp/tmp.xml'
Error report -
ORA-01113: file 23 needs media recovery
ORA-01110: data file 23: '/acfs/oradata/CDB1/SBYPDB/undotbs01.dbf'
01113. 00000 - "file %s needs media recovery"
*Cause: An attempt was made to online or open a database with a file that
is in need of media recovery.
*Action: First apply media recovery to the file.

I know that I don’t need more recovery. So let’s unplug it in another way:

SQL> alter pluggable database SBYPDB open read only;
Pluggable database SBYPDB altered.
 
SQL> exec dbms_pdb.describe('/tmp/tmp.xml','SBYPDB');
PL/SQL procedure successfully completed.

Then drop it but keep the datafiles:

SQL> alter pluggable database SBYPDB close;
Pluggable database SBYPDB altered.
 
SQL> drop pluggable database SBYPDB;
Pluggable database SBYPDB dropped.

And plug it back:

SQL> create pluggable database SBYPDB using '/tmp/tmp.xml';
Pluggable database SBYPDB created.
 
SQL> alter pluggable database SBYPDB open;
Pluggable database SBYPDB altered.

Here it is. This takes a bit longer than the snapshot solution, but it is still possible to activate the ‘standby’ PDB without copying datafiles.

So what?

All the new 12cR2 multitenant features are available in all editions, which is very good. Here, with ALTER PLUGGABLE DATABASE … REFRESH, we have log shipping and apply at PDB level, for free, in Standard Edition. I’ve tested two ways to open this standby PDB in case of disaster recovery. I’m using only supported features here, but be careful: those features were not designed for this goal. Normal operations on a refreshable clone require that the remote CDB be accessible, but there are workarounds, because you can describe/drop/plug or snapshot clone from a PDB that you can open read only.


The post 12cR2 PDB refresh as a poor-man standby? appeared first on Blog dbi services.

Unify: See Your Data From Every Perspective

Rittman Mead Consulting - Mon, 2017-06-12 09:09

Ad hoc access to accurate and secured data has always been the goal of business intelligence platforms. Yet, most fall short of balancing the needs of business users with the concerns of IT.

Rittman Mead has worked with hundreds of organizations representing all points on the spectrum between agility and governance. Today we're excited to announce our new product, Unify, which allows Tableau users to directly connect to OBIEE, providing the best of both worlds.

Governed Data Discovery

Business users get Tableau's intuitive data discovery features and the agility they need to easily blend their departmental data without waiting on IT to incorporate it into a warehouse. IT gets peace of mind, knowing their mission-critical data is protected by OBIEE's semantic layer and row-level security.

Unify Essentials

Unify runs as a desktop app, making it easy for departmental Tableau users to connect to a central OBIEE server. Unify also has a server option that runs alongside OBIEE, for organizations with a large Tableau user base or those using Tableau Server.

Desktop installation and configuration is simple. Once installed, users can query OBIEE from within Tableau with just a few clicks. Have a look at these short videos demonstrating setup and use of Unify.

Available Today

Download your free 7-day trial of Unify Desktop here.

No Tableau Desktop license? No problem. Unify is compatible with Tableau Public.

Categories: BI & Warehousing

Log Buffer #513: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2017-06-12 08:57

This Log Buffer Edition includes blog posts from Oracle, SQL Server and MySQL.

Oracle:

In 12.2 you can: Convert a simple table to partitioned with multi-column automatic list partitions, partially indexed, with read only segments, filtering out unwanted data, online in one operation.

12c How to UN Expire Password for CDB Users

rman auxiliary (for duplicate)

How long will Oracle APEX remain an included feature of the Oracle Database?

How to easily delete files in the Oracle Cloud using CloudBerry Explorer

SQL Server:

Understanding OPENJSON and FOR JSON in Azure and SQL 2016

How to setup Machine Learning Services in SQL Server 2017 CTP2

The OUTPUT Clause for the MERGE Statements

Simple script to backup all SQL Server databases

Working with Windows Containers and Docker: Into your Stride

MySQL:

MySQL on Docker: Running Galera Cluster on Kubernetes

Summary of recent performance tests for MySQL 5.6, 5.7 and 8

A Quick Look at Parallel Rsync and How it Can Save a System Hours

Docker, MySQL and Experience in Containers

HopsFS running on top of MySQL Cluster 7.5 wins IEEE Scale Challenge 2017

Categories: DBA Blogs

dbms_sqldiag

Jonathan Lewis - Mon, 2017-06-12 06:48

If you’re familiar with SQL Profiles and SQL Baselines you may also know about SQL Patches – a feature that allows you to construct hints that you can attach to SQL statements at run-time without changing the code. Oracle 12c Release 2 introduces a couple of important changes to this feature:

  • It’s now official – the feature has been moved from package dbms_sqldiag_internal to package dbms_sqldiag.
  • The limitation of 500 characters has been removed from the hint text – it’s now a CLOB column.

H/T to Nigel Bayliss for including this detail in his presentation to the UKOUG last week, and pointing out that it’s also available for Standard Edition.

There are a couple of other little changes, as you can see from the two extracts of the 12.2 declarations of dbms_sqldiag and dbms_sqldiag_internal below:


dbms_sqldiag
------------
FUNCTION CREATE_SQL_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_TEXT                       CLOB                    IN
 HINT_TEXT                      CLOB                    IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

FUNCTION CREATE_SQL_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_ID                         VARCHAR2                IN
 HINT_TEXT                      CLOB                    IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

dbms_sqldiag_internal
---------------------
FUNCTION I_CREATE_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_ID                         VARCHAR2                IN
 HINT_TEXT                      CLOB                    IN
 CREATOR                        VARCHAR2                IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

FUNCTION I_CREATE_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_TEXT                       CLOB                    IN
 HINT_TEXT                      CLOB                    IN
 CREATOR                        VARCHAR2                IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

  • The function name changes from i_create_patch to create_patch when exposed in dbms_sqldiag.
  • There are two versions of the function – one that requires you to supply the exact SQL text, and a new version that allows you to supply an SQL ID.
  • The internal function also adds a creator to the existing parameter list – and it doesn’t have a default, so if you’ve got some code to use the internal version already it’s not going to work on 12.2 until you change it.
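
Based on the declarations above, a minimal call to the newly exposed function might look like the following sketch (the sql_id and hint are placeholders; name, description, category and validate all have defaults):

set serveroutput on

declare
        patch_name      varchar2(128);
begin
        patch_name := dbms_sqldiag.create_sql_patch(
                sql_id    => 'gwp663cqh5qbf',           -- hypothetical sql_id
                hint_text => 'full(t1)',
                name      => 'demo_patch'
        );
        dbms_output.put_line('Created patch: ' || patch_name);
end;
/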

I was prompted to write this note by a tweet asking me if there’s any SQL available to see the contents of an SQL Profile in 11g and 12c (I published some simple code several years ago for 10g, but Oracle changed the base tables in 11g). The answer is yes, and it’s probably somewhere on the Internet, but here’s some code I wrote a couple of years ago to cover the more recent versions of Oracle:

rem
rem     sql_profile_baseline_11g.sql
rem     J.P.Lewis
rem     July 2010
rem

set pagesize 60
set linesize 132
set trimspool on

column hint format a70 wrap word
column signature format 999,999,999,999,999,999,999

break on signature skip 1 on opt_type skip 1 on plan_id skip 1

spool sql_profile_baseline_11g

select
        prf.signature,
        decode(
                obj_type,
                1,'Profile',
                2,'Baseline',
                3,'Patch',
                'Other'
        )       opt_type,
        prf.plan_id,
        extractvalue(value(tab),'.')    hint
from
        (
        select
                /*+ no_eliminate_oby */
                *
        from
                sqlobj$data
        where
                comp_data is not null
        order by
                signature, obj_type, plan_id
        )       prf,
        table(
                xmlsequence(
                        extract(xmltype(prf.comp_data),'/outline_data/hint')
                )
        )       tab
;


This will report the hints associated with SQL Baselines, SQL Profiles, and SQL Patches – all three store their data in the same base table. As a minor variation, I also have a query that will report a named profile/baseline/patch, but this requires a join to the sqlobj$ table. As you can see from the substitution variable near the end of the text, the script will prompt you for an object name.


set pagesize 60
set linesize 180
set trimspool on

column  plan_name format a32
column  signature format 999,999,999,999,999,999,999
column  category  format a10
column  hint format a70 wrap word

break on plan_name skip 1 on signature skip 1 on opt_type skip 1 on category skip 1 on plan_id skip 1

spool sql_profile_baseline_11g

select
        prf.plan_name,
        prf.signature,
        decode(
                obj_type,
                1,'Profile',
                2,'Baseline',
                3,'Patch',
                  'Other'
        )       opt_type,
        prf.category,
        prf.plan_id,
        extractvalue(value(hnt),'.') hint
from
        (
        select
                /*+ no_eliminate_oby */
                so.name         plan_name,
                so.signature,
                so.category,
                so.obj_type,
                so.plan_id,
                sod.comp_data
                from
                        sqlobj$         so,
                        sqlobj$data     sod
                where
                        so.name = '&m_plan_name'
                and     sod.signature = so.signature
                and     sod.category = so.category
                and     sod.obj_type = so.obj_type
                and     sod.plan_id = so.plan_id
                order by
                        signature, obj_type, plan_id
        )       prf,
        table (
                select
                        xmlsequence(
                                extract(xmltype(prf.comp_data),'/outline_data/hint')
                        )
                from
                        dual
        )       hnt
;


Lagniappe:

One of the enhancements that appeared in 12c for SQL Baselines was that the plan the baseline was supposed to produce was stored in the database so that Oracle could check that the baseline would still reproduce the expected plan before applying it. These plans (also generated for Profiles and Patches) are stored in the table sqlobj$plan, and the dbms_xplan package has been enhanced with three new functions to report them:


FUNCTION DISPLAY_SQL_PATCH_PLAN RETURNS DBMS_XPLAN_TYPE_TABLE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NAME                           VARCHAR2                IN
 FORMAT                         VARCHAR2                IN     DEFAULT

FUNCTION DISPLAY_SQL_PLAN_BASELINE RETURNS DBMS_XPLAN_TYPE_TABLE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_HANDLE                     VARCHAR2                IN     DEFAULT
 PLAN_NAME                      VARCHAR2                IN     DEFAULT
 FORMAT                         VARCHAR2                IN     DEFAULT

FUNCTION DISPLAY_SQL_PROFILE_PLAN RETURNS DBMS_XPLAN_TYPE_TABLE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NAME                           VARCHAR2                IN
 FORMAT                         VARCHAR2                IN     DEFAULT

e.g.
SQL> select * from table(dbms_xplan.display_sql_profile_plan('SYS_SQLPROF_015c9bd3bceb0000'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL text: select        t1.id, t2.id from       t1, t2 where    t1.id between 10000 and
          20000 and     t2.n1 = t1.n1 and       t2.n1 = t2.v2
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL Profile Name: SYS_SQLPROF_015c9bd3bceb0000
Status:           ENABLED
Plan rows:        From dictionary
--------------------------------------------------------------------------------

Plan hash value: 3683239666

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          | 10501 |   287K|   248   (4)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR         |          |       |       |     0   (0)|          |        |      |            |
|   2 |   PX SEND QC (RANDOM)   | :TQ10002 | 10501 |   287K|   248   (4)| 00:00:01 |  Q1,02 | P->S | QC (RAND)  |
|*  3 |    HASH JOIN BUFFERED   |          | 10501 |   287K|   248   (4)| 00:00:01 |  Q1,02 | PCWP |            |
|   4 |     PX RECEIVE          |          | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,02 | PCWP |            |
|   5 |      PX SEND HASH       | :TQ10000 | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       PX BLOCK ITERATOR |          | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,00 | PCWC |            |
|*  7 |        TABLE ACCESS FULL| T1       | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,00 | PCWP |            |
|   8 |     PX RECEIVE          |          |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,02 | PCWP |            |
|   9 |      PX SEND HASH       | :TQ10001 |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,01 | P->P | HASH       |
|  10 |       PX BLOCK ITERATOR |          |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,01 | PCWC |            |
|* 11 |        TABLE ACCESS FULL| T2       |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,01 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T2"."N1"="T1"."N1")
   7 - filter("T1"."ID"<=20000 AND "T1"."ID">=10000)
  11 - filter("T2"."N1"=TO_NUMBER("T2"."V2"))

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2

Disclaimer – I’ve checked only the SQL_PROFILE function call on 12.2, after creating a profile to check that my old 11g report still worked in 12c.


