As a follow-up to a previous video, Pythian CTO Alex Gorbachev gives an overview of the advanced security features within Hadoop.
We have a core group of about 8 of us, mostly DBA types but also a couple of developers. Every two weeks we get together to discuss and review a chapter from the latest edition of Tom Kyte's Expert Oracle Database Architecture.
The idea is that each person is responsible for reading the upcoming chapter on their own (which, for a book as good as Tom Kyte's, may really mean reading it multiple times). Someone volunteers or is drafted to be the "point person" who guides the discussion of that chapter, and those assignments rotate. At a chapter every two weeks it does take a long time to get through a book, so that's a downside.
But think about it ... should you do something similar?
After we get through Tom's book the next one up will be Cary Millsap's "Optimizing Oracle Performance" ... ( I think ) ...
I was looking today for the list of these Diagnostic and Tuning pack components, to be sure not to use licensed views in databases that don’t have those packs. It was not an easy search, so I decided to share it with you; if you get lucky you’ll find this page and it will [...]
The Pythian team has received many questions about big data in the cloud, and specifically about Hadoop. Pythian CTO, Alex Gorbachev shares some of his recommendations in our latest video:
As per Oracle 12c documentation, a PDB can
- have its own local temporary tablespace, or
- if it does not have its own temporary tablespace, it can share the temporary tablespace with the CDB.
To demonstrate a PDB sharing the temporary tablespace of the CDB, the first step is to create a PDB without a temporary tablespace. By default, when a PDB is created from the seed, it is created with its own local temporary tablespace TEMP, which cannot be dropped as it is the default temporary tablespace of the PDB.
So, the only options I could think of were to
Method – I
- Create a non-CDB without a temporary tablespace and then plug it into a CDB
Method – II
- Create a non-CDB / PDB with a temporary tablespace,
- Generate its xml file using dbms_PDB
- Edit the xml file to remove the entry for temporary tablespace
- Plug in the non-CDB into a CDB
I will demonstrate the second method. For this purpose, I created a non-CDB orcl2 using DBCA so that it contained default temporary tablespace temp.
– Open the non-CDB in read only mode
ORCL2> shu immediate
ORCL2> startup mount
ORCL2> alter database open read only;
– Generate an XML metadata file for the non-CDB
ORCL2>exec dbms_pdb.describe ('/u01/app/oracle/oradata/orcl2/orcl2.xml');
– Edit the xml file to remove the entry for temp tablespace
[oracle@em12 ~]$ vi /u01/app/oracle/oradata/orcl2/orcl2.xml
– Use the xml file to plug in the non-CDB into CDB2 as PDB_ORCL2
CDB$ROOT@CDB2>create pluggable database pdb_orcl2 using '/u01/app/oracle/oradata/orcl2/orcl2.xml' nocopy;
– Connect to PDB_ORCL2 and run the “$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql” script to clean up the new PDB, removing any items that should not be present in a PDB.
CDB$ROOT@CDB2> connect sys/oracle@pdb_orcl2 as sysdba
PDB_ORCL2@CDB2> @?/rdbms/admin/noncdb_to_pdb
PDB_ORCL2@CDB2> alter pluggable database open;
– Check that data from non-CDB is available in the new PDB
PDB_ORCL2@CDB2> select count(empno) from scott.emp;

COUNT(EMPNO)
------------
          14
– Verify that temporary tablespace has not been created in PDB_ORCL2
PDB_ORCL2@CDB2> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
USERS
EXAMPLE
– Verify that tempfile has not been created in PDB_ORCL2
PDB_ORCL2@CDB2> select name from v$tempfile;

no rows selected
So, I was able to create a PDB without temporary tablespace. Now I wanted to check if PDB_ORCL2 uses the temp tablespace of the CDB.
– First check that default temporary tablespace TEMP exists for the CDB
CDB$ROOT@CDB2> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS

CDB$ROOT@CDB2> select PROPERTY_NAME, PROPERTY_VALUE from database_properties where upper(PROPERTY_NAME) like '%TEMP%';

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ---------------
DEFAULT_TEMP_TABLESPACE        TEMP
– Set pga_aggregate_target to its lowest allowed value of 10m to force a sort to spill to the temporary tablespace.
CDB$ROOT@CDB2> alter system set pga_aggregate_target=10m;
– Issue a query in PDB_ORCL2 which will spill to temporary tablespace
PDB_ORCL2@CDB2> select * from dba_objects order by 1,2,3,4,5,6,7;
select * from dba_objects order by 1,2,3,4,5,6,7
*
ERROR at line 1:
ORA-00959: tablespace 'TEMP' does not exist
I expected the query to use the temporary tablespace TEMP of the CDB, but it failed: the PDB expects a temp tablespace to exist locally, since its default temporary tablespace is still set to TEMP (as it was in the non-CDB).
PDB_ORCL2@CDB2> col property_name for a30
PDB_ORCL2@CDB2> col property_value for a15
PDB_ORCL2@CDB2> l
  1  select PROPERTY_NAME, PROPERTY_VALUE from database_properties
  2* where upper(PROPERTY_NAME) like '%TEMP%'
PDB_ORCL2@CDB2> /

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ---------------
DEFAULT_TEMP_TABLESPACE        TEMP
– I tried to modify the default temporary tablespace to cdb$root:temp but that attempt also failed.
PDB_ORCL2@CDB2> alter database default temporary tablespace cdb$root:temp;
alter database default temporary tablespace cdb$root:temp
*
ERROR at line 1:
ORA-00933: SQL command not properly ended
Hence, I was able to create a PDB without temporary tablespace but could not make it use the temporary tablespace of CDB.
Back in September at Oracle OpenWorld 2013, Larry Ellison announced the Oracle Database In-Memory Option to Oracle 12c. Today, one of Pythian’s Advanced Technology Consultants, Christo Kutrovsky shares his thoughts on the new feature in our latest video. Stay tuned for updates while it’s being tested out in the field.
The beats of big data and symphonies of small data are creating a dazzling contrast in the realm of databases. Oracle, SQL Server, and MySQL are creating new tunes and all these tonal qualities are being captured by this Log Buffer Edition. Relax, sit back and enjoy.
This blog shows how you can write a SQL query to recommend products (cross-sell) to a customer based on products already placed in his current shopping cart.
R provides a rich set of statistical functions that we may want to use directly from SQL. Many of these results can be readily expressed as structured table data for use with other SQL tables, or for use by SQL-enabled applications, e.g., dashboards or other statistical tools.
Insight in the number, type and severity of errors that happen in a test or production environment is crucial to resolve them, and to make a stable ADF application that is less error-prone.
In order to get meaningful database statistics, the TIMED_STATISTICS parameter must be enabled for the database instance. The performance impact of having TIMED_STATISTICS enabled is minimal.
As you all know, ADS is basically a push technology that sends data from the server to the client without any user intervention. However, the client will periodically send requests to the server asking for updates, which is further configured in adf-config.xml.
SQL Server 2014 and the DBA: Building Bridges.
RS, SharePoint and Forefront UAG Series – Intro.
Introducing the Microsoft Analytics Platform System – the turnkey appliance for big data analytics.
Progressive Insurance data performance grows by a factor of four, fueling business growth and the online experience.
Version 9.04.0013 of the RML Utilities for x86 and x64 has been released to the download center.
Since the dawn of time, MySQL indexes have had a limitation: they cannot be descending. Yes, ALTER TABLE and CREATE INDEX can contain the ASC and DESC keywords, but they are ignored.
It’s now 3 weeks since the MariaDB & MySQL community day in Santa Clara.
FromDual.en: MySQL Environment MyEnv 1.0.3 has been released.
Managing Percona XtraDB Cluster with Puppet.
Every relation is in First Normal Form, by definition. Every table may not be.
Partner Webcast – Oracle Webcenter: Center of Engagement – Case Study: Information Strategy implementing Webcenter in Educational Institutions
We share our skills to maximize your revenue!
Nationwide Deploys Database Applications 600% Faster
Heath Carfrey of Nationwide, a leading global insurance and financial services organization, discusses how Nationwide saves time and effort in database provisioning with Oracle Enterprise Manager.
- Provisioning Databases using Profiles (aka Gold Images)
- Automated Patching
- Config/Compliance tracking
A quick note on how to install EMCLI, which is used for various CLI operations with EM. I wanted to test some database provisioning automation via EMCLI, so I needed to set it up.
To set up EMCLI on the host, follow these steps:
1. Download the emcliadvancedkit.jar from the OMS using URL https://<omshost>:<omsport>/em/public_lib_download/emcli/kit/emcliadvancedkit.jar
2. Set your JAVA_HOME environment variable and ensure that it is part of your PATH. You must be running Java 1.6.0_43 or greater. For example:
o setenv JAVA_HOME /usr/local/packages/j2sdk
o setenv PATH $JAVA_HOME/bin:$PATH
3. You can install the EMCLI client with the scripting option in any directory, either on the same machine on which the OMS is running or on any machine on your network (download emcliadvancedkit.jar to that machine):
java -jar emcliadvancedkit.jar client -install_dir=<emcli client dir>
4. Run emcli help sync from the EMCLI Home (the directory where you have installed emcli) for instructions on how to use the "sync" verb to configure the client for a particular OMS.
5. Navigate to the Setup menu then the Command Line Interface. See the Enterprise Manager Command Line Tools Download page for details on setting EMCLI.
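For convenience, the steps above can be collected into a small shell script. This is only a sketch: the OMS host and port, the Java location, and the install directory are placeholder values you would replace with your own, and the download/install commands are echoed rather than executed.

```shell
#!/bin/sh
# Sketch of the EMCLI client setup steps above.
# OMS_HOST, OMS_PORT, JAVA_HOME and EMCLI_HOME are placeholders -- adjust for your site.
OMS_HOST=omshost.example.com
OMS_PORT=7802
JAVA_HOME=/usr/local/packages/j2sdk   # step 2: must be Java 1.6.0_43 or greater
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH
EMCLI_HOME=$HOME/emcli

# Step 1: the kit is downloaded from the OMS at this URL
KIT_URL="https://$OMS_HOST:$OMS_PORT/em/public_lib_download/emcli/kit/emcliadvancedkit.jar"
echo "Download kit from: $KIT_URL"

# Step 3: install the client with the scripting option into EMCLI_HOME
INSTALL_CMD="java -jar emcliadvancedkit.jar client -install_dir=$EMCLI_HOME"
echo "Then run: $INSTALL_CMD"

# Step 4: from EMCLI_HOME, run "emcli help sync" for instructions on
# configuring the client against a particular OMS.
```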
I found this blog post about the Oracle 12c OCP exam useful: url
In particular, it explained why my Kaplan SelfTest software only covers the new 12c features and not the general DBA skills section of the OCP exam.
The Kaplan software I purchased has 50 questions, all about new features. It showed me the gaps in my 12c new-features knowledge and gave me practice taking a multiple-choice computerized test, and I believe the value of these benefits exceeds the $99 cost of the software. But I was surprised to discover that it didn’t cover all the areas that will be on the OCP 12c upgrade exam. The blog post I’ve referenced explains that in the near future Transcender will produce software that covers both sections of the OCP 12c upgrade exam.
Today’s blog post is the second of three in a series dedicated to data infrastructure outsourcing, with excerpts from our latest white paper.
The trends in our first blog post reveal that enterprises are looking for outside help with their data infrastructure not to reduce costs, but to:
- Improve service quality
- Increase productivity
- Access specialized skills
- Increase innovation
In the full white paper, we examine some of the strengths and weaknesses of the three sourcing options available to help you realize these benefits.
Having data management experts on staff provides organizations with greater control over their work. Additionally, in-house experts have in-depth knowledge about their specific systems and processes. However, relying on in-house resources alone presents multiple challenges for an organization.
There’s no doubt that offshore providers have offered cost savings in the past. However, the fact that many companies are now reshoring their operations indicates that those benefits did not compensate for the downsides of this approach—typically reduced productivity.
The right outsourcing vendor can deliver the benefits of the other two sourcing models without the equivalent downsides. For a deeper analysis of the strengths and weaknesses of the three sourcing options, download the rest of our white paper, Data Infrastructure Outsourcing.
Read the first blog post in this series, The Growing Trend Toward Data Infrastructure Outsourcing.
Presentation of Information Lifecycle Management (ILM). With data volumes growing, finding storage solutions adapted to storage-cost and performance objectives is a real challenge for IT departments in large companies. Information Lifecycle Management is about managing data all along its useful life while offering the best performance at a storage cost as low as possible. The [...]
Here are notes by Matthew Morris on the upgrade for 9i/10g/11g OCAs : A Lifeline for 9i and 10g OCAs
In Oracle 12c, you can connect to a PDB using two methods:
- Switch the container using Alter session set container …
- Use connect command to connect to PDB using network alias
Let’s compare the two methods. The use of SET CONTAINER avoids the need to create a new connection from scratch.
If there is an existing connection to a PDB / CDB$root, the same connection can be used to connect to desired PDB / CDB$root.
– Connect to CDB
[oracle@em12 ~]$ sqlplus system/oracle@cdb1

CDB$ROOT@CDB1> sho con_name

CON_NAME
------------------------------
CDB$ROOT
– Check the PID for the process created on the operating system
[oracle@em12 ~]$ ps -ef |grep LOCAL |grep -v grep
oracle 23271 1 0 10:23 ? 00:00:00 oraclecdb1 (LOCAL=NO)
– Change the container to PDB1 using Set container
CDB$ROOT@CDB1> alter session set container=pdb1;
CDB$ROOT@CDB1> sho con_name

CON_NAME
------------------------------
PDB1
– Check that the operating system PID remains the same as earlier connection is reused and a new connection has not been created
[oracle@em12 ~]$ ps -ef |grep LOCAL |grep -v grep
oracle 23271 1 0 10:23 ? 00:00:00 oraclecdb1 (LOCAL=NO)
– Switch the container back to cdb$root using connect
CDB$ROOT@CDB1> conn system/oracle@cdb1
CDB$ROOT@CDB1> sho con_name

CON_NAME
------------------------------
CDB$ROOT
– Check that a new operating system PID has been created as a new connection has been created
[oracle@em12 ~]$ ps -ef |grep LOCAL |grep -v grep
oracle 23409 1 0 10:29 ? 00:00:00 oraclecdb1 (LOCAL=NO)

glogin.sql is not executed when Alter session set container is used
To demonstrate it, I have added following lines to my glogin.sql to display CDB/PDB name in SQL prompt:
define gname=idle
column global_name new_value gname
set heading off
set termout off
col global_name noprint
select upper(sys_context ('userenv', 'con_name') || '@' || sys_context('userenv', 'db_name')) global_name from dual;
set sqlprompt '&gname> '
set heading on
set termout on
- Let’s connect to PDB1 using “Connect” and verify that glogin.sql is executed and prompt displays CDB/PDB name
SQL> conn sys/oracle@pdb1 as sysdba
PDB1@CDB1>
- Verify that the prompt displays current container (PDB1) and container database (CDB1)
PDB1@CDB1> sho con_name
PDB1

PDB1@CDB1> sho parameter db_name
db_name                              string      cdb1
– Now let’s connect to PDB2 using Alter session set container and verify that glogin.sql is not executed and the same prompt as earlier is displayed
PDB1@CDB1> alter session set container=pdb2;

Session altered.
PDB1@CDB1> sho con_name
CON_NAME ------------------------------ PDB2
-- Let's connect to PDB2 using connect and verify that glogin.sql is executed as the prompt displays the PDB name PDB2
PDB1@CDB1> connect sys/oracle@pdb2 as sysdba
PDB2@CDB1>

Pending transactions are not committed when Alter session set container is used
– Let’s start a transaction in PDB1
PDB1@CDB1> create table pdb1_tab(x number);
PDB1@CDB1> insert into pdb1_tab values (1);
1 row created.
– Switch the container to PDB2
PDB1@CDB1> alter session set container=pdb2;
– Try to start another transaction in PDB2 – this is not allowed, as an active transaction exists in the parent container PDB1
PDB1@CDB1> create table pdb2_tab (x number);
create table pdb2_tab (x number)
*
ERROR at line 1:
ORA-65023: active transaction exists in container PDB1
– In another session check that the transaction was not committed and no rows are visible in table pdb1_tab
CDB$ROOT@CDB1> conn system/oracle@pdb1
PDB1@CDB1> select * from pdb1_tab;

no rows selected

Alter session set container cannot be used by local users
– Try to grant the SET CONTAINER privilege to a local user HR in PDB2 – this fails, as a common privilege cannot be granted to a local user, and hence a local user cannot use alter session set container to connect to another PDB
PDB2@CDB1> connect system/oracle@pdb2
PDB2@CDB1> grant set container to hr container=all;
grant set container to hr container=all
*
ERROR at line 1:
ORA-65030: one may not grant a Common Privilege to a Local User or Role
I hope this post was useful.
Your comments and suggestions are always welcome.
The post 12c: Connecting to CDB/PDB – Set Container Vs Connect appeared first on ORACLE IN ACTION.
My company has posted a Lead Oracle DBA position located in Phoenix, Arizona which is where I also live.
You have to apply through our web site using this link:
We would love to get someone who has PeopleSoft skills.
You would be joining a friendly and experienced team of Oracle and SQL Server DBAs who support a wide variety of applications. I’ve been here eight years, and in that time the job has expanded my scope by exposing me to data warehouse and customer-facing web applications that I had not previously supported. It’s a good position for a qualified person.