Feed aggregator

Exadata Express Cloud Service: MAX_PDB_STORAGE

Yann Neuhaus - Fri, 2017-02-03 16:06

The separation of roles between the system DBA and the application DBA is the way to go for agile development, and the PDBaaS managed service is an excellent way to play with this concept: you are the PDB administrator, but not the CDB administrator.
Here is an example about tablespace creation/deletion, and the kind of problems that may arise with this architecture.

The PDB administrator manages all the PDB objects. He must be able to create users and tablespaces in order to deploy an application.
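
For example, a minimal deployment script run by the PDB administrator could look like the following (the tablespace name, user name, password and grants are purely illustrative):

SQL> create tablespace APP_DATA datafile size 1G;
SQL> create user APP_OWNER identified by "SomePassword#1" default tablespace APP_DATA quota unlimited on APP_DATA;
SQL> grant create session, create table, create view, create procedure to APP_OWNER;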

Create tablespace

So yes, in Exadata Express Cloud Service we can create a tablespace and this is what I did:

SQL> create tablespace IOPS datafile size 10G;
Tablespace created.

and you don’t need to specify the datafile location because db_create_file_dest is defined:

SQL> show parameter db_create_file_dest
 
NAME TYPE VALUE
------------------- ------ -----
db_create_file_dest string +DATA

Actually, this is something that I cannot change because it is set by the CDB administrator when creating the PDB:

SQL> alter session set db_create_file_dest='/var/tmp'
 
Error report -
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-01031: insufficient privileges

This is a 12.2 feature: it is impossible to create a file outside of this destination:

create tablespace IOPS datafile '/var/tmp/franck.dbf' size 10G
Error report -
ORA-65250: invalid path specified for file - /var/tmp/franck.dbf

I’ve already written about this 12cR2 feature, which is a rather nice one: http://blog.dbi-services.com/12cr2-create_file_dest-for-pdb-isolation/
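
For reference, here is a minimal sketch of what the CDB administrator side might look like when such an isolated PDB is created (the PDB name, admin user and destination are my assumptions, not the actual Exadata Express configuration):

SQL> create pluggable database MYPDB admin user pdbadmin identified by "SomePassword#1" create_file_dest='+DATA';
SQL> alter pluggable database MYPDB open;

With the create_file_dest clause set at PDB creation time, the PDB users cannot create files outside of that destination.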

MAX_PDB_STORAGE

There is another restriction in PDBaaS: a limit on the storage you can use. In this X20 service, the limit is 20G.

When you have reached the maximum, you get the following error:

SQL> create tablespace ANOTHERONE datafile size 30G;
 
create tablespace ANOTHERONE datafile size 30G
Error report -
ORA-65114: space usage in container is too high
65114. 00000 - "space usage in container is too high"
*Cause: Space usage in the current container exceeded the value of MAX_PDB_STORAGE for the container.
*Action: Specify a higher value for MAX_PDB_STORAGE using the ALTER PLUGGABLE DATABASE statement.

Ok. This limit is visible as a property of my PDB:
SQL> select property_name, property_value from database_properties where property_name ='MAX_PDB_STORAGE';
 
PROPERTY_NAME PROPERTY_VALUE
------------- --------------
MAX_PDB_STORAGE 24771223880

And the current size is visible from V$PDBS:
SQL> select con_id,total_size from v$pdbs;
 
CON_ID TOTAL_SIZE
------ -----------
47 24766742528

Nothing is hidden here: this is the sum of my PDB files (datafiles and tempfiles):

SQL> select sum(bytes) from dba_data_files;
SUM(BYTES)
----------
21761523712
 
SQL> select sum(bytes) from dba_temp_files;
SUM(BYTES)
----------
3005218816
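
A single query (my own convenience query, not something provided by the service) shows how close I am to the limit:

SQL> select
       (select to_number(property_value) from database_properties
         where property_name = 'MAX_PDB_STORAGE')          max_bytes,
       (select sum(bytes) from dba_data_files)
         + (select nvl(sum(bytes),0) from dba_temp_files)  used_bytes
     from dual;

  MAX_BYTES  USED_BYTES
----------- -----------
24771223880 24766742528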

Of course, I’m not authorized to increase my limit:

SQL> alter pluggable database storage(maxsize 40G);
 
alter pluggable database storage(maxsize 40G)
Error report -
ORA-01031: insufficient privileges
01031. 00000 - "insufficient privileges"
*Cause: An attempt was made to perform a database operation without the necessary privileges.
*Action: Ask your database administrator or designated security administrator to grant you the necessary privileges

And this is once again a very nice feature coming in 12cR2 multitenant.
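
For completeness, this is roughly what the CDB administrator (or the provisioning automation) might have run to set that limit, sketched here against a hypothetical PDB since the real configuration is not visible to me:

SQL> alter session set container = MYPDB;
SQL> alter pluggable database storage (maxsize 20G);

The same limit can also be set with the storage (maxsize ...) clause of CREATE PLUGGABLE DATABASE.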

So… I’ve filled my X20 service. There is a need for some housekeeping. But…

Drop tablespace

I’m allowed to create and drop tablespaces. Let’s drop that IOPS tablespace:


SQL> drop tablespace iops;
 
drop tablespace iops
Error report -
ORA-01031: insufficient privileges
01031. 00000 - "insufficient privileges"
*Cause: An attempt was made to perform a database operation without the necessary privileges.
*Action: Ask your database administrator or designated security administrator to grant you the necessary privileges

With lockdown profiles, you always get the same message: no clue about what is not authorized. I know that there is a DROP_TABLESPACE_KEEP_DATAFILES feature that can be disabled with lockdown profiles, and this makes sense when the CDB administrator does not want PDB administrators to leave dead datafiles in the system. Here we are on ASM, with OMF, so the datafiles are automatically dropped anyway. But the lockdown works at the statement syntax level, so we have to mention the clause explicitly:

SQL> drop tablespace iops including contents and datafiles;
 
drop tablespace iops including contents and datafiles
Error report -
ORA-38881: Cannot drop tablespace IOPS on primary database due to guaranteed restore points.
38881. 00000 - "Cannot drop tablespace %s on primary database due to guaranteed restore points."
*Cause: An attempt was made to drop a tablespace on a primary database while there are guaranteed restore points. You cannot do this because Flashback database cannot undo dropping of a tablespace.
*Action: Drop all guaranteed restore points first and retry, or delay dropping the tablespace until all guaranteed restore points are removed.

So the syntax is accepted, but here I have another problem: a guaranteed restore point exists, and this prevents the tablespace from being dropped.

Let’s have a look at restore points, because I didn’t create one (this is something we are not allowed to do on Exadata Express Cloud Service, although it would be a nice feature as this service is aimed at developers).

SQL> select * from v$restore_point;
 
SCN                            84602869122
DATABASE_INCARNATION#          2
GUARANTEE_FLASHBACK_DATABASE   YES
STORAGE_SIZE                   18253611008
TIME                           28-JAN-17 03.23.08.000000000 AM
RESTORE_POINT_TIME
PRESERVED                      YES
NAME                           PRE_17_1_2
PDB_RESTORE_POINT              NO
CLEAN_PDB_RESTORE_POINT        NO
PDB_INCARNATION#               0
CON_ID                         0

The PDB was created when I subscribed to the service, on 22-JAN-17, and we are now at 03-FEB-17.

So it seems that the CDB administrator (this is a managed service, so the CDB DBA is Oracle) created a restore point last Saturday.
The name, PRE_17_1_2, looks like something created before a maintenance operation, in case something goes wrong. I had no notification about any maintenance. And anyway, such a restore point is usually removed as soon as possible because it fills the FRA.

I can see some CDB structures, such as the FRA:


SQL> select * from v$recovery_area_usage;
 
FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES CON_ID
--------- ------------------ ------------------------- --------------- ------
CONTROL FILE 0 0 1 0
REDO LOG 0 0 0 0
ARCHIVED LOG 0.81 0.81 193 0
BACKUP PIECE 0.2 0 26 0
IMAGE COPY 0.33 0 10 0
FLASHBACK LOG 1.3 0.98 41 0
FOREIGN ARCHIVED LOG 0 0 0 0
AUXILIARY DATAFILE COPY 0 0 0 0

Ok, that’s not too much. There has not been a lot of activity here during the week. And anyway, the FRA is big:

SQL> show parameter recovery
 
NAME TYPE VALUE
-------------------------- ----------- -----
db_recovery_file_dest string +RECO
db_recovery_file_dest_size big integer 6T

So far so good, but I’m stuck here. The restore point is at CDB level, so I cannot drop it:

SQL> drop restore point PRE_17_1_2;
 
drop restore point PRE_17_1_2
Error report -
ORA-38780: Restore point 'PRE_17_1_2' does not exist.
38780. 00000 - "Restore point '%s' does not exist."
*Cause: The restore point name of the DROP RESTORE POINT command does not exist.
*Action: No action required.
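
Only a session connected to the root container, with the appropriate privileges, can see and drop this restore point. A sketch of what the CDB administrator (Oracle, in this managed service) would have to run:

SQL> alter session set container = CDB$ROOT;
SQL> drop restore point PRE_17_1_2;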

This means that I cannot drop my tablespace. And I cannot even resize the datafiles to their minimum:

SQL> alter database datafile '+DATA/CFCDBA1/46AB9829A44EB2C1E0538846220A6F7D/DATAFILE/iops.1455.934053351' resize 10M;
 
alter database datafile '+DATA/CFCDBA1/46AB9829A44EB2C1E0538846220A6F7D/DATAFILE/iops.1455.934053351' resize 10M
Error report -
ORA-38883: Cannot shrink data file +DATA/CFCDBA1/46AB9829A44EB2C1E0538846220A6F7D/DATAFILE/iops.1455.934053351 on primary database due to guaranteed restore points.
38883. 00000 - "Cannot shrink data file %s on primary database due to guaranteed restore points."
*Cause: An attempt was made to shrink a data file on a primary database while there are guaranteed restore points. You cannot do this because Flashback database cannot undo the shrinking of a data file.
*Action: Drop all guaranteed restore points first and retry, or delay the data file resize until all guaranteed restore points are removed.

I have an empty tablespace that takes all my allocated storage and I cannot remove it.

So what to do? Try to contact support? Or fill the FRA until it raises an alert?
I would try the first one, but the CSI I received with my order is not a valid one…

 

This article Exadata Express Cloud Service: MAX_PDB_STORAGE appeared first on Blog dbi services.

Weekly Link Roundup – Feb 3, 2017

Complete IT Professional - Fri, 2017-02-03 13:31
Here’s a collection of articles I’ve read over the last week that I found interesting. A shorter list than usual, but still some quality articles! Articles I’ve Read Oracle’s Cloud Licensing Change : Be Warned! https://oracle-base.com/blog/2017/01/28/oracles-cloud-licensing-change-be-warned/ Tim Hall from Oracle-Base shared with the community that Oracle’s cloud licensing costs have doubled due to the way […]
Categories: Development

Marketing Asset Management with Documents Cloud, Process Cloud and Sites Cloud Services

WebCenter Team - Fri, 2017-02-03 09:20

Authored by Carlos Picazo, EMEA Digital Specialist Presales team member, on his personal blog "Carlos' Technology Corner".

Marketing Asset Management with Documents Cloud, Process Cloud and Sites Cloud Services

Let’s start 2017 with something different from the previous post. Today we will work with Documents Cloud Service and Process Cloud Service to manage the life-cycle of Marketing assets that are going to be used by a Sites Cloud Service Contributor in a website.

Oracle’s Digital Engagement Portfolio has been designed to work together, with a native integration between Documents Cloud and Process Cloud that allows a PCS task to be kicked off when a new asset has been uploaded to a DoCS folder.

To continue reading, please click here

ORA-01722 invalid number Solution

Complete IT Professional - Fri, 2017-02-03 05:00
Have you gotten an “ORA-01722 invalid number” error? I’ll explain what this error is and how you can resolve it in this article. ORA-01722 Cause So, you’ve run an SQL query (which can be SELECT, INSERT, UPDATE, for example), and you’ve gotten this error: ORA-01722: invalid number The reason for this error is that Oracle […]
Categories: Development

ADF 12c New Groovy API to Work with View Object Methods

Andrejus Baranovski - Fri, 2017-02-03 04:01
I have an interesting topic to share - a new Groovy API in ADF to work with a View Object: get an instance, apply a View Criteria and execute it. I discovered it while experimenting with new features and functionality in ADF 12c. Starting from ADF 12.2.1, we have the option to code Groovy in a separate file with the extension .bcs - ADF BC Groovy Improvements in ADF 12.2.1. This makes sense especially with this new Groovy API - it is more convenient to code and maintain more complex Groovy logic in a separate file. As the Oracle docs say, Groovy runs faster when it is coded in a separate .bcs file, probably because there is no need to parse XML to extract and execute the expression.

The sample application - GroovyADFApp.zip - contains a Groovy implementation for the Employees EO:


There is a validation rule for the Salary attribute coded in Groovy:


Let's take a look at the Groovy method; we can open it in the .bcs file, which is more convenient to review and edit:


1. The method newView gets an instance of the Jobs VO, within the Employees EO context. To get an instance of a VO with newView, it needs to be defined for programmatic access. This can be done in the Model.jpx file, in the Groovy section.


2. The method copyNamedViewCriteria returns an instance of a predefined View Criteria. You need to set the property ExtAllowUntrustedScriptAccess=true for the View Criteria to be accessible in Groovy:


3. The method setViewVariable allows setting a value for a bind variable of the VO. As a rule, the bind variable must be assigned ExtAllowUntrustedScriptAccess=true to be accessible in Groovy.

Let's see how it works at runtime. First it gets the VO instance, applies the bind variable value and then executes the VO. Based on the predefined logic, if a row is returned, validation is successful:


When validation fails, a message is returned:


This Groovy API can be useful when you need to access a VO and execute logic directly from a Groovy script, without coding a Java method.

Will EBS Work with Windows 10 Anniversary Update?

Steven Chan - Fri, 2017-02-03 02:05

Microsoft recently released a major update to Windows 10 called the Windows 10 Anniversary Update (version 1607):

This update includes a number of features generally aimed at end-users, including updates to Cortana, Edge, etc.

Will this work with EBS?

Yes.  Windows 10 is certified with all current EBS releases, including EBS 12.1 and 12.2. This existing certification applies to the Windows 10 Anniversary Update release. No additional EBS 12.1 or 12.2 certifications are required for Windows 10 Anniversary Update compatibility.



Categories: APPS Blogs

Links for 2017-02-02 [del.icio.us]

Categories: DBA Blogs

Create Context results to ora-01031 insufficient privileges when run using dbms_Sql

Tom Kyte - Thu, 2017-02-02 23:46
I have DB user "test" that has create any context privilege. I create a context using the command below CREATE OR REPLACE CONTEXT apex_ctx using kvs_apex_ctx accessed globally I created the package kvs_apex_ctx and context works fine....
Categories: DBA Blogs

In SQL Developer Query Builder can I construct a query involving 2 tables from separate schemas in the same database

Tom Kyte - Thu, 2017-02-02 23:46
In Query Builder, I would like to execute something akin to the example. I already have two connections configured to the same database which differ only by the username/schema. SELECT a1.foo, b1.bar FROM schema_a.table1 a1 JOIN schema_b.table...
Categories: DBA Blogs

Create Random Sets Based on Criteria

Tom Kyte - Thu, 2017-02-02 23:46
I have a table with about 2000 records of males and females of different races. I would like to create 10 random sets of 50 people using the following criteria 50% of 50 (so 25) females (can be part of non-white) 30% of 50 (so 15) non-white of e...
Categories: DBA Blogs

Oracle - grant package access not working for inner tables

Tom Kyte - Thu, 2017-02-02 23:46
Hi! I have two Users on a same Oracle Instance, APP and TRDP. The user APP is a third-party user and the TRDP is one of my own. So the APP user have a package (API) which receives three parameters and updates an internal table. During the processi...
Categories: DBA Blogs

Oracle analytic function for the sum of all records where a value exists

Tom Kyte - Thu, 2017-02-02 23:46
Hello, I need help on analytic query. I have a table like this: CREATE TABLE SCPOMGR.U_CSS_COLORFAM ( CLUSTER_NAME VARCHAR2(40 CHAR), STYLE VARCHAR2(50 CHAR), COLOR_FAM VARCHAR2(50 CHAR), HistQty NUMBER ) w...
Categories: DBA Blogs

AWR - call dbms_space.auto_space_advisor_job_proc ( )

Tom Kyte - Thu, 2017-02-02 23:46
Hi Chris/Connor, On Production (11g 11.2.0.4.0) there were delays in some application queries to execute. While looking into AWR, encountered one entry "call dbms_space.auto_space_advisor_job_proc ()" and it took %IO - 49.85 %CPU - 37.64 Elap...
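
For context, this procedure is run by the automatic segment advisor maintenance task. A common way to check the task, or to disable it if it really turns out to be the culprit (an illustrative sketch, not necessarily the right answer for this particular system), is:

SQL> select client_name, status from dba_autotask_client where client_name = 'auto space advisor';
SQL> exec dbms_auto_task_admin.disable(client_name => 'auto space advisor', operation => NULL, window_name => NULL);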
Categories: DBA Blogs

Sending mails with HTML variable clob attachment (size over 4000 characters)

Tom Kyte - Thu, 2017-02-02 23:46
Hi Tom, I have been reviewing your blog since I find it very interesting, especially the article of sending mails with attachments. I write to you because I have a problem with that subject in case you could throw a cable because I can not get it...
Categories: DBA Blogs

Challenging Block Corruption Questions

Tom Kyte - Thu, 2017-02-02 23:46
Good Day, I have yet another area where I ask for your help in supplementing Oracle documentation. This case is in regards to block corruptions. This has always been one of those items that I have been hesitant to implement. My guess is that < ...
Categories: DBA Blogs

Determine machine name for application server from Oracle database

Tom Kyte - Thu, 2017-02-02 23:46
A front end is hosted on an application server which connects to Oracle database. How to determine the machine name of this application server from database. From SYS_CONTEXT and v$session we do get client machine name (which is machine name of...
Categories: DBA Blogs

Managing Oracle Database Code with SQL Developer, Git, and Developer Cloud Service

Shay Shmeltzer - Thu, 2017-02-02 17:55

Are you writing SQL and PL/SQL code? Need to manage versions & branches? Want to track your to-do tasks? Need to conduct code reviews and peer programming?

Developer Cloud Service can help you!

And now it comes free with your Oracle Database Cloud Service trial or license - check your service dashboard to see if you got one.  Note that even if your database is not in the cloud, but rather on-premises, you can use the same process shown in the video below.

In the demo you'll learn how to:

  • Provision a new cloud project for your team
  • Check SQL scripts into the Git Repository in DevCS
  • Track tasks and to-do items
  • Branch your SQL script code
  • Conduct code review with members of your team
  • Merge branches of code

Check it out:

I blogged about this topic in the past using JDeveloper, but figured out that most of the Oracle database developers actually use SQL Developer - so I thought it would be good to give them a quick 10 minute demo of what they can do by combining the power of SQL Developer and Developer Cloud Service. (The video can also be useful to just get a basic understanding of how SQL Developer works with any Git repo.)

More about Developer Cloud Service here.

Stay tuned for more blog entries on features for Database developers in Oracle Developer Cloud Service - coming soon!

Categories: Development

Database Diagram using SQL Developer

Yann Neuhaus - Thu, 2017-02-02 12:13

The other day a client asked me if I could generate a relational diagram for an Oracle schema. He was just preparing a migration and wanted to see how the tables are organized in the schema.
In this article we will show how this is possible with SQL Developer. We are using SQL Developer 4.1.3.20
sqldev1
Once SQL Developer started just proceed as following :
Launch File ==>Data Modeler ==>Import==>Data Dictionary
sqldev2
Choose your connection and Click Next
sqldev3
Then we can choose the schema
sqldev4
In this example we check all tables and click Next
sqldev5
Review the summary
sqldev6
And then click Finish
logfile

After closing the log file, we have the diagram
sqldev7

We can also save the model as a PDF file or an image file
sqldev8

I hope this article will help

 

This article Database Diagram using SQL Developer appeared first on Blog dbi services.

Upgrades

Jonathan Lewis - Thu, 2017-02-02 07:38

This is a note I wrote a couple of years ago, but never published. Given the way it's written I think it may have been the outline notes for a presentation that I was thinking about rather than an attempt to write a little essay. Since it covers a number of points that are worth considering, and since I've just rediscovered it by accident, I thought I'd publish it pretty much as is. Many of the examples of change are now quite old – the intent was to demonstrate how to be cautious rather than trying to supply every possible change that might affect your next upgrade.

We start with a few general observations:

  • The effort worth spending to minimise the risk of performance-related surprises on an upgrade depends on how critical the system is.
  • The effort needed to minimise the risk of performance-related surprises on an upgrade depends on how complex the system is.
  • The more “rare” features and “cunning strategies” and the greater the level of concurrent activity the more likely you are to find surprising effects.

Typical causes of problems are:

  • New automatic jobs installed by Oracle, which might cause extra load during overnight batch tasks
    • e.g. automatic stats collection (10g)
    • Automatic evolution of baselines (12c)
  • Changes to existing packaged procedures
    • e.g. switch to atomic refresh of MVs (11g)
      • changed the time take to do the refresh itself and added a new load to the redo log activity
  • automatic histograms (10g)
    • changed the time taken to collect stats
    • changed lots of execution plans as a side effect
    • changed the behaviour of cursor_sharing=similar as a side effect,
      • which increased CPU and library cache loading
  • “Notable changes” in behaviour
    • e.g. The standard audit trail no longer updates aud$ rows, it always inserts new records
      • This had side effects on redo generation
      • This also allowed Oracle to drop an index, with side effects on typical DBA queries
    • Oracle also introduced a timestamp with time zone as the “audit timestamp”
      • This had a catastrophic side effects on a particular OEM “failed logins” query
  • New optimizer features appear
    • This can introduce lots of new plans – and a few may perform badly for your data set
  • Optimizer bugs will be fixed
    • This can also introduce new plans – and a few may perform badly for your data set
  • New stats collection mechanisms
    • e.g. approximate_ndv with auto_sample_size (11g)
    • If introduced by default you could get unlucky
      • (See histogram comment above, for example).
    • If not introduced by default (now) you might want to test and adopt them on the upgrade
    • This may require changes to your current coding, and checks for bad changes in plans.
  • Concurrency issues
    • Could introduce side effects in locking, latching, mutexes on busy systems
    • Very hard to test without knowing the system
    • e.g. partition exchange and query optimisation colliding (various changes in 11g)

Background reading can prompt you to look at particular features of your application that might be at risk:

  • Read the “New Features” guide
  • Read the “Changes in this release” section of the various reference guides
  • Check MoS for upgrade notes for the version you are aiming at
  • Search MoS for any documents with the text: “Notable changes {in|of} behaviour”
  • Check MoS for bug fix notes for later releases (e.g. 12.1 if upgrading to 11.2)

Test as much as possible, and as realistically as possible. You could take advantage of flashback database to repeat large test suites starting from the same point in time with the same data set, but making changes to database structures or code strategies between tests. Proper concurrency tests are ultimately going to be the hardest things to do right – but you have to work on it (or understand Oracle’s architecture really well) if you are upgrading a highly concurrent system.
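
For example, a guaranteed restore point makes it easy to rewind the whole database to the same starting state between test runs. A minimal sketch (assuming the database is in ARCHIVELOG mode and a short outage between runs is acceptable):

SQL> create restore point before_test_run guarantee flashback database;
-- ... run the test suite and capture the results ...
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point before_test_run;
SQL> alter database open resetlogs;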

 


Recursive Function Calling in PL/SQL

Tom Kyte - Thu, 2017-02-02 05:26
How to call a recursive function in pl\sql function xyz(i_birthdate varchar2) return number as y varchar2(25); x number; o_ret_date varchar2(25); I have written code to generate files which takes i_birthdate as input(it includes some curs...
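
As a minimal illustration of the basic pattern (a simple factorial, unrelated to the original poster's file-generation logic), a PL/SQL function can simply call itself:

create or replace function fact(n number) return number as
begin
  if n <= 1 then
    return 1;              -- base case stops the recursion
  end if;
  return n * fact(n - 1);  -- the function calls itself here
end;
/

SQL> select fact(5) from dual;

   FACT(5)
----------
       120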
Categories: DBA Blogs
