Feed aggregator

Self Join with Aggregate function

Tom Kyte - Mon, 2016-09-12 12:06
Hi, I got this question in one interview using Self Join Table Employee Salary 1 1K 2 2K 3 3K 4 4K 5 5K Output is 1 1K 2 3K 3 6K 4 10K ...
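The expected output is a running total of the salaries (1K, 3K, 6K, 10K, ...). One classic self-join answer joins each employee to every employee with an id at or below their own and sums the joined salaries; the sketch below (Python, my own illustration of the join logic – the interview answer itself would be SQL) shows the pairing rule:

```python
# Sketch of the self-join logic behind a running total: pair each
# employee with every employee whose id is <= their own, then sum the
# joined salaries - the effect of "t1 join t2 on t2.id <= t1.id".
def running_totals(salaries):
    rows = list(enumerate(salaries, start=1))          # (id, salary)
    return [
        (emp_id, sum(s for other_id, s in rows if other_id <= emp_id))
        for emp_id, _ in rows
    ]

print(running_totals([1000, 2000, 3000, 4000, 5000]))
# [(1, 1000), (2, 3000), (3, 6000), (4, 10000), (5, 15000)]
```

In SQL terms this corresponds to joining the table to itself on `t2.id <= t1.id` and grouping by `t1.id`.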
Categories: DBA Blogs

Execute multiple DDL statements at once

Tom Kyte - Mon, 2016-09-12 12:06
Hi Tom, I need your expertise in this regard. I'm using a self programmed version control system to upgrade my customers systems. In it all the changeset with packages, functions, DDL commands, DML commands and all this are saved in BLOBs in the...
Categories: DBA Blogs

Stats Calculation privileges

Tom Kyte - Mon, 2016-09-12 12:06
Hi, We have a database with multiple schemas and we need proactive stats gathering. The idea would be to have one schema (CORE) gather stats for all tables in all schemas, so everything is centralized. Is there any possibility to do this without g...
Categories: DBA Blogs

ORA-00984: column not allowed here in 11g but not in 12c ?

Tom Kyte - Mon, 2016-09-12 12:06
Hi Team, Please check the below case – are we missing something in 12c, or is it syntactically incorrect? SQL> select banner from v$version; BANNER -------------------------------------------------------------------------------- Oracle Dat...
Categories: DBA Blogs

Database Migration.

Tom Kyte - Mon, 2016-09-12 12:06
I have to migrate a database from 11g to 12c. What are the important steps I need to take into consideration? The DB is widely used by the users.
Categories: DBA Blogs

Hierarchical Queries

Tom Kyte - Mon, 2016-09-12 12:06
Oracle Version 11G. I have a requirement to show the data in a Hierarchy manner. It will be shown as a TREE structure in the UI. I have 3 tables I) T_MAIN,T_ITEMS_MAIN It's having the Static data. II) T_PURCH_ITEMS Having purchase...
Categories: DBA Blogs

Ms access to oracle database connectivity

Tom Kyte - Mon, 2016-09-12 12:06
How to configure Oracle to be able to query an MS Access database (.mdb) that is located on a remote server? I tried configuring ODBC and HS, but I am still getting the error "ORA-02019: connection description for remote database not found" I...
Categories: DBA Blogs

RMAN restore issue on duplicating database

Tom Kyte - Mon, 2016-09-12 12:06
Hi Team, While trying to perform RMAN restore getting the below error: RMAN-00571: =========================================================== RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: =============...
Categories: DBA Blogs

Rittman Mead at Oracle OpenWorld 2016

Rittman Mead Consulting - Mon, 2016-09-12 11:21

The big show is upon us and once again Rittman Mead will be actively participating at Oracle OpenWorld 2016. Oracle's premier conference runs September 18–22 at the Moscone Center in beautiful San Francisco, California. I'm looking forward to another great conference, as well as to attending my first ACE Director briefing, held on the Thursday and Friday prior to the event, where Oracle ACE Directors get to hear future plans for the direction of Oracle products and can provide feedback to product teams (and even ask questions of Thomas Kurian).

I'm fortunate enough to have been selected by the ODTUG to present a user group forum session on Sunday, September 18th, the first official day of Oracle OpenWorld 2016. I'll be sharing an updated version of my GoldenGate and Kafka integration talk, Oracle GoldenGate and Apache Kafka: A Deep Dive into Real-Time Data Streaming (Session UGF6161), previously presented at Collaborate and KScope16. The focus will be on how to use the new Oracle GoldenGate for Big Data release 12.2 to replicate database transactions to Apache Kafka. The topic of this session, real-time data streaming, provides a great solution for the challenging task of ingesting data from multiple different sources, and different structures, and making that data readily available for others in the organization to use. By combining GoldenGate, Oracle’s premier data replication technology, and Apache Kafka, the latest open-source streaming and messaging system for big data, we can implement a fast, durable, and scalable solution. Stop by Moscone South 301 on Sunday at 10:30 a.m. to check it out!

On Tuesday morning, Mark Rittman will join Oracle Data Integrator Product Manager Julien Testut and Oracle A-Team Director Christophe Dupupet to discuss Oracle Data Integration and Big Data in their session titled Oracle Data Integration Platform: A Cornerstone for Big Data (Session CON6624). They'll take a look at how the Oracle Data Integration suite of products (Oracle Data Integrator, Oracle GoldenGate, Oracle Metadata Management, and Oracle Big Data Preparation Cloud Service) can help to avoid the complexity often found in big data initiatives and harness the power of the big data technologies. Join them in Moscone West 2022 on Tuesday, September 20th, at 11 a.m. to learn more!

With many successful Oracle Data Integration and Big Data engagements completed throughout the world, the experts at Rittman Mead continue to lead in the Oracle Big Data space. If you'd like to get together to discuss any of the above topics we're presenting, or anything big data, data integration or BI related, feel free to reach out via email (info@rittmanmead.com), Twitter (@mRainey) or drop a note in the comments below. Hope to see you there!

Categories: BI & Warehousing

#OOW16: Content and Experience Management Hands-on-Labs

WebCenter Team - Mon, 2016-09-12 09:44

Authored by: Ellen Gravina, Principal Product Manager, Oracle Product Management

Oracle OpenWorld 2016 is less than two weeks away! As you build out your daily schedule with keynotes, sessions, and visits to the demo grounds, don’t forget to register for one or more of our Hands-on Labs. Come experience the dynamic interaction between product users—experienced and new, our product experts, and the latest offerings in our portfolio. These one-hour, self-paced sessions give you an opportunity to work with the latest product releases through a step-by-step tutorial. Product managers are on hand to answer any questions you have about the lab exercise, or about the Content and Experience portfolio.


Here’s what we have lined up for OOW2016:  

Create an Employee Portal Using Oracle Content & Experience and Process Cloud - HOL7607
Monday, Sep 19  -  2:30-3:30pm  Hotel Nikko—Nikko Ballroom II (3rd Floor)
The latest release of Documents Cloud Service and Sites Cloud Service provides an integrated solution that enables lines of business to rapidly assemble a digital experience for their end users.  In this exercise, you build an interactive employee portal that provides new employees with access to the right training and the right resources to get them on board and connected with their teams. The portal is built with Sites Cloud Service and integrated with Process Cloud Service.  You'll see how easy it is to incorporate existing enterprise content, processes, and social apps all within a single user interface.

Drive Productivity in Your Enterprise Using Oracle Documents Cloud Service  -  HOL7611
Tuesday, Sep 20  -  8:00-9:00am  Hotel Nikko—Nikko Ballroom II (3rd Floor)
Documents Cloud Service is Oracle’s enterprise-grade content collaboration solution, delivering secure cloud-based storage for your enterprise content. It provides anywhere, any device access and flexible file sharing capabilities for collaboration, both inside and outside your organization. In this hands-on lab, you will see how Digital Asset Management, Content Management and Collaboration in the cloud can drive productivity in your enterprise.

Deep Dive into Oracle Content and Experience Cloud Platform  -  HOL7612
Wednesday, Sep 21  -  11:30-12:30pm  Hotel Nikko—Nikko Ballroom II (3rd Floor)
This hands-on-lab gives you an opportunity to understand and use the platform integration features of Documents Cloud Service (DOCS) and Sites Cloud Service. This includes use of REST APIs to interact with content, social conversations and sites, as well as the ability to embed and customize DOCS UI elements into custom applications and sites. You will learn how to build an Application Integration Framework (AIF) application, how to add metadata support to applications, and more. Participants will get the most out of the session if they have a basic understanding of REST and Java Web application development.

Create Rich Solution Templates Using Oracle Content and Experience Cloud  -  HOL7616
Thursday, Sep 22  -  9:45-10:45am  Hotel Nikko—Nikko Ballroom II (3rd Floor)
In this session, you will create templates for Sites Cloud Service. You will create a site, use themes, embed content, develop components, and package the site into a solution template. Business users and IT professionals will see how creating solution templates for Oracle Content and Experience empowers business.

Are you attending OOW? Would you like to register for any of these sessions? Please use My Schedule to get them added to your agenda and reserve your seat! We look forward to seeing you at these sessions. And don’t forget to follow all things #OracleDX and #OOW16 on Twitter!

How I Organise Email At Work To Keep Track Of It All

Complete IT Professional - Mon, 2016-09-12 06:00
Email. It can be a mess. Have you ever looked at your inbox at work, seen a massive list of emails, and not known what to do with them? I used to get that as well. This was until about a year ago when I changed the way I handle my email at work. […]
Categories: Development

PeopleSoft Data Mover Security

The Data Mover allows for total manipulation of data within PeopleSoft. You can use it to transfer data among PeopleSoft databases, regardless of operating system and database vendor. To state that Data Mover scripts need to be carefully secured is an understatement – Data Mover scripts and activities must be HIGHLY secured.

When performing a PeopleSoft security audit Integrigy carefully reviews Data Mover scripts and activities. If you want to look today at your environment, locate where Data Mover scripts are being stored. The location should be secured to only those with privileges to use Data Mover. Ideally, a source code control tool should be used to store and secure Data Mover scripts.
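As one concrete illustration of that kind of review (the helper below is my own sketch, not an Integrigy tool; .dms is the conventional Data Mover script extension, and the directory used in the demonstration is a throwaway example):

```python
import os
import stat
import tempfile

# Hypothetical audit helper: flag *.dms Data Mover scripts that "other"
# users can read, write, or execute - i.e. scripts not restricted to
# the owner and group that are allowed to use Data Mover.
def loose_dms_scripts(script_dir):
    flagged = []
    for name in os.listdir(script_dir):
        if not name.endswith('.dms'):
            continue
        mode = os.stat(os.path.join(script_dir, name)).st_mode
        if mode & (stat.S_IROTH | stat.S_IWOTH | stat.S_IXOTH):
            flagged.append(name)
    return sorted(flagged)

# Quick demonstration against a throwaway directory:
demo = tempfile.mkdtemp()
for fname, mode in [('open.dms', 0o644), ('locked.dms', 0o600)]:
    path = os.path.join(demo, fname)
    open(path, 'w').close()
    os.chmod(path, mode)
print(loose_dms_scripts(demo))  # ['open.dms']
```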

If you have questions, please contact us at info@integrigy.com

Michael A. Miller, CISSP-ISSMP, CCSP

References

PeopleSoft Security Quick Reference

Auditing, Oracle PeopleSoft
Categories: APPS Blogs, Security Blogs

Oracle Database – JSON and the In-Memory Database

Marco Gralike - Mon, 2016-09-12 05:14
On Monday, October 24, a meetup will take place about JSON and the In-Memory Database…

Posts by Alberto Dell'Era On ASH Wait Times And Materialized Views

Randolf Geist - Sun, 2016-09-11 15:23
This is just a short note that links to different articles on Alberto Dell'Era's blog. Alberto is a long-time member of the OakTable and has published a number of posts that I find very useful and that are therefore linked here:

ASH wait times (including an ASH simulator!)

Posts on Materialized Views

Basicfile LOBs 6

Jonathan Lewis - Sun, 2016-09-11 13:48

One of the nice things about declaring your (basicfile) LOBs as “enable storage in row” is that the block addresses of the first 12 chunks will be listed in the row and won’t use the LOB index, so if your LOBs are larger than 3960 bytes but otherwise rather small, the LOB index will hold only the timestamp entries for deleted LOBs. This makes it just a little easier to pick out the information you need when things behave strangely, so in this installment of my series I’m going to talk about an example with storage enabled in row.

I’m going to demonstrate an issue that is causing a major problem. First I’m going to build a table with a LOB column with multiple (4) freepools – because that’s what you do to handle concurrency – then I’m going to start 4 sessions (carefully checking that I have one associated with each free pool) and do a few thousand inserts with commits from each session. The size of the LOB value I insert will be 20KB so it will be “indexed” in the row but stored out of the row taking 3 LOB blocks.
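As a quick sanity check on the "3 LOB blocks" figure (my own arithmetic; it ignores the per-block overhead that makes the usable space per 8KB chunk slightly smaller, which doesn't change the result here):

```python
import math

# A 20,000-byte LOB value written out of row in 8KB chunks needs
# ceil(20000 / 8192) = 3 chunks; per-block overhead is ignored.
def chunks_needed(lob_bytes, chunk_bytes=8192):
    return math.ceil(lob_bytes / chunk_bytes)

print(chunks_needed(20000))  # 3
```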

Once I’ve got the data in place I’m going to use three of the sessions to delete three quarters of the rows from the table, then use a call to the dbms_space package to show you that the segment contains virtually no free space. I’ve engineered the code so that it will take just three more rows in the table to fill the available free space and force Oracle either to allocate a new extent or to start reclaiming some of the deleted (reusable) LOB space – and I’m going to run that insert from the session that DIDN’T delete any rows.

I’ve been running these tests on 11.2.0.4, but get similar behaviour on 12c.

create table t1(
        id      number constraint t1_pk primary key,
        c1      clob
)
lob (c1)
store as 
    basicfile 
    text_lob(
            enable storage in row
            chunk 8k
            retention
            nocache
            logging
            freepools 4
            tablespace test_8k_assm
)
;

declare
        m_v1 varchar2(32767) := rpad('X',20000,'X');
begin
        for i in 0..0 loop
                insert into t1 values (i, m_v1);
                commit;
        end loop;
end;
/

truncate table t1
;

You’ll notice I’ve used the retention keyword.  Before I built the LOB I set my undo_retention to 10 seconds so that the space from deleted LOBs should become available for reuse very quickly. The name of the tablespace I’ve used for the LOB is a clue that I’m using an 8KB block size and ASSM (the latter is a requirement of the retention option).

Here’s the code to check which freepool (0 to 3) a session will be associated with (this isn’t documented, but seems to be correct):


select mod(pid,4) from v$process where addr = (
        select paddr from v$session where sid = (
                select sid from v$mystat where rownum = 1
        )
)
;
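A toy model of that session-starting exercise (the pid values below are invented; nothing here is Oracle internals – the point is just that mod(pid,4) makes freepool coverage a coupon-collector problem):

```python
# Keep "starting sessions" until every freepool 0-3 has been seen:
# each session gets some process id, and pid % 4 picks its freepool.
def sessions_until_covered(pids, freepools=4):
    seen = set()
    for count, pid in enumerate(pids, start=1):
        seen.add(pid % freepools)
        if len(seen) == freepools:
            return count
    return None  # never covered all freepools

print(sessions_until_covered([20, 24, 25, 28, 30, 33, 35]))  # 7
print(sessions_until_covered([12, 13, 14, 15]))              # 4
```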

So I can keep starting sessions and running that query until I’ve got a session covering each freepool. (The first time I tried this I had to start 7 sessions before I got all 4 freepools covered). Now I can run the following from all 4 sessions concurrently:

define m_loop_counter = 12027

lock table t1 in row share mode;
commit;

declare
        m_v1 varchar2(32767) := rpad('x',20000,'x');
begin
        for i in 1..&m_loop_counter loop
                insert into t1 values (s1.nextval, m_v1);
                commit;
        end loop;
end;
/

The purpose of the lock table command is to ensure that all 4 processes start running simultaneously. From a fifth session I execute a “lock table t1 in exclusive mode” before starting the other four sessions running, so they all queue on the exclusive lock; then I commit from the fifth session and away we go. The whole thing took about 30 seconds to run. The rather random-looking value 12,027 was a careful choice to ensure that the last extent in the segment had just a few blocks left – and I used my “dbms_space_use.sql” script to check this, getting the following output:

====
ASSM
====

Unformatted                   :        7 /       57,344
Freespace 1 (  0 -  25% free) :        0 /            0
Freespace 2 ( 25 -  50% free) :        0 /            0
Freespace 3 ( 50 -  75% free) :        0 /            0
Freespace 4 ( 75 - 100% free) :        0 /            0
Full                          :  144,324 / ############

PL/SQL procedure successfully completed.

=======
Generic
=======
Segment Total blocks: 145536
Object Unused blocks: 0

I’ve got 7 “unformatted” blocks in the segment – though in fact these might be “formatted but free” from the perspective of the LOB code.

After going to sessions 0, 1, and 3 and deleting 12,000 rows from each in turn (and committing, leaving a total of 12,108 rows in the table) the report doesn't change: I haven't made any space free; I've simply flagged it in the LOB index as "reusable". So now we go to session 2 and run the following code 3 times – with "set timing on":


SQL> l
  1  declare
  2     m_v1 varchar2(32767) := rpad('x',20000,'x');
  3  begin
  4     for i in 1..1 loop
  5             insert into t1 values (s1.nextval, m_v1);
  6             commit;
  7     end loop;
  8* end;

The first run took 0.02 seconds – and the unformatted count dropped to 4

The second run took 0.01 seconds – and the unformatted count dropped to 1

The third run took 10.74 seconds, of which 9 seconds was CPU. The session generated 500,000 redo entries totalling 100MB of redo from 1 million db block changes after doing 8.4 million logical I/Os, issuing 108,000 enqueue (lock) requests and running 108,000 index range scans. The report of space usage ended up looking like this:


Unformatted                   :  108,125 /  885,760,000
Freespace 1 (  0 -  25% free) :        0 /            0
Freespace 2 ( 25 -  50% free) :        0 /            0
Freespace 3 ( 50 -  75% free) :        0 /            0
Freespace 4 ( 75 - 100% free) :        0 /            0
Full                          :   36,333 /  297,639,936

PL/SQL procedure successfully completed.

=======
Generic
=======
Segment Total blocks: 145664
Object Unused blocks: 0

My session has cleared every single piece of re-usable space from the LOB and made it free (unformatted) before allocating space for its one LOB. (That’s going to hurt when the client has 2 million LOBs on the reusable list and isn’t running everything on SSDs – which is why I’m working on this problem).

If you’re wondering why it takes so much redo and so many buffer visits to free 36,000 LOBs this (roughly) is what Oracle does to free up one reusable LOB of 3 blocks – which corresponds to a single index entry carrying three block ids:

  • Find the lowest index entry in the freepool and pin the index leaf block
  • Identify the last block in the list of 3
  • Lock the relevant L1 space management block for the segment and set the relevant “bit” to “unformatted”
  • Delete the index entry
  • Re-insert the index entry with one block id removed
  • Commit and unlock the L1 bitmap block
  • Repeat the delete/insert cycle for the 2nd block id
  • Repeat the cycle for the 3rd (or 1st, since we’re going backwards) block id – but don’t re-insert the index entry
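Putting rough numbers to that loop (a back-of-envelope model of the steps above; the function and the per-chunk averages are my own arithmetic on the figures already quoted, treating the 100MB of redo as 100 × 1024 × 1024 bytes):

```python
# Each reusable LOB index entry carrying n chunk ids costs one
# delete/re-insert cycle on the LOB index per chunk, so freeing the
# 36,000 deleted LOBs (3 sessions x 12,000 rows, 3 chunks each) takes:
def index_cycles(reusable_lobs, chunks_per_lob):
    return reusable_lobs * chunks_per_lob

chunks_freed = index_cycles(3 * 12000, 3)
print(chunks_freed)                              # 108000 cycles, in line
# with the ~108,000 enqueue requests and index range scans reported
# above; the per-chunk averages then work out at roughly:
print(round(500_000 / chunks_freed, 1))          # 4.6 redo entries/chunk
print(round(100 * 1024 * 1024 / chunks_freed))   # 971 redo bytes/chunk
print(round(8_400_000 / chunks_freed))           # 78 logical I/Os/chunk
```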

Oracle reclaims one block (chunk) at a time. And that’s a bit of a clue to a possible workaround, because event 44951 gets mentioned a couple of times in MoS and on the internet as a workaround to a particular problem of HW enqueue waits for LOBs. MoS note 740075.1 tells us:

When using Automatic Segment Space Management (ASSM), and the fix for Bug 6376915 has been applied in your database (Included in 10.2.0.4 +) it is possible to adjust the number of chunks that are cleaned up when the chunk cleanup operation is required.

This can be enabled by setting event 44951 to a value between 1 and 1024 (default is 1). With the value between 1 and 1024 setting the number of chunks to be cleaned up each time a chunk reclamation operation occurs. This can therefore reduce the number of requests for the High Watermark Enqueue.

Other notes explain that by default only one chunk is cleaned up at a time – which is exactly the behaviour I’m seeing. So what happens when I bounce the database with this event set at level 5 (an arbitrary choice, but larger than the number of chunks in the LOBs I’ve been inserting) in the parameter file and repeat the experiment? On the first attempt it made no difference, but then I changed the experiment slightly and started again. Initially I had done my first re-insert from the one session that hadn’t deleted any rows – which made it an extreme boundary condition; on the second attempt I deleted two rows from the session that had not yet deleted any data (and waited for the retention time to elapse) before doing the inserts from that session.

Deleting two rows would put 6 blocks (in two index entries) onto my re-usable list, so I was starting the inserts with 7 free blocks, 6 reusable blocks and the event set to level 5. Here’s what I saw as I inserted rows one by one.

  • Insert one row: “Unformatted” blocks went up to 9:  I had freed 5 of the reusable blocks then used 3 of them for my lob (7 + 5 – 3 = 9)
  • Insert one row: “Unformatted” blocks went down to 7: I had freed the last reusable block then used 3 blocks for my lob (9 + 1 – 3 = 7)
  • Insert one row: “Unformatted” blocks went down to 4
  • Insert one row: “Unformatted” blocks went down to 1
  • Insert one row: Oracle cleared all the reusable space (11 seconds, 500MB redo), then added an extent (!) to the segment and used 2 of its blocks for part of the new LOB.
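The block accounting in those bullets can be modelled directly (a toy model, with my own names, of this session's own freepool only – the cross-freepool sweep that the fifth insert triggered is outside it): each insert first reclaims up to 5 chunks (the event level) from the reusable list, then consumes 3 for the new LOB.

```python
def simulate(free, reusable, reclaim_batch, chunks_per_lob, inserts):
    history = []
    for _ in range(inserts):
        freed = min(reclaim_batch, reusable)   # reclaim up to the event level
        reusable -= freed
        free += freed
        if free < chunks_per_lob:
            history.append('extend')           # forced to add an extent
            break
        free -= chunks_per_lob                 # chunks used by the new LOB
        history.append(free)
    return history

# 7 free blocks, 6 reusable blocks, event 44951 at level 5, 3-chunk LOBs:
print(simulate(free=7, reusable=6, reclaim_batch=5, chunks_per_lob=3, inserts=5))
# [9, 7, 4, 1, 'extend'] - matching the unformatted counts above
```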

So the event isn’t really connected with my problem – though it adds some efficiency to the processing – and my “boundary condition” is one that’s likely to occur fairly frequently if you’ve got a basicfile LOB defined with multiple freepools. Fortunately it’s probably going to require two pre-conditions before it’s a big problem: first, that you’re handling a large number of LOBs, and second, that your pattern of inserting and deleting is not symmetric – it’s when you use a large number of concurrent sessions for small batches of inserts but a single session for large bulk deletes that all hell can break loose shortly after a delete.

tl;dr

As with many other features of Oracle, skew plays a part in making things break. If you’re doing lots of inserts and deletes of basicfile lobs make sure the mechanisms you use for inserting and deleting look similar: in particular similar numbers of processes to do similar amounts of work for both operations.

 

P.S. It gets worse.

P.P.S. Don’t even start to think that you can work around this by using securefiles.

P.P.P.S. I got a hint from one test that if a reusable LOB is exactly the same size as the LOB being inserted then Oracle very cleverly takes the reusable index entry and rewrites it to be the LOB index entry rather than freeing (and then potentially using) the space it identifies.

 


Getting from http to https

Mathias Magnusson - Sun, 2016-09-11 13:02

The world is moving to https, but that was not the reason for the move.

Initially I was happy to use whatever Digital Ocean (DO) supplied in the WordPress droplet. But as I explained in my last post, I had some problems with moving from wordpress.com to my self-hosted site at DO.

In short, the problem turned out to be Chrome caching which sites accept https, making my site unavailable to every visitor who had been to it in the past, including myself. It seemed as though DO forced https but had not configured the droplet fully. That was not the case at all: it was configured for http but not for https, and setting up the latter was well documented.

Had I understood from the beginning what was happening, this would have been so much easier, but I only realized what Chrome did a couple of minutes after troubleshooting and fixing what I thought was a half-baked config. It would have saved me from spinning up about five extra droplets and reading a lot more about Apache2 than I really needed to. However, I learned a lot, so it was good that I mistook what happened for something the droplet did.

To make it work I started by performing the initial server setup they recommended. Most of it was things I knew I ought to do, so it made sense, but the real reason was that the https post they had referred to lists it as the initial step. So I followed it just to make sure I did not miss a required step.

Then I ran four commands to make Apache load ssl and configure virtual servers for it.

cd /etc/apache2/mods-enabled
ln -s ../mods-available/ssl.conf
ln -s ../mods-available/ssl.load
ln -s ../mods-available/socache_shmcb.load
cd /etc/apache2/sites-enabled
ln -s ../sites-available/default-ssl.conf

In hindsight I’ve realized that the proper way to enable the modules would have been to just run “a2enmod ssl”. However, I have not tried that in a fresh droplet, so I leave it here just as a suggestion.

All that remains now is to use yet another fantastic writeup DO provides. It shows how to create a free ssl certificate using Let’s Encrypt and configure Apache to use it including how to make it renew automatically. It is easy, fast and works with no complication at all. At least it did for me.

In addition to Digital Ocean, which I find extremely impressive, Let's Encrypt is one of the most impressive sites I've found recently that I never knew existed. I recommend everyone go there and read up on what they do and how it works.

Full disclosure: The links to Digital Ocean (DO) in this article use my referral link. Using that I get a discount from them, and so do you. I had this post planned before I happened to get a referral id the other day; they are quite honestly one of the most impressive destinations for IT geeks I can think of. Go check them out using my referral, or just enter digitalocean.com into your favorite web browser (that link is referral-free – use the other to save money).

Basicfile LOBS 5

Jonathan Lewis - Sun, 2016-09-11 12:49

At the end of the last installment we had seen a test case that caused Oracle to add a couple of redundant new extents to a LOB segment after one process deleted 3,000 LOBs and another four concurrent processes inserted 750 LOBs each a few minutes later (after the undo retention period had elapsed). To add to the confusion, the LOBINDEX seemed to show that all the “reusable” chunks had been removed from the index, which suggests that they should have been re-used. Our LOB segment started at 8,192 blocks, is currently at 8,576 blocks, and is only using 8,000 of them.

How will things look if I now connect a new session (which might be associated with a different freepool), delete the oldest 3,000 LOBs, wait a little while, then get my original four sessions to do their concurrent inserts again? And what will things look like after I’ve repeated this cycle several times?

I had to drop the tables from my original test since writing the previous article, so the following results start from recreating the whole test from scratch and won’t align perfectly with the previous sets of results. Here’s what the index treedump looked like after going through the serial delete / concurrent insert cycle 12 times:

----- begin tree dump
branch: 0x1800204 25166340 (0: nrow: 71, level: 1)
   leaf: 0x1800223 25166371 (-1: nrow: 0 rrow: 0)
   leaf: 0x1800227 25166375 (0: nrow: 0 rrow: 0)
   leaf: 0x1800236 25166390 (1: nrow: 0 rrow: 0)
   leaf: 0x180023d 25166397 (2: nrow: 63 rrow: 63)
   leaf: 0x1800206 25166342 (3: nrow: 81 rrow: 81)
   leaf: 0x1800225 25166373 (4: nrow: 81 rrow: 81)
   leaf: 0x1800229 25166377 (5: nrow: 81 rrow: 81)
   leaf: 0x180020a 25166346 (6: nrow: 81 rrow: 81)
   leaf: 0x180020e 25166350 (7: nrow: 81 rrow: 81)
   leaf: 0x1800212 25166354 (8: nrow: 76 rrow: 76)
   leaf: 0x1800216 25166358 (9: nrow: 81 rrow: 81)
   leaf: 0x180021a 25166362 (10: nrow: 81 rrow: 81)
   leaf: 0x180021e 25166366 (11: nrow: 81 rrow: 81)
   leaf: 0x1800222 25166370 (12: nrow: 126 rrow: 126)

   leaf: 0x1800266 25166438 (13: nrow: 0 rrow: 0)
   leaf: 0x180025e 25166430 (14: nrow: 39 rrow: 39)
   leaf: 0x1800262 25166434 (15: nrow: 81 rrow: 81)
   leaf: 0x1800243 25166403 (16: nrow: 81 rrow: 81)
   leaf: 0x1800261 25166433 (17: nrow: 76 rrow: 76)
   leaf: 0x1800269 25166441 (18: nrow: 81 rrow: 81)
   leaf: 0x180026d 25166445 (19: nrow: 81 rrow: 81)
   leaf: 0x1800271 25166449 (20: nrow: 81 rrow: 81)
   leaf: 0x1800275 25166453 (21: nrow: 81 rrow: 81)
   leaf: 0x1800279 25166457 (22: nrow: 81 rrow: 81)
   leaf: 0x180027d 25166461 (23: nrow: 81 rrow: 81)
   leaf: 0x180024a 25166410 (24: nrow: 118 rrow: 118)

   leaf: 0x1800263 25166435 (25: nrow: 0 rrow: 0)
   leaf: 0x180024c 25166412 (26: nrow: 0 rrow: 0)
   leaf: 0x1800254 25166420 (27: nrow: 0 rrow: 0)
   leaf: 0x1800264 25166436 (28: nrow: 1 rrow: 0)
   leaf: 0x1800274 25166452 (29: nrow: 2 rrow: 0)
   leaf: 0x180027c 25166460 (30: nrow: 2 rrow: 0)
   leaf: 0x180025d 25166429 (31: nrow: 2 rrow: 0)
   leaf: 0x1800241 25166401 (32: nrow: 2 rrow: 0)
   leaf: 0x1800245 25166405 (33: nrow: 2 rrow: 0)
   leaf: 0x1800265 25166437 (34: nrow: 1 rrow: 0)
   leaf: 0x1800251 25166417 (35: nrow: 3 rrow: 0)
   leaf: 0x1800249 25166409 (36: nrow: 4 rrow: 0)
   leaf: 0x1800242 25166402 (37: nrow: 1 rrow: 0)
   leaf: 0x1800255 25166421 (38: nrow: 2 rrow: 0)
   leaf: 0x1800259 25166425 (39: nrow: 3 rrow: 0)
   leaf: 0x1800246 25166406 (40: nrow: 1 rrow: 0)

   leaf: 0x1800214 25166356 (41: nrow: 38 rrow: 0)
   leaf: 0x1800218 25166360 (42: nrow: 81 rrow: 0)
   leaf: 0x180021c 25166364 (43: nrow: 81 rrow: 0)
   leaf: 0x1800220 25166368 (44: nrow: 0 rrow: 0)
   leaf: 0x180022d 25166381 (45: nrow: 26 rrow: 26)
   leaf: 0x1800231 25166385 (46: nrow: 81 rrow: 81)
   leaf: 0x1800219 25166361 (47: nrow: 81 rrow: 81)
   leaf: 0x1800235 25166389 (48: nrow: 81 rrow: 81)
   leaf: 0x1800239 25166393 (49: nrow: 81 rrow: 81)
   leaf: 0x180022c 25166380 (50: nrow: 81 rrow: 81)
   leaf: 0x180023c 25166396 (51: nrow: 81 rrow: 81)
   leaf: 0x180022b 25166379 (52: nrow: 81 rrow: 81)
   leaf: 0x180022f 25166383 (53: nrow: 81 rrow: 81)
   leaf: 0x1800233 25166387 (54: nrow: 81 rrow: 81)
   leaf: 0x1800237 25166391 (55: nrow: 81 rrow: 81)
   leaf: 0x180023b 25166395 (56: nrow: 79 rrow: 79)
   leaf: 0x180023f 25166399 (57: nrow: 81 rrow: 81)
   leaf: 0x1800208 25166344 (58: nrow: 81 rrow: 81)
   leaf: 0x180020c 25166348 (59: nrow: 81 rrow: 81)
   leaf: 0x1800210 25166352 (60: nrow: 120 rrow: 120)

   leaf: 0x180021d 25166365 (61: nrow: 0 rrow: 0)

   leaf: 0x1800248 25166408 (62: nrow: 21 rrow: 21)
   leaf: 0x1800268 25166440 (63: nrow: 81 rrow: 81)
   leaf: 0x180026c 25166444 (64: nrow: 152 rrow: 152)
   leaf: 0x180026b 25166443 (65: nrow: 152 rrow: 152)
   leaf: 0x180026f 25166447 (66: nrow: 152 rrow: 152)
   leaf: 0x1800273 25166451 (67: nrow: 152 rrow: 152)
   leaf: 0x1800277 25166455 (68: nrow: 152 rrow: 152)
   leaf: 0x180027b 25166459 (69: nrow: 66 rrow: 66)
----- end tree dump

As usual I’ve split the treedump into the sections that reflect the freepools, each of which could consist of two parts: the LOBs (key values starting with even numbers) and the “reusable chunks” (key values starting with odd numbers). The dump suggests that things have worked well: as you can see it’s grown a few blocks after my 12 cycles but there are only 6 sections (not the full 8 that might be there), and only a few leaf blocks showing “empty” (rrows = 0). As “reusable” sections have appeared the index has grown a little, then the reusable entries have been taken off the index and the index has shrunk a bit; you can even see that freepool 3 (the highest numbered one) is still showing a pattern of 152 LOBs indexed per block – this is despite the fact that at one point a reusable section for freepool 3 (00 07) appeared above this section and then disappeared as those reusable chunks were reclaimed.
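For what it's worth, here's how I read the key prefixes in the "col 0" dump below (this is inference from the dumps, not a documented format): the first two bytes are 2 × freepool for LOB entries (hence even) and 2 × freepool + 1 for reusable-chunk entries (hence odd).

```python
# Decode the first two bytes of a LOB index "col 0" value into
# (entry kind, freepool number) - inferred from the dumps, not documented.
def classify(prefix):
    n = int.from_bytes(prefix, 'big')
    if n % 2 == 0:
        return ('lob', n // 2)
    return ('reusable', (n - 1) // 2)

print(classify(b'\x00\x00'))  # ('lob', 0)       - freepool 0 LOBs
print(classify(b'\x00\x03'))  # ('reusable', 1)  - freepool 1 reusable chunks
print(classify(b'\x00\x07'))  # ('reusable', 3)  - the "00 07" section mentioned
```

That decodes 00 00, 00 02 and 00 04 as the LOB sections for freepools 0, 1 and 2, and 00 03 / 00 07 as the reusable sections for freepools 1 and 3 – consistent with the section breaks above.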

All in all the index seems to be behaving extremely well, with only a little growth and (probably temporarily) a couple of little glitches of empty leaf blocks.

Here’s the dump of the (slightly edited) “col 0” values to confirm where the freepool breaks were –

 0:     col 0; len 10; (10):  00 00 00 01 00 00 09 df 2c cc
 1:     col 0; len 10; (10):  00 00 00 01 00 00 09 df 31 61
 2:     col 0; len 10; (10):  00 00 00 01 00 00 09 df 33 61
 3:     col 0; len  9; ( 9):  00 00 00 01 00 00 09 df 36
 4:     col 0; len 10; (10):  00 00 00 01 00 00 09 df 37 1e
 5:     col 0; len 10; (10):  00 00 00 01 00 00 09 df 38 37
 6:     col 0; len 10; (10):  00 00 00 01 00 00 09 df 39 1e
 7:     col 0; len 10; (10):  00 00 00 01 00 00 09 df 3a 37
 8:     col 0; len 10; (10):  00 00 00 01 00 00 09 df 3b 50
 9:     col 0; len  9; ( 9):  00 00 00 01 00 00 09 df 3c
10:     col 0; len 10; (10):  00 00 00 01 00 00 09 df 3d 4b
11:     col 0; len  9; ( 9):  00 00 00 01 00 00 09 df 3e
12:     col 0; len 10; (10):  00 00 00 01 00 00 09 df 3e e7

13:     col 0; len 10; (10):  00 02 00 01 00 00 09 df 25 9b
14:     col 0; len 10; (10):  00 02 00 01 00 00 09 df 32 a0
15:     col 0; len 10; (10):  00 02 00 01 00 00 09 df 34 1d
16:     col 0; len 10; (10):  00 02 00 01 00 00 09 df 36 c6
17:     col 0; len 10; (10):  00 02 00 01 00 00 09 df 39 3d
18:     col 0; len  9; ( 9):  00 02 00 01 00 00 09 df 3d
19:     col 0; len 10; (10):  00 02 00 01 00 00 09 df 3f 52
20:     col 0; len 10; (10):  00 02 00 01 00 00 09 df 40 cf
21:     col 0; len 10; (10):  00 02 00 01 00 00 09 df 41 20
22:     col 0; len 10; (10):  00 02 00 01 00 00 09 df 41 71
23:     col 0; len 10; (10):  00 02 00 01 00 00 09 df 41 c2
24:     col 0; len 10; (10):  00 02 00 01 00 00 09 df 42 13

25:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
26:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
27:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
28:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
29:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
30:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
31:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
32:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
33:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
34:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
35:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
36:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
37:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
38:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
39:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00
40:     col 0; len 10; (10):  00 03 57 bc ba 2f 00 00 00 00

41:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 26 52
42:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 2a 27
43:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 2a dc
44:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 2b 2d
45:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 31 34
46:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 33 15
47:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 34 92
48:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 34 e3
49:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 35 34
50:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 35 85
51:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 35 d6
52:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 36 27
53:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 38 6c
54:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 38 ef
55:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 3a d0
56:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 3c 4d
57:     col 0; len  9; ( 9):  00 04 00 01 00 00 09 df 3d
58:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 3e 4b
59:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 3f 96
60:     col 0; len 10; (10):  00 04 00 01 00 00 09 df 40 7d

61:     col 0; len 10; (10):  00 05 57 bc b9 db 00 00 00 00

62:     col 0; len 10; (10):  00 06 00 01 00 00 09 df 32 5b
63:     col 0; len 10; (10):  00 06 00 01 00 00 09 df 33 a6
64:     col 0; len 10; (10):  00 06 00 01 00 00 09 df 36 4f
65:     col 0; len 10; (10):  00 06 00 01 00 00 09 df 38 13
66:     col 0; len 10; (10):  00 06 00 01 00 00 09 df 3a 09
67:     col 0; len 10; (10):  00 06 00 01 00 00 09 df 3b cd
68:     col 0; len 10; (10):  00 06 00 01 00 00 09 df 3d 91
69:     col 0; len 10; (10):  00 06 00 01 00 00 09 df 3f 23

As you can see, at leaf block 61 we didn’t quite empty the reusable list from freepool 2 (00 05 = 2 * 2 + 1), and leaf blocks 25 to 40 tell us that freepool 1 (00 03 = 2 * 1 + 1) was the freepool used on the last big delete. Despite the odd little glitches, this strategy of “deleted LOBs go to my process’ freepool” seems to do a good job of reusing index space.

But there IS a problem. Here’s the output from a script I wrote using the dbms_space package to show how space in the LOB segment has been used:


Unformatted                   :    4,508 /   36,929,536
Freespace 1 (  0 -  25% free) :        0 /            0
Freespace 2 ( 25 -  50% free) :        0 /            0
Freespace 3 ( 50 -  75% free) :        0 /            0
Freespace 4 ( 75 - 100% free) :        0 /            0
Full                          :    8,000 /   65,536,000

Segment Total blocks: 12672
Object Unused blocks: 0

The LOB segment has grown from an initial 8,192 blocks with a few unformatted blocks and 8,000 used blocks (2 blocks per LOB, 4,000 LOBs) to 12,672 blocks with 4,508 blocks unformatted. (The difference between Full + Unformatted and Segment Total blocks is the set of bitmap space management blocks for the segment). After only 12 cycles we have “leaked” an overhead of 50% of our real data space – maybe this helps to explain why the client that started me down this set of blogs has seen Oracle allocate 700GB to hold just 200GB of LOBs.

The tablespace is declared as locally managed with 1MB uniform extents and automatic segment space management. By writing a simple script I can get Oracle to write a script to dump the first block of each extent – and they will all be Level 1 bitmap space management blocks. Running grep against the trace file I can pick out the lines that tell me how many data blocks are mapped by the bitmap and how many of them have been formatted or not. This is the result (I have 99 extents in the segment – 99 * 128 = 12,672):

   unformatted: 0       total: 64        first useful block: 4
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 24      total: 64        first useful block: 2
   unformatted: 52      total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 26      total: 64        first useful block: 2
   unformatted: 42      total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 42      total: 64        first useful block: 2
   unformatted: 0       total: 64        first useful block: 2
   unformatted: 26      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 62      total: 64        first useful block: 2
   unformatted: 127     total: 128       first useful block: 1
   unformatted: 2       total: 128       first useful block: 1
   unformatted: 2       total: 128       first useful block: 1
   unformatted: 1       total: 128       first useful block: 1
   unformatted: 125     total: 128       first useful block: 1
   unformatted: 125     total: 128       first useful block: 1
   unformatted: 125     total: 128       first useful block: 1
   unformatted: 1       total: 128       first useful block: 1
   unformatted: 2       total: 128       first useful block: 1
   unformatted: 2       total: 128       first useful block: 1
   unformatted: 125     total: 128       first useful block: 1
   unformatted: 125     total: 128       first useful block: 1
   unformatted: 108     total: 128       first useful block: 1
   unformatted: 1       total: 128       first useful block: 1
   unformatted: 2       total: 128       first useful block: 1
   unformatted: 2       total: 128       first useful block: 1
   unformatted: 105     total: 128       first useful block: 1
   unformatted: 96      total: 128       first useful block: 1
   unformatted: 125     total: 128       first useful block: 1
   unformatted: 2       total: 128       first useful block: 1
   unformatted: 2       total: 128       first useful block: 1
   unformatted: 1       total: 128       first useful block: 1
   unformatted: 125     total: 128       first useful block: 1
   unformatted: 125     total: 128       first useful block: 1
   unformatted: 38      total: 128       first useful block: 1
   unformatted: 2       total: 128       first useful block: 1
   unformatted: 1       total: 128       first useful block: 1
   unformatted: 2       total: 128       first useful block: 1
   unformatted: 65      total: 128       first useful block: 1
   unformatted: 98      total: 128       first useful block: 1
   unformatted: 125     total: 128       first useful block: 1
   unformatted: 0       total: 128       first useful block: 1
   unformatted: 0       total: 128       first useful block: 1
   unformatted: 0       total: 128       first useful block: 1
   unformatted: 125     total: 128       first useful block: 1
   unformatted: 125     total: 128       first useful block: 1

You’ll notice that for the first 63 extents Oracle says there are 64 blocks mapped by the bitmap with the first useful block at block 2; thereafter it says there are 128 blocks and the first useful block is block 1 (Oracle is counting from zero). While Oracle thinks the segment is “quite small” it allocates two level 1 bitmap blocks per 1MB extent, and I’ve only dumped the first block from each extent; but when the segment reaches 64MB Oracle decides that it’s getting pretty big and there’s no point in wasting space, so it changes to using a single level 1 bitmap block per 1MB extent. It’s just one of those tiny details you discover when you happen to look a little more closely at how things work. (On a much larger test with 8MB uniform extents Oracle got to the point where it was using one L1 bitmap block for the whole 8MB.)
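The “script that writes a script” mentioned above isn’t reproduced here; a minimal sketch of the idea (the substitution variables and spool file name are my assumptions, not the original code) might look like this:

```sql
rem  Generate one "dump the first block" command per extent,
rem  then run the generated script.  Owner/segment names are
rem  placeholders - adjust before use.

set pagesize 0 linesize 120 feedback off verify off
spool dump_extent_heads.sql

select
        'alter system dump datafile ' || file_id ||
        ' block ' || block_id || ';'
from    dba_extents
where   owner        = upper('&m_owner')
and     segment_name = upper('&m_segment')
order by
        extent_id
;

spool off
@dump_extent_heads.sql
```

The first block of each extent in this ASSM segment is a level 1 bitmap block, so grep-ing the resulting trace file for “unformatted” picks out the lines listed above.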

There’s a fascinating pattern in the later extents of 3 full extents followed by 3 empty extents – my first guess had been that Oracle was allocating new extents but not using them, but clearly that’s not right, it’s removing “reusable chunks” from the index and then not re-using them but using the new extents instead (some of the time). Something is seriously wrong with the way Oracle is handling the “reusable chunks” part of the index. With a little luck it’s some nasty side effect of the “one process delete / multiple process insert” strategy we have adopted, so: (a) we need to repeat the entire experiment with a concurrent delete mechanism and (b) we need to think about how we might re-engineer a REALLY BIG system that has followed this unfortunate strategy for a long time. Of course if (a) turns out to be a disaster as well we don’t really need to think too hard about (b) until we have discovered a good way of dealing with our rolling pattern of inserts and deletes.

Some (minimum effort, we hope) ideas we will have to look at for (b):

  • Oracle has an option to change the freepools count on a LOB segment – do we need to use it, how much work would it entail, and would it require downtime?
  • Oracle has an option to “rebuild” the freepools on a LOB segment
  • We can always try the “shrink space compact” option on the LOB
  • Should we just rebuild (move) the LOB segment – and use a larger extent size while we’re at it
  • Should we recreate the table and change the LOB to Securefiles as we do so – and do all the testing all over again
  • If we’re deleting old data on such a regular pattern should we try to bypass the deletes by partitioning the table in some clever way
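For reference, the rough shape of the syntax for the first four options is sketched below (table and column names are hypothetical, exact syntax and locking behaviour vary by version, and some of these rewrite the entire LOB segment, so treat this as a starting point for testing rather than a recipe):

```sql
-- All object names here are placeholders.
alter table t1 modify lob (c1) (freepools 4);               -- change the freepool count
alter table t1 modify lob (c1) (rebuild freepools);         -- rebuild the freepools
alter table t1 modify lob (c1) (shrink space compact);      -- compact the LOB segment
alter table t1 move lob (c1) store as (tablespace new_ts);  -- rebuild / relocate the segment
```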

TO BE CONTINUED.

 


Space Usage

Jonathan Lewis - Sun, 2016-09-11 12:09

Here’s a simple script that I’ve used for many years to check space usage inside segments.  The comment about freelist groups may be out of date  – I’ve not had to worry about that for a very long time. There is a separate script for securefile lobs.


rem
rem     Script:         dbms_space_use.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2002
rem     Purpose:
rem
rem     Last tested 
rem             12.1.0.1
rem             11.2.0.4
rem             11.1.0.7
rem             10.2.0.3
rem              9.2.0.8
rem     Not tested
rem             10.2.0.5
rem     Not relevant
rem              8.1.7.4
rem
rem     Notes:
rem     For accuracy in free space you (once) needed to set the
rem     scan limit; and for those rare objects cases where you 
rem     had defined multiple freelist groups you still have to
rem     work through each free list group in turn
rem
rem     For the ASSM calls:
rem             FS1     => 0% - 25% free space
rem             FS2     => 25% - 50% free space
rem             FS3     => 50% - 75% free space
rem             FS4     => 75% - 100% free space
rem             Bytes = blocks * block size
rem
rem     Expected errors:
rem             ORA-10614: Operation not allowed on this segment
rem                     (MSSM segment, ASSM call)
rem             ORA-10618: Operation not allowed on this segment
rem                     (ASSM segment, MSSM call)
rem             ORA-03200: the segment type specification is invalid
rem                     (e.g. for LOBINDEX or LOBSEGMENT)
rem                     11g - "LOB" is legal for LOB segments
rem                         - use "INDEX" for the LOBINDEX
rem
rem     For indexes
rem             Blocks are FULL or FS2 (re-usable)
rem
rem     Special case: LOB segments.
rem     The number of blocks reported by FS1 etc. is actually the
rem     number of CHUNKS in use (and they're full or empty). So 
rem     if your CHUNK size is not the same as your block size the
rem     total "blocks" used doesn't match the number of blocks 
rem     below the HWM.
rem
rem     The package dbms_space is created by dbmsspu.sql
rem     and the body is in prvtspcu.plb
rem
rem     11.2 overloads dbms_space.space_usage for securefile lobs
rem     See dbms_space_use_sf.sql
rem


define m_seg_owner      = &1
define m_seg_name       = &2
define m_seg_type       = &3

define m_segment_owner  = &m_seg_owner
define m_segment_name   = &m_seg_name
define m_segment_type   = &m_seg_type

@@setenv

spool dbms_space_use

prompt  ===================
prompt  Freelist management
prompt  ===================

declare
        wrong_ssm       exception;
        pragma exception_init(wrong_ssm, -10618);

        m_free  number(10);
begin
        dbms_space.free_blocks(
                segment_owner           => upper('&m_segment_owner'),
                segment_name            => upper('&m_segment_name'),
                segment_type            => upper('&m_segment_type'),
--              partition_name          => null,
--              scan_limit              => 50,
                freelist_group_id       => 0,
                free_blks               => m_free
        );
        dbms_output.put_line('Free blocks below HWM: ' || m_free);
exception
        when wrong_ssm then
                dbms_output.put_line('Segment not freelist managed');
end;
/


prompt  ====
prompt  ASSM
prompt  ====

declare
        wrong_ssm       exception;
        pragma exception_init(wrong_ssm, -10614);

        m_UNFORMATTED_BLOCKS    number;
        m_UNFORMATTED_BYTES     number;
        m_FS1_BLOCKS            number;
        m_FS1_BYTES             number;
        m_FS2_BLOCKS            number;  
        m_FS2_BYTES             number;

        m_FS3_BLOCKS            number;
        m_FS3_BYTES             number;
        m_FS4_BLOCKS            number; 
        m_FS4_BYTES             number;
        m_FULL_BLOCKS           number;
        m_FULL_BYTES            number;

begin
        dbms_space.SPACE_USAGE(
                upper('&m_segment_owner'),
                upper('&m_segment_name'),
                upper('&m_segment_type'),
--              PARTITION_NAME                  => null,
                m_UNFORMATTED_BLOCKS,
                m_UNFORMATTED_BYTES, 
                m_FS1_BLOCKS , 
                m_FS1_BYTES,
                m_FS2_BLOCKS,  
                m_FS2_BYTES,
                m_FS3_BLOCKS,  
                m_FS3_BYTES,
                m_FS4_BLOCKS,  
                m_FS4_BYTES,
                m_FULL_BLOCKS, 
                m_FULL_BYTES
        );

        dbms_output.new_line;
        dbms_output.put_line('Unformatted                   : ' || to_char(m_unformatted_blocks,'999,990') || ' / ' || to_char(m_unformatted_bytes,'999,999,990'));
        dbms_output.put_line('Freespace 1 (  0 -  25% free) : ' || to_char(m_fs1_blocks,'999,990') || ' / ' || to_char(m_fs1_bytes,'999,999,990'));
        dbms_output.put_line('Freespace 2 ( 25 -  50% free) : ' || to_char(m_fs2_blocks,'999,990') || ' / ' || to_char(m_fs2_bytes,'999,999,990'));
        dbms_output.put_line('Freespace 3 ( 50 -  75% free) : ' || to_char(m_fs3_blocks,'999,990') || ' / ' || to_char(m_fs3_bytes,'999,999,990'));
        dbms_output.put_line('Freespace 4 ( 75 - 100% free) : ' || to_char(m_fs4_blocks,'999,990') || ' / ' || to_char(m_fs4_bytes,'999,999,990'));
        dbms_output.put_line('Full                          : ' || to_char(m_full_blocks,'999,990') || ' / ' || to_char(m_full_bytes,'999,999,990'));

exception
        when wrong_ssm then
                dbms_output.put_line('Segment not ASSM');
end;
.
/


prompt  =======
prompt  Generic
prompt  =======

declare
        m_TOTAL_BLOCKS                  number;
        m_TOTAL_BYTES                   number;
        m_UNUSED_BLOCKS                 number;
        m_UNUSED_BYTES                  number;
        m_LAST_USED_EXTENT_FILE_ID      number;
        m_LAST_USED_EXTENT_BLOCK_ID     number;
        m_LAST_USED_BLOCK               number;
begin
        dbms_space.UNUSED_SPACE(
                upper('&m_segment_owner'),
                upper('&m_segment_name'),
                upper('&m_segment_type'),
--              PARTITION_NAME                  => null,
                m_TOTAL_BLOCKS,
                m_TOTAL_BYTES, 
                m_UNUSED_BLOCKS,  
                m_UNUSED_BYTES,
                m_LAST_USED_EXTENT_FILE_ID, 
                m_LAST_USED_EXTENT_BLOCK_ID,
                m_LAST_USED_BLOCK
        );

        dbms_output.put_line('Segment Total blocks: '  || m_total_blocks);
        dbms_output.put_line('Object Unused blocks: '  || m_unused_blocks);

end;
.
/


spool off


Here’s a sample of output (from a segment using ASSM):


===================
Freelist management
===================
Segment not freelist managed

PL/SQL procedure successfully completed.

====
ASSM
====

Unformatted                   :  132,385 / ############
Freespace 1 (  0 -  25% free) :        0 /            0
Freespace 2 ( 25 -  50% free) :        0 /            0
Freespace 3 ( 50 -  75% free) :        0 /            0
Freespace 4 ( 75 - 100% free) :        0 /            0
Full                          :   12,327 /  100,982,784

PL/SQL procedure successfully completed.

=======
Generic
=======
Segment Total blocks: 145920
Object Unused blocks: 0

PL/SQL procedure successfully completed.

(Looks like I should have increased the length of the byte-count output over the years ;)


What the hell are these template0 and template1 databases in PostgreSQL?

Yann Neuhaus - Sun, 2016-09-11 00:45

When people start to work with PostgreSQL, especially when they are used to Oracle, some things might be very confusing. A few of the questions we usually get asked are:

  • Where is the listener and how can I configure it?
  • When you talk about a PostgreSQL cluster where are the other nodes?
  • Why do we have these template databases and what are they used for?
  • …and some others…

In this post we’ll look at the last point: why do we have two template databases (template0 and template1) and additionally a database called “postgres”? That makes three databases by default. In Oracle we only have one – well, two when you use pluggable databases (the root and pdb$seed). Why does PostgreSQL need three by default? Isn’t that just overhead? Let’s see …

To begin with: assuming these databases are really not required, we can drop them, can’t we?

(postgres@[local]:5432) [postgres] > \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres

Can we drop the “postgres” database?

(postgres@[local]:5432) [postgres] > drop database postgres;
ERROR:  cannot drop the currently open database
Time: 1.052 ms

Ok, this is the first point to remember: you cannot drop a database while users are connected to it (in this case it is my own connection). So let’s try to connect to template1 and then drop the “postgres” database:

postgres@pgbox:/home/postgres/ [PG960] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [template1] > drop database postgres;
DROP DATABASE
Time: 489.287 ms
(postgres@[local]:5432) [template1] > 

Uh, our default “postgres” database is gone. Does it matter? Not really from a PostgreSQL perspective but probably all your clients (pgadmin, monitoring scripts, …) will have a problem now:

postgres@pgbox:/home/postgres/ [postgres] psql
psql: FATAL:  database "postgres" does not exist

The “postgres” database is meant as a default database for clients to connect to. When you administer a PostgreSQL instance which runs under the postgres operating system user, the default database used for a connection is the same as the username => postgres. Now that this database no longer exists you cannot connect unless you provide a database name in your connection request. But we can still connect to “template1”:

postgres@pgbox:/home/postgres/ [postgres] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

Second thing to remember: the “postgres” database is meant as a default database for connections. It is not required, you can drop it, but probably a lot of the tools you use will need to be adjusted because they assume that the “postgres” database is there by default.

Luckily we can easily recover from that:

postgres@pgbox:/home/postgres/ [postgres] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [template1] > create database postgres;
CREATE DATABASE
(postgres@[local]:5432) [template1] > \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(3 rows)

What happened? We connected to template1 again and re-created the “postgres” database. Of course everything we added to the “postgres” database before we dropped it is no longer available. This brings us to the next question: when we create a new database, what or who defines its initial contents?

Third thing to remember: When you create a new database by using the syntax “create database [DB_NAME]” you get an exact copy of template1.

Really? What happens when I modify template1? Let’s add one table and one extension:

postgres@pgbox:/home/postgres/ [postgres] psql template1
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [template1] > create table my_test_tab ( a int );
CREATE TABLE
(postgres@[local]:5432) [template1] > create extension hstore;
CREATE EXTENSION
Time: 123.722 ms
(postgres@[local]:5432) [template1] > \dx
                           List of installed extensions
  Name   | Version |   Schema   |                   Description                    
---------+---------+------------+--------------------------------------------------
 hstore  | 1.4     | public     | data type for storing sets of (key, value) pairs
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

If the above statement is true, every new database created with that syntax should contain the table and the extension, right?
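(The creation of the test database isn’t shown above; a minimal sketch, assuming we want an exact copy of template1:)

```sql
-- Plain CREATE DATABASE implicitly uses template1 as the template.
create database db_test;
```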

(postgres@[local]:5432) [postgres] > \c db_test
You are now connected to database "db_test" as user "postgres".
(postgres@[local]:5432) [db_test] > \d
            List of relations
 Schema |    Name     | Type  |  Owner   
--------+-------------+-------+----------
 public | my_test_tab | table | postgres
(1 row)

(postgres@[local]:5432) [db_test] > \dx
                           List of installed extensions
  Name   | Version |   Schema   |                   Description                    
---------+---------+------------+--------------------------------------------------
 hstore  | 1.4     | public     | data type for storing sets of (key, value) pairs
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

Whatever you put into template1 will be available in a new database if you use the following syntax: “create database [DB_NAME];” This can simplify your deployments a lot if you rely on pre-installed objects for e.g. monitoring or development.

Ok, I got it, but what is template0 for then? To understand this we first take a look at pg_database, especially two of its columns: datallowconn and datistemplate:

(postgres@[local]:5432) [db_test] > select datname,datallowconn,datistemplate from pg_database order by 3;
  datname  | datallowconn | datistemplate 
-----------+--------------+---------------
 postgres  | t            | f
 db_test   | t            | f
 template1 | t            | t
 template0 | f            | t
(4 rows)

When you take a look at “datallowconn” the only database that has it set to false is “template0”. Do you remember the beginning of this post when we tried to drop the “postgres” database? You can only drop a database when there are no connections to it. But: you can only create a database from another database if there are no connections to the source, too. Really? Why then can I create a new database while I am connected to template1, when template1 is the source of the new database?

(postgres@[local]:5432) [template1] > create database db_test_2;
CREATE DATABASE

Confusing? This does not work anymore if there is another session connected to template1.

Let’s try to create another new database, but this time we use db_test as the source. Yes, this is possible if you slightly adjust the syntax. But before we create the database we open another connection to db_test:

(postgres@[local]:5432) [template1] > \q
postgres@pgbox:/home/postgres/ [PG960] psql db_test
psql (9.6rc1 dbi services build)
Type "help" for help.

In another session we try to create a new database with db_test as the source:

postgres@pgbox:/home/postgres/ [PG960] psql postgres
psql (9.6rc1 dbi services build)
Type "help" for help.

(postgres@[local]:5432) [postgres] > create database db_test_3 template db_test;
ERROR:  source database "db_test" is being accessed by other users
DETAIL:  There is 1 other session using the database.

Fourth point to remember: for creating new databases you can use whatever database you like as the source when you specify the template explicitly.
Fifth point to remember: when you want to drop a database, or use it as the template for a new one, there must be no connections to it.

Coming back to the “datistemplate” and “datallowconn” settings: template0 is the only database that has “datallowconn” set to false. Why? Because template0 is meant as the default, unmodifiable database: you should never make any changes there. In a brand new PostgreSQL instance template0 and template1 are exactly the same. But why do I need both of them then? Assume you messed up template1 somehow (installed too many objects, for example). Using template0 you can still recover from that easily:

postgres@pgbox:/home/postgres/ [PG960] psql postgres
psql (9.6rc1 dbi services build)
Type "help" for help.
(postgres@[local]:5432) [postgres] > update pg_database set datistemplate = false where datname = 'template1';
UPDATE 1
(postgres@[local]:5432) [postgres] > drop database template1;
DROP DATABASE
(postgres@[local]:5432) [postgres] > create database template1 template template0;
CREATE DATABASE
(postgres@[local]:5432) [postgres] > update pg_database set datistemplate = true where datname = 'template1';
UPDATE 1

What happened here? I modified template1 so that it is no longer flagged as a template, because you cannot drop a database flagged as a template. Then I dropped and re-created the template1 database using template0 as the template. Now my template1 is an exact copy of template0 again and all the things I messed up are gone. Another use case for this: imagine you modified template1 to include the stuff you rely on, but at some point in the future you need a new database without your modifications (e.g. for restoring a dump). For this you can always use template0 as the template because template0 is always clean. And this is why connections are not allowed to template0.

Of course you can create your own template database(s) by setting “datistemplate” to true:

(postgres@[local]:5432) [postgres] > update pg_database set datistemplate = true where datname = 'db_test';
UPDATE 1
(postgres@[local]:5432) [postgres] > drop database db_test;
ERROR:  cannot drop a template database

What about the overhead?

(postgres@[local]:5432) [postgres] > select * from pg_size_pretty ( pg_database_size ( 'template0' ));
 pg_size_pretty 
----------------
 7233 kB
(1 row)

(postgres@[local]:5432) [postgres] > select * from pg_size_pretty ( pg_database_size ( 'template1' ));
 pg_size_pretty 
----------------
 7233 kB
(1 row)

(postgres@[local]:5432) [postgres] > select * from pg_size_pretty ( pg_database_size ( 'postgres' ));
 pg_size_pretty 
----------------
 7343 kB
(1 row)

This should not really be an issue on today’s hardware. Hope this sheds some light on these default databases.

 

The article What the hell are these template0 and template1 databases in PostgreSQL? appeared first on Blog dbi services.

Working with dates

Tom Kyte - Sat, 2016-09-10 05:06
Hello Again Need your help.. How can convert a date to YYYY-MM-DD hh24:00:00 format I have date like 2016-09-01 09:33:23 i want to have the output like 2016-09-01 08:00:00 (want to subtract and hour from the time and this format) I want t...
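A sketch of one way to get that result (assuming an Oracle DATE column, with the table and column names here invented purely for illustration):

```sql
-- trunc(dt, 'HH24') zeroes the minutes and seconds; subtracting 1/24
-- (one hour expressed as a fraction of a day) steps back one hour.
select to_char( trunc(date_col, 'HH24') - 1/24,
                'YYYY-MM-DD HH24:MI:SS' )       as result
from   t;
-- e.g. 2016-09-01 09:33:23 -> 2016-09-01 08:00:00
```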
Categories: DBA Blogs
