
Feed aggregator

Microsoft Hadoop: Taming the Big Challenge of Big Data – Part Two

Pythian Group - Thu, 2014-10-02 11:13

Today’s blog post is the second in a three-part series with excerpts from our latest white paper, Microsoft Hadoop: Taming the Big Challenge of Big Data. In our first blog post, we revealed just how much data is being generated globally every minute – and that it has doubled since 2011.

Traditional database management tools were not designed to handle the elements that make big data so much more complex, namely its key differentiators: volume, variety, and velocity.

Volume is the quantity of data, variety refers to the type of data collected (image, audio, video, etc.), and velocity is its expected growth rate. Many people assume that big data always includes high volume and intuitively understand the
challenges that it presents. In reality, however, data variety and velocity are much more likely to prevent traditional management tools from being able to efficiently capture, store, report, analyze, and archive the data, regardless of volume.

Download our full white paper which explores the technical and business advantages of effectively managing big data, regardless of quantity, scope, or speed.

 

Categories: DBA Blogs

Oracle Technology Network Wednesday in Review / Thursday Preview - Oracle OpenWorld and JavaOne

OTN TechBlog - Thu, 2014-10-02 10:20

Annual Blogger Meetup was a hoot!  Thanks for joining us.

OTN Lounge activities come to a close today: it's your last chance to learn more about the Oracle ACE Program and the Oracle Community from the program leads.  See more below -

Oracle ACE Program – 11:30 to 1:30 - The Oracle ACE Program recognizes prominent advocates. Learn how to become an Oracle ACE, gain community recognition for sharing your knowledge and expertise, and advance your career by networking with like-minded peers.

Oracle Community - 11:30 to 3:30 - Learn about the Oracle Technology Network Community Platform, and get a preview of the new badges that are coming soon! Get answers to questions, network with peers, and be rewarded for your expertise in the Oracle Community

Don't forget the OTN team has been busy shooting video and attending sessions.  See what they've been up to so far -

Blogs -
The Java Source Blog
OTN DBA/DEV Watercooler

YouTube Channels -
OTN
Java
OTN Garage
OTN ArchBeat

Follow @JavaOneConf for conference-specific announcements

See you next year!

OCP 12C – Auditing

DBA Scripts and Articles - Thu, 2014-10-02 09:57

Unified Audit Data Trail Unified Auditing offers a consolidated approach: all the audit data is consolidated in a single place. Unified Auditing consolidates audit records from the following sources: Standard Auditing, Fine-Grained Auditing (DBMS_FGA), RAC security auditing, RMAN auditing, Database Vault auditing, Oracle Label Security auditing, Oracle Data Mining, Oracle Data Pump, Oracle SQL*Loader. In [...]
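As a rough sketch of how that consolidation looks in practice (the policy, schema, and object names below are hypothetical, not from the excerpt):

```sql
-- Define a unified audit policy (illustrative names)
CREATE AUDIT POLICY hr_select_pol
  ACTIONS SELECT ON hr.employees;

-- Enable the policy
AUDIT POLICY hr_select_pol;

-- Whatever the source (standard auditing, FGA, RMAN, Data Pump, ...),
-- the records all land in the single UNIFIED_AUDIT_TRAIL view
SELECT event_timestamp, dbusername, action_name, object_name
FROM   unified_audit_trail
ORDER  BY event_timestamp;
```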

The post OCP 12C – Auditing appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Kuali Foundation: Clarification on future proprietary code

Michael Feldstein - Thu, 2014-10-02 08:35

Well that was an interesting session at Educause as described at Inside Higher Ed:

It took the Kuali leadership 20 minutes to address the elephant in the conference center meeting room.

“Change is ugly, and change is difficult, and the only difference here is you’re going to see all the ugliness as we go through the change because we’re completely transparent,” said John F. (Barry) Walsh, a strategic adviser for the Kuali Foundation. “We’re not going to hide any difficulty that we run into. That’s the way we operate. It’s definitely a rich environment for people who want to chuck hand grenades. Hey, have a shot — we’re wide open.” [snip]

Walsh, who has been dubbed the “father of Kuali,” issued that proclamation after a back-and-forth with higher education consultant Phil Hill, who during an early morning session asked the Kuali leadership to clarify which parts of the company’s software would remain open source.

While the article describes the communication and pushback issues with Kuali’s creation of a for-profit entity quite well (go read the whole article), I think it’s worth digging into what Carl generously describes as a “back-and-forth”. What happened was that there was a slide describing the relicensing of Kuali code as AGPL, and the last bullet caught my attention:

  • AGPL > GPL & ECL for SaaS
  • Full versions always downloadable by customers
  • Only feature “held back” is multi-tenant framework

If you need a read on the change of open source licenses and why this issue is leading to some of the pushback, go read Chuck Severance’s blog post.

Does ‘held back’ mean that the multi-tenant framework to enable cloud hosting partially existed but is not moving to AGPL, or does it mean that the framework would be AGPL but not downloadable by customers, or does it mean that the framework is not open source? That was the basis of my question.

Several Kuali Foundation representatives attempted to indirectly answer the question without addressing the license.

“I’ll be very blunt here,” Walsh said. “It’s a commercial protection — that’s all it is.”

The back-and-forth involved trying to get a clear answer, and the answer is that the multi-tenant framework to be developed / owned by KualiCo will not be open source – it will be proprietary code. I asked Joel Dehlin for additional context after the session, and he explained that all Kuali functionality will be open source, but the infrastructure to allow cloud hosting is not open source.

This is a significant clarification on the future model. While Kuali has always supported an ecosystem with commercial partners that can offer proprietary code, this is the first time that Kuali itself will offer proprietary, non-open-source code.[1]

What is not clear is whether any of the “multi-tenant framework” already exists and will be converted to a proprietary license or if all of this code will be created by KualiCo from the ground up. If anyone knows the answer, let me know in the comments.

From IHE:

“Unfortunately some of what we’re hearing is out of a misunderstanding or miscommunication on our part,” said Eric Denna, vice president of IT and chief information officer at the University of Maryland at College Park. “Brad [Wheeler, chair of the foundation’s board of directors,] and I routinely are on the phone saying, ‘You know, we have day jobs.’ We weren’t hired to be communications officers.”

Suggestion: Simple answers such as “What ‘held back’ means is that the framework will be owned by KualiCo and not open source and therefore not downloadable” would avoid some of the perceived need for communication officers.

  1. Kuali Foundation is partial owner and investor in KualiCo.

The post Kuali Foundation: Clarification on future proprietary code appeared first on e-Literate.

Day 3 at Oracle Open World 2014 - Cloud: Private or Public?

Yann Neuhaus - Thu, 2014-10-02 07:16

One of the main subjects of this year's Oracle OpenWorld was the Cloud. In this post I will share some thoughts on this: is the Cloud a dream, a reality, fog, or smoke?

 

Before

Before going to OOW 2014 I did not have a fixed position about the Cloud. I had some questions, for instance about security, like other people I guess. I attended several sessions and started to write this blog two days ago to summarize my reflections. Fortunately I was in a session on Wednesday where the discussion was "Private Cloud? Public Cloud?" and after that I had to update this post, in a good way.

 

Now

Now as we can choose between public, private or even mixed Cloud the sky is less .... cloudy.

I am a bit more convinced. I would use the Public Cloud for the development environment, as this reduces the implementation time and gives flexibility to the developers. Why not also for training, as we do not have to use production data? For QA, clone, and production I would rather use the Private Cloud for the time being. In both cases we will benefit from great agility, low expenses, better utilization of resources, and so on.

But there are still some questions:

  • Although we can install the Private Cloud like the Public one, what will be the impact on the budget?

  • What will be the impact on our day to day work?

  • What will be our role as an integrator working more on the infrastructure layer?

  • Do we have a view of, and can we have valuable discussions with, the people who manage our system in the Public Cloud in case we hit issues? Can a good relationship be built?

  • Today we increase our skills while working on the on-premises installation, also in Dev. During the integration phase we sometimes hit issues that have to be solved, which of course avoids having the same problems later in the other environments. How will this work if we use a mixed environment, Public for Dev and Private for Prod?

  • Who will do the load&stress tests in case the Public Cloud is used?

  • In a validated system we have to follow strict procedures (e.g. GxP). How is this managed in the Public Cloud? Are we still compliant? Are the people managing the Public Cloud trained on the company's procedures? The last session confirmed that in that case we have to use the Private Cloud.

  • Besides the regulatory rules, what will be the acceptance in the different countries? Will it be the same in Europe as in the USA? Do we have the same culture about this subject?

  • Another question, not related to the technology: how should the future young IT people be positioned? What kind of training should they follow? Is this already in place in the universities? Are the teachers aware of those changes?

I've got lots of information on the technical side but I am also interested on the social and human point of view. It would be interesting to have the view(s) from a sociologist or philosopher as this new approach will certainly have an impact on our life - like the Big Data will have.

Probably I missed some information in all this flow and I don't have all the keys. But I think we are at the beginning of a revolution? An evolution? Or an opportunity.

Let's see what will happen in the next few years, and think about it in the meantime...

Free Oracle Special Edition eBook - Enterprise Mobility

Oracle Mobile is about simplifying enterprise mobility, giving customers a complete mobile solution and the choice and flexibility to develop their own unique enterprise mobile strategy. Whether you...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Index Compression Part V: 12c Advanced Index Compression (Little Wonder)

Richard Foote - Thu, 2014-10-02 04:28
I’ve finally managed to find some free time in the evening to write a new blog piece :) This will have to be the record for the longest time between parts in a series, having written Part IV of this Index Compression series way way back in February 2008 !! Here are the links to the previous articles […]
Categories: DBA Blogs

Free Formspider Webinar (an APEX Alternative)

Gerger Consulting - Thu, 2014-10-02 04:11
We'd like to invite you to our free Formspider webinar on October 7th. In the webinar, you'll get a chance to find out how Formspider can help you build great web applications with PL/SQL and why we think Formspider is better than APEX.

More importantly, current Formspider customers will join the webinar and talk about their experience with the product.

The webinar is free but space is limited. Sign up now.
Categories: Development

Oracle OEM Cloud Control 12c upgrade to 12.1.0.4

Yann Neuhaus - Thu, 2014-10-02 02:12

In this blog post, I will describe how to upgrade from Oracle Enterprise Manager Cloud Control 12.1.0.3 to OEM 12.1.0.4.0. I have already described the main new features of the Cloud Control 12.1.0.4 version in an earlier post (Oracle OEM Cloud Control 12.1.0.4 - the new features). The first prerequisite is to apply patch 11061801 on the repository database (in version 11.2.0.3), using the classical opatch apply method. Then we can begin the upgrade phase.

First, we should explicitly stop the OMS jvmd and adp engines:

 

oracle@vmtestoraem12c:/home/oracle/ [oms12c] emctl extended oms jvmd stop -all

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.

Please enter the SYSMAN password:

Stopping all Engines
{}

No engines found for this operation

oracle@vmtestoraem12c:/home/oracle/ [oms12c] emctl extended oms adp stop -a

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.

No valid registry entry found for verb jv

 

Then we stop the OMS:

 

oracle@vmtestoraem12c:/home/oracle/ [oms12c] emctl stop oms -all

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation.

All rights reserved.

Stopping WebTier...

WebTier Successfully Stopped

Stopping Oracle Management Server...

Oracle Management Server Successfully Stopped

AdminServer Successfully Stopped

Oracle Management Server is Down

 

We stop the management agent:

 

oracle@vmtestoraem12c:/home/oracle/ [agent12c] emctl stop agent

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation.

All rights reserved.

Stopping agent ..... stopped.


Finally after unzipping the 12.1.0.4 binary files, we can run the installer:

 

cc1

 

We choose not to receive security updates:

 

cc2

cc3

 

 

We choose to skip the updates:

 

cc4

 

All the prerequisite checks have succeeded :-)

 

cc5

 

We select a One System Upgrade and the Oracle_Home where the 12.1.0.3 version is installed:

 

cc7

 

We select the new Middleware Home:

 

cc8

 

We enter the administration passwords:

 

cc9

 

The installer reminds us to verify that the repository database has been correctly patched. Let's check that it is:

 

Interim patches (1) :

Patch 11061801 : applied on Mon Aug 04 16:52:51 CEST 2014

Unique Patch ID: 16493357

Created on 24 Jun 2013, 23:28:20 hrs PST8PDT

Bugs fixed: 11061801

 

cc10

 

We did not copy the emkey to the repository, so we have to run:

 

oracle@vmtestoraem12c:/u01/app/oracle/MiddleWare_12103/oms/bin/ [oms12c] emctl config emkey -copy_to_repos_from_file -repos_conndesc '"(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=vmtestoraem12c.it.dbi-services.com)(PORT=1521)))(CONNECT_DATA=(SID=OMSREP)))"' -repos_user sysman -emkey_file /u01/app/oracle/MiddleWare_12103/oms/sysman/config/emkey.ora

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation.

All rights reserved.

Enter Admin User's Password :

Enter Enterprise Manager Root (SYSMAN) Password :

The EMKey has been copied to the Management Repository.

This operation will cause the EMKey to become unsecure.

 

After the required operation has been completed, secure the EMKey by running "emctl config emkey -remove_from_repos":

 

cc11

 

We select Yes to let the installer fix the issue automatically:

 

cc12

 

We select Next:

 

cc13

 

We can select additional plugins:

 

cc14

 

We enter the weblogic password:

 

cc15

 

We select install:

 

cc16

 

And finally we run the allroot.sh script connected as root:

 

cc17

 

The upgrade is successful! Let's check the OMS status:

 

oracle@vmtestoraem12c:/u01/app/oracle/MiddleWare_12cR4/oms/ [oms12c] emctl status oms -details

Oracle Enterprise Manager Cloud Control 12c Release 4

Copyright (c) 1996, 2014 Oracle Corporation.

All rights reserved.

Enter Enterprise Manager Root (SYSMAN) Password :

Console Server Host : vmtestoraem12c.it.dbi-services.com

HTTP Console Port : 7789

HTTPS Console Port : 7801

HTTP Upload Port : 4890

HTTPS Upload Port : 4901

EM Instance Home : /u01/app/oracle/gc_inst/em/EMGC_OMS1

OMS Log Directory Location : /u01/app/oracle/gc_inst/em/EMGC_OMS1/sysman/log

OMS is not configured with SLB or virtual hostname

Agent Upload is locked.

OMS Console is locked.

Active CA ID: 1

Console URL: https://vmtestoraem12c.it.dbi-services.com:7801/em

Upload URL: https://vmtestoraem12c.it.dbi-services.com:4901/empbs/upload

WLS Domain Information

Domain Name : GCDomain

Admin Server Host : vmtestoraem12c.it.dbi-services.com

Admin Server HTTPS Port: 7102

Admin Server is RUNNING

Oracle Management Server Information

Managed Server Instance Name: EMGC_OMS1

Oracle Management Server Instance Host: vmtestoraem12c.it.dbi-services.com

WebTier is Up

Oracle Management Server is Up

BI Publisher is not configured to run on this host.

 

Now we have access to the Enterprise Manager Cloud Control 12.1.0.4:

 

cc18

 

The next step consists of upgrading the management agents. From the Setup menu, we select Upgrade Agents:

 

cc19

cc20

 

The management agent is detected:

 

cc21

 

The operation is successful:

 

cc22

 

The upgrade to the 12.1.0.4 Enterprise Manager version did not cause any problems, and the new version adds a check that the Enterprise Manager repository database is correctly patched.

Run #em12c on #db12c? – Discussion from Oracle Open World (MOS Note: 1920632.1)

DBASolved - Wed, 2014-10-01 13:38

Ok folks, I've been here at Oracle Open World for a few days now.  In that time, I've had numerous conversations about running Oracle Enterprise Manager 12c on Oracle Database 12c.  I will be honest and say that I've enjoyed these conversations; however, after about the fourth time I decided I needed to write a quick post on the explanation discussed in these conversations.

Earlier this year (August) I wrote a post about what came out of the OEM CAB in May 2014 and how to get OEM 12c to work on DB12c.  The concept of running OEM 12c on DB12c, pluggable or not, has many people excited and looking forward to configuring OEM in that very way.  Heck, I've even installed it for a few customers in that configuration (non-PDB).  So I'm a bit sad to have to say this:  ORACLE DATABASE 12c SUPPORT FOR THE REPOSITORY DATABASE IS TEMPORARILY SUSPENDED!  I say this due to My Oracle Support (MOS) Note: 1920632.1.

Note 1920632.1 states the following:

Due to some recently discovered scenarios, we (Oracle) are temporarily suspending the certification of DB 12.1.0.1 and DB 12.1.0.2 as a Certified Repository version for EM 12c R4 until additional testing is complete.

Now what does this mean for those customers and clients that have already built their OEM 12c repository on DB 12c (12.1.0.1 or 12.1.0.2)?  The MOS note outlines what to do in the action section of the note:

Until testing is complete on the 12c Database, Oracle recommends using DB 11.2.0.4 as the EM 12c R4 Repository. 

If you are currently running a 12c DB repository, please be aware that additional testing is underway and there are currently no bugs or patches required; but if testing proves a patch is required, we will update this announcement.  You do not need to deinstall EM 12c or move the repository to an 11.2.0.4 database.

I sure hope Oracle quickly finishes testing and can restore support for DB 12c as the OEM repository.  In the meantime, everyone should know about this note number and be aware of it when making architecture changes related to their OEM 12c environment.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

OCP 12C – In Database Archiving and Temporal Validity

DBA Scripts and Articles - Wed, 2014-10-01 13:14

In Database Archiving In Database Archiving is a new feature of Oracle 12c meant to solve the management of historical data inside the database. As its name says, this functionality leaves data in the database, so it remains accessible if you need it. With In Database Archiving, historical data remains in the database but is invisible to [...]
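A minimal sketch of how the feature is used, assuming a hypothetical ORDERS table (the names and dates are made up for illustration):

```sql
-- Enable row archival: adds a hidden ORA_ARCHIVE_STATE column ('0' = active)
ALTER TABLE orders ROW ARCHIVAL;

-- Archive old rows: they stay in the table but disappear from normal queries
UPDATE orders SET ora_archive_state = '1'
WHERE  order_date < DATE '2010-01-01';

-- Make archived rows visible again for this session
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;
```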

The post OCP 12C – In Database Archiving and Temporal Validity appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

OOW14 Day 3 - 12c Adaptive Bitmap Pruning

Yann Neuhaus - Wed, 2014-10-01 13:10

I'm currently following the session 'Real-World Performance of Star and Snowflake Schemas' with Michael Hallas, Andrew Holdsworth, and John Clarke. It's really a good presentation: the Real-World Performance team tries to spread the message about how to design for performance. But today I'll not blog about what I've seen, but about what I've not seen. Everybody talks about those nice 12c features that are useful in BI workloads, such as adaptive joins, adaptive parallel distribution, vector 'in-memory' aggregation, etc. Nobody talks about Adaptive Bitmap Pruning.

If you google for it, there is only one result (at least before my blog is published), which is the patent covering all those new CBO features that came in 12.1.0.1.

And when I attend a session that shows star transformation and execution plans with and without the temporary table, I'm frustrated that I don't see anything about that great feature that stabilizes our BI reports on star schemas. I'm preparing our tuning workshop (dates here - 10% discount if you book before the end of the year) and it's something I can't skip when talking about star transformation and bitmap indexes.

So let's have a look at it. If you want a refresher on star transformation, please just wait for the next SOUG newsletter. But if you're already familiar with it, this is for you.

Let's have a look at an execution plan in 12c after running a query on a star schema. It is the same as in 11g, except that we have that grayed ‘STATISTICS COLLECTOR’. Star transformation is good when the predicate is selective enough to filter out all but a few rows. Imagine that the cardinality estimation was wrong and most of the FACT rows have the required value. This is what happened here, and the optimizer chose to stop iterating in that bitmap branch. It just ignores the predicate at that step; the join back to the dimension Cartesian join will filter it anyway.

If you check the execution plan with predicates, you will see the predicate on the dimension in the two table accesses.

12cAdaptiveBitmapPruning.png

Look at the end: when the statistics collector sees that the threshold has been passed, it decides to skip that bitmap branch. This is Adaptive Bitmap Pruning. The bitmap branch is good only if it helps to filter a lot of rows. If that's not the case, then it's just overhead and it is skipped, falling back - for that branch only - to the behavior we have when star transformation is disabled.

As with the other adaptive plans, the threshold is calculated at parse time.
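To look at this yourself, the full adaptive plan, including the inactive (pruned) branch, can be displayed with DBMS_XPLAN; a sketch, run right after executing the star query in the same session:

```sql
-- The '+ADAPTIVE' format modifier shows the complete adaptive plan;
-- plan lines that were pruned at run time are marked with a '-'
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => '+ADAPTIVE'));
```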

See more details about the inflection point in a previous blog about Adaptive Joins

 

OCP 12C – Information Lifecycle Management and Storage Enhancements

DBA Scripts and Articles - Wed, 2014-10-01 10:16

Information Lifecycle Management Before reading this flashcard I recommend you read my article on ILM. Automatic Data Optimization (ADO) provides policy-based data management. It lets you define policies at: tablespace level, segment level, row level. Policies can either compress data or move data to a different tablespace. Policies let you define when, what and [...]
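For illustration, two hedged examples of what such ADO policies look like (the table and tablespace names are invented):

```sql
-- Segment-level compression policy:
-- compress the segment after 90 days without modification
ALTER TABLE sales ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 90 DAYS OF NO MODIFICATION;

-- Storage-tiering policy:
-- move the segment to a cheaper tablespace when needed
ALTER TABLE sales ILM ADD POLICY
  TIER TO archive_ts;
```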

The post OCP 12C – Information Lifecycle Management and Storage Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Join me in a FREE live webcast about Real-Time Query!

The Oracle Instructor - Wed, 2014-10-01 08:50

On Thursday, 2nd October, 12:30 CET I will be doing a Live Webcast with many demonstrations about Data Guard Real-Time Query.

All the features shown already work with 11g.

Register here.

Addendum: The webcast was done already.


Tagged: Active Data Guard, Data Guard, OU Streams
Categories: DBA Blogs

Oracle APEX 5 Update from OOW

Scott Spendolini - Wed, 2014-10-01 08:18
The big news about Oracle APEX from OOW is not so much about what, but more about when.  Much to many people's disappointment, APEX 5.0 is still going to be a few months out.  The "official" release date has been updated from "calendar year 2014" to "fiscal year 2015".  For those not in the know, Oracle's fiscal year ends on May 31st, so that date represents the new high-water mark.

Despite this bit of bad news, there were a number of bits of good news as well.  First of all, there will be an EA3.  This is good because it demonstrates that the team has been hard at work fixing bugs and adding features.  Based on the live demonstrations that were presented, there are some subtle and some not-so-subtle things to look forward to.  The subtle ones include an even more refined UI, complete with smooth fade-through transitions.  I tweeted about the not-so-subtle ones the other day, but to recap here: pivot functionality in IRs, and column toggle and reflow in jQuery Mobile.

After (or right before - it wasn't 100% clear) EA3 is released, the Oracle APEX team will host their first public beta program.  This will enable select customers to download and install APEX 5.0 on their own hardware.  This is an extraordinary and much-needed positive change in their release cycle, as for the first time, customers can upgrade their actual applications in their own environments and see what implications APEX 5.0 will bring.  Doing a real-world upgrade on actual APEX applications is something that the EA instances could never even come close to pulling off.

After the public beta, Oracle will upgrade their internal systems to APEX 5.0 - and there are a lot of those.  At last count, I think the number of workspaces was just north of 3,000.  After the internal upgrade, apex.oracle.com will have its turn.  And once that is complete, we can expect APEX 5.0 to be released.

No one likes delays.  But in this case, it seems that the extra time required is quite justified, as APEX 5.0 still needs some work, and the upgrade path from 4.x needs to be nothing short of rock-solid.  Keep in mind that with each release, there is a larger number of customers using a larger number of applications, so ensuring that their upgrade experience is as smooth as possible is just as, if not more, important than any new functionality.

In the meantime, keep kicking the tires on the EA instance and provide feedback and bug reports!

Shrink Tablespace

Jonathan Lewis - Wed, 2014-10-01 07:55

In a comment on my previous post on shrinking tablespaces Jason Bucata and Karsten Spang both reported problems with small objects that didn’t move to the start of the tablespace. This behaviour is inevitable with dictionary managed tablespaces (regardless of the size of the object), but I don’t think it’s likely to happen with locally managed tablespaces if they’ve been defined with uniform extent sizes. Jason’s comment made me realise, though, that I’d overlooked a feature of system allocated tablespaces that made it much harder to move objects towards the start of file. I’ve created a little demo to illustrate the point.

I created a new tablespace as locally managed, ASSM, and auto-allocate, then created a few tables of various sizes. The following minimal SQL query reports the resulting extents in block_id order, adding a "boundary_1m" column which subtracts 128 blocks (1MB) from the block_id, then divides by 128 and truncates, to show which "User Megabyte" of the file the extent starts in.  (Older versions of Oracle typically have an 8-block space management header; recent versions expanded this from 64KB to 1MB - possibly as a little performance aid to Exadata.)


select
        segment_name, block_id, blocks, trunc((block_id - 128)/128) boundary_1M
from
        dba_extents
where
        owner = 'TEST_USER'
order by
        block_id
;

SEGMENT_NAME               BLOCK_ID     BLOCKS BOUNDARY_1M
------------------------ ---------- ---------- -----------
T1                              128       1024           0
T1                             1152       1024           8
T2                             2176       1024          16
T2                             3200       1024          24
T3                             4224          8          32
T4                             4232          8          32
T5                             4352        128          33

As you can see, t3 and t4 are small tables – 1 extent of 64KB each – and t5, which I created after t4, starts on the next 1MB boundary. This is a feature of auto-allocate: not only are extents (nearly) fixed to a small number of possible sizes, the larger extents are restricted to starting on 1MB boundaries and the 64KB extents are used preferentially to fill in odd-sized holes. To show the impact of this I'm going to drop table t1 (at the start of file) to make some space.


SEGMENT_NAME               BLOCK_ID     BLOCKS BOUNDARY_1M
------------------------ ---------- ---------- -----------
T2                             2176       1024          16
T2                             3200       1024          24
T3                             4224          8          32
T4                             4232          8          32
T5                             4352        128          33

Now I’ll move table t3 – hoping that it will move to the start of file and use up some of the space left by t1. However there’s a 1MB area (at boundary 32) which is partially used,  so t3 moves into that space rather than creating a new “partly used” megabyte.


SEGMENT_NAME               BLOCK_ID     BLOCKS BOUNDARY_1M
------------------------ ---------- ---------- -----------
T2                             2176       1024          16
T2                             3200       1024          24
T4                             4232          8          32
T3                             4240          8          32
T5                             4352        128          33

It’s a little messy trying to clear up the tiny fragments and make them do what you want. In this case you could, for example, create a dummy table with storage(initial 64K next 64K minextents 14) to use up all the space in the partly used megabyte, then move t3 – which should go to the start of file – then move table t4 – which should go into the first partly-used MB (i.e. start of file) rather than taking up the hole left by t3.
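The dummy-table trick just described might look like this (a sketch with invented object names; the storage clause is the one suggested above):

```sql
-- Soak up the fourteen 64KB slots in the partly-used megabyte
create table t_dummy (n number)
  storage (initial 64K next 64K minextents 14);

-- With no 64KB hole left, t3 should move to the start of file
alter table t3 move;

-- t4 should then drop into the partly-used MB at the start of file
alter table t4 move;

drop table t_dummy purge;
```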

Even for a trivial example it’s messy – imagine how difficult it can get to cycle through building and dropping suitable dummy tables and move objects in the right order when you’ve got objects with several small extents scattered through the file, and objects with a mixture of small extents and large extents.


OCP 12C – Backup, Recovery and Flashback for a CDB/PDB

DBA Scripts and Articles - Wed, 2014-10-01 07:43

Backup a CDB/PDB To make a database backup you need the SYSBACKUP or SYSDBA privilege. You can back up the CDB and all the PDBs independently, all together, or by specifying a list. You can back up a PDB by connecting directly to it and using: RMAN> BACKUP DATABASE; You can back up a PDB by connecting to [...]
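A few hedged examples of the syntax involved (the PDB names are hypothetical):

```sql
-- Connected to the root container: back up the whole CDB (root + every PDB)
RMAN> BACKUP DATABASE;

-- From the root: back up a list of PDBs
RMAN> BACKUP PLUGGABLE DATABASE pdb1, pdb2;

-- Connected directly to a PDB: BACKUP DATABASE backs up just that PDB
RMAN> BACKUP DATABASE;

-- Back up only the CDB root
RMAN> BACKUP DATABASE ROOT;
```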

The post OCP 12C – Backup, Recovery and Flashback for a CDB/PDB appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Solving customer issues at OOW14: Dbvisit Replicate can replicate tables without primary key

Yann Neuhaus - Wed, 2014-10-01 06:39

Usually, logical replication of changes uses the primary key. Each row updated or deleted generates a statement to be applied on the target, which affects only one row because it accesses it by the primary key. If there is no primary key, we need something else that is unique, and at worst it is the whole row. But sometimes old applications were designed before being implemented in a relational database and have no unicity. Is that a problem for logical replication? We will see that Dbvisit Replicate can address it.

Here is the case I encountered at a customer. The application has a master-detail table design, and the detail rows are inserted/deleted all together for the same master key. There is no primary key, and not even anything unique. The only value that might help is a timestamp, but sometimes timestamps do not have sufficient precision to be unique. And anyway, imagine what happens if we set the system time back, or during daylight saving changes.

At dbi services we have a very good relationship with our partner Dbvisit, and this is the kind of question that can be addressed quickly by their support. Anyway, I was at Oracle Open World and was able to discuss it directly with the Dbvisit Replicate developers. There is a solution, and it is even documented.

The basic issue is that when the delete occurs, a redo entry is generated for each row that is deleted, and Dbvisit Replicate then generates a delete statement to do the same on the target. But when there are duplicates, the first statement will affect several rows and the next statements will affect no rows.
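The effect can be sketched outside Oracle with a small SQLite demo (a hand-rolled illustration of the mechanism, not Dbvisit code; the table mirrors the TESTNOPK example below). Replaying each per-row delete as a statement that matches on all column values removes every duplicate at once, so the first replayed statement deletes all four rows and the remaining three find nothing:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table testnopk (n integer, x text)")
# four identical rows, as in the TESTNOPK example
con.executemany("insert into testnopk values (?, ?)", [(10, "x")] * 4)

# the source deleted 4 rows, so the apply side replays 4 statements,
# each matching on the full row because there is no primary key
affected = []
for _ in range(4):
    cur = con.execute("delete from testnopk where n = ? and x = ?", (10, "x"))
    affected.append(cur.rowcount)

print(affected)  # the first statement wipes all duplicates, the rest hit nothing
```

The target table ends up with the right content here, but an apply process that checks row counts sees one "too many rows" conflict followed by "no rows" conflicts, which is exactly what the demo below shows.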

This is the kind of replication complexity that is addressed with conflict resolution. It can be addressed manually: the replication stops when a conflict is detected and continues once we have decided what to do. But we can also set rules to address it automatically when the problem occurs again so that the replication never stops.

Here is the demo, as I tested it before providing the solution to my customer.

Note that it concerns only deletes here but the same can be done with updates.

1. I create a table with 4 identical rows for each value of N:

SQL> connect repoe/repoe
Connected.
SQL> create table TESTNOPK as select n,'x' x from (select rownum n from dual connect by level

Table created.

2. Status of replication from the Dbvisit console:


| Dbvisit Replicate 2.7.06.4485(MAX edition) - Evaluation License expires in 29 days

MINE IS running. Currently at plog 35 and SCN 796568 (10/01/2014 01:08:04).
APPLY IS running. Currently at plog 35 and SCN 796566 (10/01/2014 01:08:04).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ADDRESSES:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.CARD_DETAILS:           100%  Mine:1727/1727       Unrecov:0/0         Applied:1727/1727   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ORDER_ITEMS:            100%  Mine:12520/12520     Unrecov:0/0         Applied:12520/12520 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.ORDERS:                 100%  Mine:10040/10040     Unrecov:0/0         Applied:10040/10040 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.INVENTORIES:            100%  Mine:12269/12269     Unrecov:0/0         Applied:12269/12269 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.LOGON:                  100%  Mine:12831/12831     Unrecov:0/0         Applied:12831/12831 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.TESTNOPK:               100%  Mine:40/40           Unrecov:0/0         Applied:40/40       Conflicts:0/0       Last:01/10/2014 01:08:02/OK
--------------------------------------------------------------------------------------------------------------------------------------------
8 tables listed.

3. I delete the rows with value 10:


SQL> select * from TESTNOPK where n=10;
         N X
---------- -
        10 x
        10 x
        10 x
        10 x
SQL> delete from TESTNOPK where n=10;
4 rows deleted.
SQL> commit;
Commit complete.

5. The apply stops on a conflict: too many rows affected by the delete


MINE IS running. Currently at plog 35 and SCN 797519 (10/01/2014 01:10:56).
APPLY IS running. Currently at plog 35 and SCN 796928 (10/01/2014 01:09:08) and 1 apply conflicts so far (last at 01/10/2014 01:10:57) and WAITING on manual resolve of apply conflict id 35010009996.
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ADDRESSES:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.CARD_DETAILS:           100%  Mine:1727/1727       Unrecov:0/0         Applied:1727/1727   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ORDER_ITEMS:            100%  Mine:12520/12520     Unrecov:0/0         Applied:12520/12520 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.ORDERS:                 100%  Mine:10040/10040     Unrecov:0/0         Applied:10040/10040 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.INVENTORIES:            100%  Mine:12269/12269     Unrecov:0/0         Applied:12269/12269 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.LOGON:                  100%  Mine:12831/12831     Unrecov:0/0         Applied:12831/12831 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.TESTNOPK:                90%  Mine:44/44           Unrecov:0/0         Applied:40/40       Conflicts:1/1       Last:01/10/2014 01:09:17/RETRY:Command affected 4 row(s).
--------------------------------------------------------------------------------------------------------------------------------------------
8 tables listed.
dbvrep> list conflict
Information for conflict 35010009996 (current conflict):
Table: REPOE.TESTNOPK at transaction 0008.003.0000022b at SCN 796930
SQL text (with replaced bind values): delete from "REPOE"."TESTNOPK" where (1=1) and "N" = 10 and "X" = 'x'
Error: Command affected 4 row(s).
Handled as: PAUSE
Conflict repeated 22 times.

6. I resolve the conflict manually, forcing the delete of all rows

dbvrep> resolve conflict 35010009996 as force
Conflict resolution set.

At that point, there are 3 following conflicts that I need to force as well, because the other deletes affect no rows. I don't reproduce them here.

7. Once the conflicts are resolved, the replication continues:

MINE IS running. Currently at plog 35 and SCN 800189 (10/01/2014 01:19:16).
APPLY IS running. Currently at plog 35 and SCN 800172 (10/01/2014 01:19:14).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ADDRESSES:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.CARD_DETAILS:           100%  Mine:1727/1727       Unrecov:0/0         Applied:1727/1727   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ORDER_ITEMS:            100%  Mine:12520/12520     Unrecov:0/0         Applied:12520/12520 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.ORDERS:                 100%  Mine:10040/10040     Unrecov:0/0         Applied:10040/10040 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.INVENTORIES:            100%  Mine:12269/12269     Unrecov:0/0         Applied:12269/12269 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.LOGON:                  100%  Mine:12831/12831     Unrecov:0/0         Applied:12831/12831 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.TESTNOPK:               100%  Mine:44/44           Unrecov:0/0         Applied:44/44       Conflicts:4/4       Last:01/10/2014 01:18:21/RETRY:Command affected 0 row(s).
--------------------------------------------------------------------------------------------------------------------------------------------
8 tables listed.

dbvrep> list conflict
Information for conflict 0 (current conflict):
No conflict with id 0 found.

8. Now I want to set a rule that manages that situation automatically. I add a 'too many rows' conflict rule so that each delete touches only one row:


dbvrep> SET_CONFLICT_HANDLERS FOR TABLE REPOE.TESTNOPK FOR DELETE ON TOO_MANY TO SQL s/$/ and rownum = 1/
Connecting to running apply: [The table called REPOE.TESTNOPK on source is handled on apply (APPLY) as follows:
UPDATE (error): handler: RETRY logging: LOG
UPDATE (no_data): handler: RETRY logging: LOG
UPDATE (too_many): handler: RETRY logging: LOG
DELETE (error): handler: RETRY logging: LOG
DELETE (no_data): handler: RETRY logging: LOG
DELETE (too_many): handler: SQL logging: LOG, regular expression: s/$/ and rownum = 1/
INSERT (error): handler: RETRY logging: LOG
TRANSACTION (error): handler: RETRY logging: LOG]

9. Now testing the automatic conflict resolution:

SQL> delete from TESTNOPK where n=9;
4 rows deleted.
SQL> commit;
Commit complete.
10. The conflicts are automatically managed:

MINE IS running. Currently at plog 35 and SCN 800475 (10/01/2014 01:20:08).
APPLY IS running. Currently at plog 35 and SCN 800473 (10/01/2014 01:20:08).
Progress of replication dbvrep_XE:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
REPOE.CUSTOMERS:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ADDRESSES:              100%  Mine:1864/1864       Unrecov:0/0         Applied:1864/1864   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.CARD_DETAILS:           100%  Mine:1727/1727       Unrecov:0/0         Applied:1727/1727   Conflicts:0/0       Last:30/09/2014 02:38:30/OK
REPOE.ORDER_ITEMS:            100%  Mine:12520/12520     Unrecov:0/0         Applied:12520/12520 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.ORDERS:                 100%  Mine:10040/10040     Unrecov:0/0         Applied:10040/10040 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.INVENTORIES:            100%  Mine:12269/12269     Unrecov:0/0         Applied:12269/12269 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.LOGON:                  100%  Mine:12831/12831     Unrecov:0/0         Applied:12831/12831 Conflicts:0/0       Last:30/09/2014 02:38:35/OK
REPOE.TESTNOPK:               100%  Mine:48/48           Unrecov:0/0         Applied:48/48       Conflicts:7/7       Last:01/10/2014 01:19:57/OK
--------------------------------------------------------------------------------------------------------------------------------------------
8 tables listed.

Now the replication is automatic and the situation is correctly managed.
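The idea behind the "and rownum = 1" handler can be sketched with the same SQLite setup as before (again an illustration of the principle, not Dbvisit's implementation; SQLite has no rownum, so the sketch restricts each delete to a single rowid instead). Each replayed delete now removes exactly one matching row, so four statements dispose of four duplicates and none of them raises a conflict:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table testnopk (n integer, x text)")
# four identical rows, as in the TESTNOPK example
con.executemany("insert into testnopk values (?, ?)", [(9, "x")] * 4)

# analogue of appending "and rownum = 1": limit each replayed
# delete to one matching row (via rowid, SQLite's row identifier)
sql = """delete from testnopk
         where rowid = (select rowid from testnopk
                        where n = ? and x = ? limit 1)"""
affected = [con.execute(sql, (9, "x")).rowcount for _ in range(4)]

print(affected)  # each statement removes exactly one duplicate
```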



As I already said, Dbvisit Replicate is a simple tool but is nevertheless very powerful. And Oracle Open World is an efficient way to learn: share knowledge during the day, and test it during the night when you are too jetlagged to sleep...