Feed aggregator

OTN Appreciation Day : Transportable tablespaces

Yann Neuhaus - Tue, 2016-10-11 01:15

Tim Hall had the idea that as many people as possible would write a small blog post about their favorite Oracle feature and that we would all post them on the same day. Here is my favorite feature, the one I described at the “EOUC Database ACES Share Their Favorite Database Things“ session: Transportable Tablespaces, which appeared in Oracle 8.1.5.

ROWID

I’ll start with a change that came between Oracle 7 and Oracle 8. The ROWID, which identifies the physical location of a row within a database (with file ID, block offset, and row directory number), changed to identify the location within a tablespace only. The format did not change, but the file ID became a relative file number instead of an absolute file number.
Here is the idea:
CaptureTTS-ROWID

Actually, to be able to migrate without visiting each block (the ROWID is present in all blocks, all redo vectors, etc.), they kept the same number, but that number is now unique only within a tablespace. The first goal was to hold more datafiles per database (Oracle 8 introduced the VLDB – Very Large Database – concepts). The limit of 255 datafiles per database became a limit of 255 datafiles per tablespace. So the numbers start the same as before but can go further.

This change was simple because any time you fetch a row by its ROWID you know which table you are querying, so you know the tablespace. The exception is when the ROWID comes from a global index on a partitioned table, and for this case Oracle 8 introduced the extended ROWID, which contains additional bytes to identify the segment by its DATA_OBJECT_ID.
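If you want to see those components for yourself, the DBMS_ROWID package can decode a ROWID into its data object id, relative file number, block number and row number. Here is a minimal sketch (SCOTT.EMP is only a placeholder for any heap table you can query):

select dbms_rowid.rowid_object(rowid)        data_object_id,
       dbms_rowid.rowid_relative_fno(rowid)  relative_fno,
       dbms_rowid.rowid_block_number(rowid)  block_number,
       dbms_rowid.rowid_row_number(rowid)    row_no
from   scott.emp
where  rownum = 1;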

By the way, this makes tablespaces more independent of the database that contains them, because all row addressing is relative.

Locally Managed Tablespaces

Another change in 8i was Locally Managed Tablespaces. Before, the space management of tablespaces was centralized in the database dictionary. Now it is decentralized into each tablespace: what was stored in the UET$ system table is now managed as a bitmap in the header of each datafile.

Pluggable tablespaces

The original name of transportable tablespaces was “pluggable tablespaces”. Because they are now more self-contained, you can detach them from a database and attach them to another database, without changing their content. This means that the data is moved physically, which is faster than the selects/inserts that are behind a logical export/import. There are only two things that do not come with the datafiles.

Open transactions store their undo in the database UNDO tablespace. This means that if you detach a user tablespace, you don’t have the information to roll back the ongoing transactions when you re-attach it elsewhere. For this reason, this ‘detach’ is possible only when there are no ongoing transactions: you have to put the tablespace READ ONLY.

The user object metadata is stored in the database dictionary. Without it, the datafiles are just a bunch of bytes. You need the metadata to know what is a table or an index, and which one. So, with transportable tablespaces, a logical export/import remains for the metadata only. This was done with exp/imp when TTS was introduced and is now done with DataPump. Small metadata is moved logically. Large data is moved physically.
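To give an idea of the mechanics, here is a minimal sketch of such a transport with Data Pump (the tablespace name USERS, the DATA_PUMP_DIR directory and the datafile path are only placeholders, and details like the self-containment check and cross-platform conversion are left out):

-- on the source: freeze the tablespace and export its metadata only
alter tablespace users read only;

$ expdp system dumpfile=tts_users.dmp directory=DATA_PUMP_DIR transport_tablespaces=USERS

-- copy the datafile(s) and the dump file to the target, then plug the metadata in
$ impdp system dumpfile=tts_users.dmp directory=DATA_PUMP_DIR transport_datafiles='/u01/oradata/TARGET/users01.dbf'

-- and re-open the tablespace for writes on the target
alter tablespace users read write;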

CaptureTTS-EXP

Transportable tablespaces

TTS is faster than a simple DataPump export/import because the data is moved physically by moving the datafiles. TTS is more flexible than an RMAN duplicate because you can easily move a subset of a database. Because the metadata is still transported logically, and datafiles are compatible with newer versions, TTS can be done cross-version, which makes it a nice way to migrate and upgrade. It is also used for tablespace point-in-time recovery, where you have to recover to an auxiliary instance and then transport the tablespace to the target.
TTS is also used to move data quickly from an operational database to a data warehouse ODS.
It is also a good way to publish and share a database in read-only mode, on a DVD for example.

And beyond

Apart from the move to DataPump for the metadata transfer, TTS did not change a lot until 12c. In 12.1 you have the full transportable export/import, which automates the operations when you want to move a whole database. This can be used to migrate from a non-CDB to the multitenant architecture.
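As a rough sketch, a full transportable export is just a Data Pump full export with the transportable option (file names are placeholders; the version parameter is only needed when the source is a pre-12c release such as 11.2.0.3 or 11.2.0.4):

$ expdp system full=y transportable=always version=12 dumpfile=full_tts.dmp directory=DATA_PUMP_DIR

On the target, the corresponding impdp with transport_datafiles plugs the copied datafiles into the new (possibly pluggable) database.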

With multitenant, pluggable databases are an extension of TTS. Because the user metadata comes with the PDB system tablespaces, you don't need to export it logically anymore: you transport the whole PDB. That's the first restriction relieved. The second restriction, the need for read only, will be relieved as well when UNDO becomes local to the PDB, and I don't think I disclose any secret when I say that local UNDO has been announced for 12.2.

OTN Appreciation Day

This was my contribution to the “EOUC Database ACES Share Their Favorite Database Things” session at Oracle Open World 2016, organized by Debra Lilley. Tim Hall's idea of “OTN Appreciation Day” comes from that. You still have time to contribute to this day. No need for long posts – I always write a bit more than what I plan to. The “rules” for this day are described on oracle-base.com

 

This article OTN Appreciation Day : Transportable tablespaces appeared first on Blog dbi services.

OTN Appreciation Day: Oracle Data Integrator 12c - Flexibility

Rittman Mead Consulting - Mon, 2016-10-10 22:49

As you may already know by now, it’s OTN Appreciation Day! The idea was thought up by Oracle ACE Director Tim Hall to give thanks to the Oracle Technical Network that we all love and use on a daily basis (see #ThanksOTN on Twitter).

The focus for these blog posts is not only to give thanks to OTN, but also to generate some interesting conversation around different features of various Oracle products. One of my favorite features of Oracle Data Integrator 12c is its flexibility. This isn’t necessarily a single feature of the product, but more of an overall assessment of ODI as a whole. Let me break down a few of the features that make ODI flexible and easy to adapt. Note: I actually mentioned a couple of these in my previous blog post, “Oracle Data Integrator 12c: Getting Started - What is ODI?”, but why not point them out again?

Knowledge Modules

The main goal of Oracle Data Integrator is to develop mappings that use one or more source datastores to load one or more target datastores. These mappings can have many different components that join, filter, aggregate or transform the data before it reaches its final destination. Logically, this is completed in ODI by dragging lines between boxes and configuring properties for each component.

The physical implementation of a mapping is set to use one or more Knowledge Modules to generate the code or objects required for specific loading and integration types. These KMs are code templates that use the ODI Substitution API to fill in the mapping metadata based on the logical implementation. The great part is that these KMs can be easily customized and modified to fit your data integration needs! Your environment doesn’t have to conform to the tool, you get to make the tool conform to your environment.

"But wait!", says the current ODI developers. “We can’t modify Component Knowledge Modules.” Yes, that’s a true statement. The current ODI 12c version has two types of KMs: Template and Component. The former have been in the product since inception and are easily customizable while the latter are more of a black box. But, as announced at Oracle OpenWorld last month, a brand new, highly extensible Knowledge Module framework, and the ability to edit all KMs, is coming soon!

Procedures

If Knowledge Module customization doesn’t give you the flexibility necessary to perform specific actions in your data integration project, we also have ODI Procedures. These objects are used to execute a specific set of code for any sort of task. With so many different technologies accessible via ODI, you can write nearly any bit of code to execute. Procedures are often used for exception handling, file processing, and all types of scripting via Jython or Groovy. These objects just add to the ultimate flexibility of Oracle Data Integrator.

Java API

Last, but definitely not least, we have the ODI SDK Java API. Using the SDK, we can perform nearly every action that can be completed via ODI Studio. That’s a huge amount of flexibility! Now if a batch creation of objects or a bulk change is necessary, I can write a few lines of Groovy code, execute the script, and it’s all completed in seconds rather than days of manual work. Take a look at a couple of examples I’ve posted in the past, adding columns to a set of ODI Datastores and creating Interfaces (in ODI 11g) based on a SQL query. Any chance I get, I try to use the ODI SDK to make my development life just that much easier.

There you have it, my favorite feature of Oracle Data Integrator 12c - flexibility. I hope you all have a great OTN Appreciation Day and thanks for reading! And of course, #ThanksOTN!

Categories: BI & Warehousing

OTN Appreciation Day : Undo and Redo

Hemant K Chitale - Mon, 2016-10-10 20:06
On OTN Appreciation Day, let me say I like the Undo and Redo features of Oracle.  I name them together as they work together.

Undo also supports MultiVersionReadConsistency -- a great advantage of Oracle.

Redo, with Archive Logging, also supports Online Backups -- an absolute necessity.

These features have been around for almost 30 years now.

Here are some Quick and Rough Notes on Undo and Redo.
Categories: DBA Blogs

Undo and Redo

Hemant K Chitale - Mon, 2016-10-10 20:04
Quick and Rough Notes :


Undo and Redo


Undo is where Oracle logs how to reverse a transaction (one or more DMLs in a transaction)

Redo is where Oracle logs how to replay a transaction

Undo and Redo are written to as the transaction proceeds, not merely at the end of the transaction
(imagine a transaction that consists of 1 million single-row inserts: each distinct insert is written to undo and redo)
Undo segments
Oracle dynamically creates and drops Undo segments depending on transaction volume
An undo segment consists of multiple extents. As a transaction grows beyond the current extent, a new extent may be allocated
One undo segment can support multiple transactions but a transaction cannot span multiple undo segments
After COMMIT, the undo information is retained for the duration of undo_retention (or the autotuned undo retention)
At the end of the retention period the undo can be discarded and the extent is marked expired
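You can see those extents and their status with a quick query like this one (a sketch; it needs access to DBA_UNDO_EXTENTS):

select status, count(*) extents, round(sum(bytes)/1024/1024) mb
from dba_undo_extents
group by status;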

Undo retention
Oracle may autotune the undo retention
If the datafile(s) for the active undo tablespace are set to autoextend OFF, Oracle automatically uses the datafile to the fullest and ignores undo_retention
If the datafile(s) are set to autoextend ON, Oracle autotunes undo_retention to match query lengths
Check V$undostat for this information
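For example, a minimal query against V$UNDOSTAT (one row per 10-minute interval) could look like this:

select begin_time, end_time, undoblks, maxquerylen, tuned_undoretention
from v$undostat
order by begin_time;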

Undo and Read Consistency
Oracle's implementation of MultiVersionReadConsistency relies on a user session being able to read the undo generated by another session
A session may need to read the prior image of data because the data has been modified (and may even have been committed) by another session
It clones the current version of the block it is reading and applies the undo for that block to create its read consistent version
Flashback Query is supported by reading from Undo
Isolation levels (READ COMMITTED, SERIALIZABLE, READ ONLY) 
Read Consistency with READ COMMITTED is at *statement* level by default
When a session runs multiple queries, each query may read a different version of the data by default, because Read Committed consistency is enforced per statement
(This also means that if you have a PL/SQL block running the same SQL multiple times, each execution can see a different version of the data -- if the data is modified by another session between executions of the SQL!)
A session can choose to set its ISOLATION LEVEL to SERIALIZABLE, which means that every query sees the same version of the data
This works only for short-running queries with few changes to the data, or for read-only data
A SERIALIZABLE transaction can update data provided that the same data hasn't been updated and committed by another session after the transaction started (else you get ORA-08177)
READ ONLY does not allow the session to make changes
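For reference, these are the statements to change the isolation level (a sketch; each SET TRANSACTION variant must be the first statement of its transaction):

alter session set isolation_level = serializable;
set transaction isolation level serializable;
set transaction read only;
alter session set isolation_level = read committed;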

Transactions
When a transaction is in progress, it is identified by the Transaction Address, Undo segment, slot and sequence
The ITL slot in the block header contains the reference (address) to the Undo
The SCN is assigned at commit time (therefore a transaction doesn't begin with an SCN)
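A quick way to see the undo segment, slot and sequence of your current transaction is to join V$SESSION to V$TRANSACTION (a sketch; it only returns a row while the session has an open transaction):

select s.sid, t.xidusn, t.xidslot, t.xidsqn, t.used_ublk, t.used_urec
from v$session s, v$transaction t
where t.addr = s.taddr
and s.sid = sys_context('userenv','sid');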

Temp Undo
12c also allows temporary undo
Normally, changes to a GTT (Global Temporary Table) generate undo which needs to be written to undo segments
With 12c temp undo, those undo entries are also, like the actual changes, temporary and can be discarded when the commit is issued
Thus, the undo doesn't need to be written to disk (remember data in a GTT is not visible to another session, so there is no need to persist the undo)
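Temporary undo is switched on with a parameter that can be set at session or system level (a sketch; the parameter exists from 12.1 onwards):

alter session set temp_undo_enabled = true;
-- or for the whole instance:
alter system set temp_undo_enabled = true;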
Redo also captures Undo
One transaction (or multiple concurrent transactions) may have updated multiple database blocks
So, DBWR may have written some of the modified buffers to disk, even before the transaction COMMIT has been issued
This means that some of the blocks on disk may have uncommitted changes
What happens if the instance were to fail (e.g. a bug takes down a background process, or the server crashes due to an OS bug or a CPU failure)?
On instance recovery, Oracle must identify the uncommitted transactions and roll them back
But if the undo for that was only in memory and was lost on instance/server failure, how can Oracle roll back the uncommitted transaction?
Oracle knows that it must "undo" modified blocks
This is done by protecting the undo through the redo as well
Before a modified buffer is written to disk by DBWR, LGWR writes the redo for it
That redo also captures the undo
This ensures that, whenever Instance Recovery or Media Recovery is needed, the undo is also available
The rollforward process writes the undo to the undo segments
This allows Oracle to roll back the uncommitted transaction because the undo is now on disk (and not lost from memory)

Redo Strands
Redo consists of multiple strands
Since 10g, Oracle has introduced private strands for single-instance databases
This allows a process to manage its private strand of redo until it decides to commit
At commit time, the private strand is written into the public redo area and this allows LGWR to flush the redo to disk

IMU
Similarly, Oracle also manages undo "in memory" (using IMU pools)
This means that, for a short period or for small transactions, undo is managed in memory rather than through undo segments
Therefore, Oracle doesn't have to track undo segment changes in the redo
This also allows bundling the undo for multiple changes into a single redo record, instead of separate redo records

RAC
In RAC, every instance has (a) a separate Redo Thread and (b) a separate Undo Tablespace
However, the redo thread must be readable by every other instance -- as instance recovery by another (surviving) instance needs to read the redo
Similarly, the undo tablespace is readable by any other instance because queries in instance 2 may need to read the undo of instance 1 for read-consistency
Categories: DBA Blogs

Export APEX application with SQLcl

Kris Rice - Mon, 2016-10-10 19:23
APEXExport has been around a long time for exporting an application and anything else like images, feedback,websheets,.. into a file system so that they can be version controlled.  This is a must if there is ever a need to rollback or see what the application was X days ago.  This is a java program that is part of the apex distribution.  The catch for some folks is that it's a java program and

Input parameters transmission into logging procedure

Tom Kyte - Mon, 2016-10-10 18:06
Hello, Tom! I'm sure, my problem is not nontrivial one, but I've certainly stuck on it and need your help. This is it. I need to write a logging procedure that will catch exceptions from other pl-sql blocks and assembly error messages with the str...
Categories: DBA Blogs

Inconsistent results from queries involving date predicates

Tom Kyte - Mon, 2016-10-10 18:06
Hey there, I am confused to see inconsistent query results from (seemingly) same queries: <code> SELECT NVL(ROUND(SUM(TRD_BRKRG_VL)/10000000,2),0) FROM TRD_TRD_DTLS WHERE TRUNC(TRD_TRD_DT) BETWEEN '01-Apr-2014' AND '31-Mar-2015' </code>...
Categories: DBA Blogs

identify increasing amount.

Tom Kyte - Mon, 2016-10-10 18:06
Hi Tom, I have just came across a requirement, where I need to identify the increasing amount. say I have a table for quarterly expenses and I need to identify the dept which has increasing expense by 105 or more from previous quarter, and if any dep...
Categories: DBA Blogs

Passing array size for a structure dynamically in PRO*C using a select query in a function.

Tom Kyte - Mon, 2016-10-10 18:06
Hi, I have declared a C variable in declare section and trying to get its value using a select query from a function as follwing, Note: I have mentioned only relevant codelines. ---------------------------------------------------------------- #...
Categories: DBA Blogs

How to update millions of records in a table

Tom Kyte - Mon, 2016-10-10 18:06
Good Morning Tom. I need your expertise in this regard. I got a table which contains millions or records. I want to update and commit every time for so many records ( say 10,000 records). I dont want to do in one stroke as I may end up in Roll...
Categories: DBA Blogs

ODBC changes national characters to Latin equivalent

Tom Kyte - Mon, 2016-10-10 18:06
Hi all, I ran into a bit of a problem using Oracle database through ODBC from C++. The situation is as follows: - There is an Oracle database installed on a Linux OS - There is a C++ component which is communicating with that database throug...
Categories: DBA Blogs

What I Have Learned as an Oracle WebCenter Consultant in My First Three Months at Fishbowl Solutions

This post comes from Fishbowl Solutions’ Associate Software Consultant, Jake Jehlicka.

Finishing college can be an intimidating experience for many. We leave what we know behind to open the gates to brand new experiences. Those of us fortunate enough to gain immediate employment often find ourselves leaving school and plunging headfirst into an entirely new culture a mere few weeks after turning in our last exam. It is exciting, yet frightening, and what can make or break the whole experience is the new environment in which you find yourself. I consider myself one of the lucky ones.


I have been with Fishbowl Solutions for just over three months, and the experience is unlike any that I had encountered in my previous internships, work, or schooling in Duluth. I moved to the Twin Cities within a week of accepting the position. I was terrified, but my fears were very soon laid to rest. Fishbowl welcomed me with open arms, and I have learned an incredible amount in the short time that I have spent here. Here are just a few of the many aspects of Fishbowl and the skills I’ve gained since working here as an associate software consultant.

Culture

One of the things that really jumped out at me right away is how a company’s culture is a critical component to making work enjoyable and sustainable. Right from the outset, I was invited and even encouraged to take part in Fishbowl’s company activities like their summer softball team and happy hours celebrating new employees joining the team. I have seen first-hand how much these activities bring the workplace together in a way that not only makes employees happy, but makes them very approachable when it comes to questions or assistance. The culture here seems to bring everyone together in a way that is unique to Fishbowl, and the work itself sees benefits because of it.

Teamwork

Over the past three months, one thing that I have also learned is the importance of working together. I joined Fishbowl a few weeks after the other trainees in my group, and they were a bit ahead of me in the training program when I started. Not only were they ready and willing to answer any questions that I had, but they also shared the knowledge they had acquired in such a way that I was able to catch up before our training had completed. Of course, the other trainees weren't the only ones willing to lend their assistance. The team leads have always been there whenever I needed a technical question answered, or even if I just wanted advice in regard to where my own career may be heading.

People Skills

The team leads also taught me that not every skill is something that can be measured. Through my training, we were exposed to other elements outside of the expected technical skills. We were given guidance when it comes to oft-neglected soft skills such as public speaking and client interactions. These sorts of skills are utterly necessary to learn, regardless of which industry you are in. It is thanks to these that I have already had positive experiences working with our clients.

Technical Skills

As a new software consultant at Fishbowl, I have gained a plethora of knowledge about various technologies and applications, especially Oracle technologies. The training that I received has prepared me for working with technologies like Oracle WebCenter in such a way that I have been able to dive right into projects as soon as I finished. Working with actual systems was nearly a foreign concept after working with small individual projects in college, but I learned enough from my team members to be able to proceed with confidence. The training program at Fishbowl has a very well-defined structure, with an agenda laid out of what I should be working on in any given time period. A large portion of this was working directly with my own installation of the WebCenter Content server. I was responsible for setting up, configuring, and creating custom code for the servers in both Windows and Linux environments. The training program was very well documented, and I always had the tools, information, and assistance needed to complete every task.

Once the formal training ended, I was immediately assigned a customer project involving web development using Oracle’s Site Studio Designer. The training had actually covered this application and I was sufficiently prepared to tackle the new endeavor! With that said, every single day at Fishbowl is another day of education; no two projects are identical and there is always something to be learned. For example, I am currently learning Ext JS with Sencha Architect in preparation for a new project!

Although we may never know with absolute certainty what the future has in store for us, I can confidently say that the experiences, skills, knowledge that I have gained while working at Fishbowl Solutions will stay with me for the rest of my life.

Thank you to the entire Fishbowl team for everything they have done for me, and I look forward to growing alongside them!


Jake Jehlicka is an Associate Software Consultant at Fishbowl Solutions. Fishbowl Solutions was founded in 1999. Their areas of expertise include Oracle WebCenter, PTC’s Product Development System (PDS), and enterprise search solutions using the Google Search Appliance. Check out our website to learn more about what we do. 

The post What I Have Learned as an Oracle WebCenter Consultant in My First Three Months at Fishbowl Solutions appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Automatic Time-Based Dismiss for Oracle ADF Popups

Shay Shmeltzer - Mon, 2016-10-10 11:09

This blog entry is about a nice little new feature that was introduced into ADF in the 12.2.1.1 version, and didn't get a mention in the "what's new" document.

Self-dismissing messages are popping up everywhere these days (when you get an email, when you have a new calendar invite, etc.), and you might want to use this UI pattern in your ADF apps too.

There is a new property for af:popup components - autoDismissalTimeout - that allows a popup to be dismissed automatically after a number of seconds that you can specify. This is very useful for all sorts of messages that you want to show to the user without requiring the user to do anything to dismiss them.

Here is an example of such a message that you can associate with a save button:

popup sample

And here is the code you'll need to do this:

            <af:popup id="p1" animate="true" autoDismissalTimeout="2">

                <af:panelGroupLayout id="pgl1" layout="horizontal">

                    <af:image source="stat_confirm_16.png" id="i1"/>

                    <af:outputFormatted value="Your changes have been saved" id="of1"/>

                </af:panelGroupLayout>

            </af:popup>

One more (small) reason to adopt the new versions of Oracle ADF! 

Categories: Development

Documentum story – Management of DARs and unexpected errors

Yann Neuhaus - Mon, 2016-10-10 09:30

During a recent project at one of our customers, we often saw the message “Unexpected errors occurred while installing DARs”. In our case, this message happened when installing, migrating or upgrading a docbase on an already existing Content Server. We never saw this message during the initial installation phase of our repositories, but we started to see it some months later with the first migration/upgrade. In this blog I will show you where this issue can come from and how DARs are managed by Documentum for new/migrated docbases. In a future blog I will show you a home-made script that can be used to manually install DARs on docbases, with tips, and so on…

 

For this blog, let’s use the following:

  • Documentum CS 7.2
  • RedHat Linux 6.6
  • $DOCUMENTUM=/app/dctm/server
  • $DM_HOME=/app/dctm/server/product/7.2

 

Most of the time, these errors are thrown because Documentum isn’t able to install the needed DARs, but what’s the reason behind that? First of all, there is one important thing to know: when installing a new docbase, Documentum will check which DARs should be installed by default. This list is generated dynamically based on a file, namely the following one:

[dmadmin@content_server_01 ~]$ cat $DM_HOME/install/darsAdditional.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<actions>
    <dar name="TCMReferenceProject">
        <description>TCMReferenceProject</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/TCMReferenceProject.dar</darFile>
    </dar>
    <dar name="Forms">
        <description>Forms</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/Forms.dar</darFile>
    </dar>
    <dar name="Collaboration Services">
        <description>Collaboration Services</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/Collaboration Services.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
    <dar name="xcp">
        <description>xCP</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/xcp.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
    <dar name="bpm">
        <description>BPM</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/BPM.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
    <dar name="C2-DAR">
        <description>C2-DAR</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/C2-DAR.dar</darFile>
        <javaOptions>
           <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
    <dar name="D2-DAR">
        <description>D2-DAR</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/D2-DAR.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
    <dar name="D2Widget-DAR">
        <description>D2Widget-DAR</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/D2Widget-DAR.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
    <dar name="D2-Bin-DAR">
        <description>D2-Bin-DAR</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/D2-Bin-DAR.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
    <dar name="O2-DAR">
        <description>O2-DAR</description>
        <darFile>/app/dctm/server/product/7.2/install/DARsInternal/O2-DAR.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>
</actions>

 

Some of the DARs inside this file are configured by Documentum directly (the BPM & xCP DARs) and some others have been added by us manually (the D2 DARs). If you want new DARs to be installed for future docbase installations, you can just update this file with a new section using this template:

    <dar name="DAR_NAME">
        <description>DAR_NAME</description>
        <darFile>/ABSOLUTE_LOCATION_OF_FILE/DAR_NAME.dar</darFile>
        <javaOptions>
            <javaOption>-XX:MaxPermSize=256m</javaOption>
            <javaOption>-Xmx1024m</javaOption>
        </javaOptions>
    </dar>

 

By default Documentum always puts the DARs inside the folder $DM_HOME/install/DARsInternal/, so I would recommend you do the same: update the XML file and that’s it. Now this *can* also bring some trouble where some DARs aren’t installed anymore, with the error shown at the beginning of this blog, and the reason for that is – most of the time – simply that there is a space in the name of the DAR… Yes, from time to time, depending on the DARs, Documentum might not be able to properly manage spaces in DAR names. It doesn’t always happen, and that’s the annoying part, because I didn’t find any logical pattern.

 

There is a way to verify which DARs might cause this issue and which ones will not: when installing a CS patch, the folder “$DOCUMENTUM/patch/bin” is usually created and inside this folder there is a file named “repositoryPatch.sh”. This script is used by the patch to do some work and to install some DARs if needed. The interesting thing here is that this script includes a small bug which you can use to find the troublesome DARs, and you can also easily fix the script. After doing that, you will be able to use this script for all DARs, no matter whether they include spaces or not. So let’s take a look at the default file on one of our Content Servers:

[dmadmin@content_server_01 ~]$ cat $DOCUMENTUM/patch/bin/repositoryPatch.sh | grep "^dars"
dars="/app/dctm/server/product/7.2/install/DARsInternal/LDAP.dar,/app/dctm/server/product/7.2/install/DARsInternal/MessagingApp.dar,/app/dctm/server/product/7.2/install/DARsInternal/MailApp.dar,/app/dctm/server/product/7.2/install/DARsInternal/Extended Search - SearchTemplates.dar,/app/dctm/server/product/7.2/install/DARsInternal/ATMOS Plugin.dar,/app/dctm/server/product/7.2/install/DARsInternal/VIPR Plugin.dar"

 

As you can see above, you just need to define the full path of each DAR file separated by a comma. To fix this script for all DARs, a first solution would be to rename the DARs but there is actually a simpler solution: use single quotes instead of double quotes in the dars definition:

[dmadmin@content_server_01 ~]$ cat $DOCUMENTUM/patch/bin/repositoryPatch.sh | grep "^dars"
dars='/app/dctm/server/product/7.2/install/DARsInternal/LDAP.dar,/app/dctm/server/product/7.2/install/DARsInternal/MessagingApp.dar,/app/dctm/server/product/7.2/install/DARsInternal/MailApp.dar,/app/dctm/server/product/7.2/install/DARsInternal/Extended Search - SearchTemplates.dar,/app/dctm/server/product/7.2/install/DARsInternal/ATMOS Plugin.dar,/app/dctm/server/product/7.2/install/DARsInternal/VIPR Plugin.dar'

 

By doing that, you corrected the bug in the script and now you should be able to execute this script to deploy all DARs to a single repository using:

[dmadmin@content_server_01 ~]$ $DOCUMENTUM/patch/bin/repositoryPatch.sh DOCBASE USERNAME PASSWORD

 

Note: As always, if you are using the Installation Owner as the USERNAME, then the PASSWORD can be a dummy password like “xxx” since there is the local trust on the Content Server.
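If you want to double-check which DARs actually ended up installed in a docbase, one option is to query the dmc_dar objects with idql. This is only a hedged sketch (the dummy password works here too thanks to the local trust, and the exact attributes available may depend on your version):

[dmadmin@content_server_01 ~]$ idql DOCBASE -Udmadmin -Pxxx
1> select object_name, r_modify_date from dmc_dar order by object_name;
2> go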

 

This concludes this blog about the main issue we can face when installing a DAR, how to manage the automatic deployment of DARs for new docbases, and finally how to use the script provided by a patch to do it manually. See you!

 

This article Documentum story – Management of DARs and unexpected errors appeared first on Blog dbi services.

Documentum story – Build and manage a Documentum Platform

Yann Neuhaus - Mon, 2016-10-10 09:12

Documentum story. What is behind this story?
Normally I should write Documentum Platform story. Indeed, dbi services is involved in building and managing a worldwide Documentum Platform. Our part covers all middleware components related to the infrastructure, except the OS, DB and applications, which are managed by other teams.

So we installed and manage so far:
– 30+ Documentum Servers, 45+ repositories
– 25+ Fulltext Servers, 50+ IndexAgents
– 15+ ADTS servers
– 30+ WebLogic servers, 30+ Domains, 70+ Managed Servers
– 15+ DA and the Content Server part for D2.

Since the project started two years ago, we have also had to do some upgrades, patches and hotfixes during this phase.
Why patches and hotfixes? Let me think… Oh yeah, to solve issues!

In the next few weeks, a series of blog posts (all starting with Documentum story) will be published to explain how we solved issues, how we managed the challenges we had to deal with, and to provide you with hints and tips on different subjects.
The project is not finished yet, we still have lots to do and some new missions are coming, but we think it is now time to share our experience related to installation, SSL and security, xPlore, Thumbnail Server and ADTS, monitoring, WebLogic, SSO, performance, jobs, etc.

Hopefully you will learn something from our posts.

 

This article Documentum story – Build and manage a Documentum Platform appeared first on Blog dbi services.

Oracle Sites Cloud Service: Getting Started with Custom Components

WebCenter Team - Mon, 2016-10-10 08:56
Authored by: Igor Polyakov, Senior Principal Product Manager, Oracle

In this 9-step tutorial you will learn how to create a simple Oracle Sites Cloud Service (SCS) component that uses a <div> element with the “contenteditable” attribute enabled, allowing end users to enter their own text in the page, which is then saved with the component. It shows you how to convert the pre-defined files you get when you create a component into your own implementation. The intent is to describe what is required and what is optional within those seeded files. When you create a component you get a set of seeded files that will work out of the box. The seed code covers most of the functionality of a component within the product, and the "Tutorial: Developing Components with Knockout" section in the SCS documentation explains how all the pieces of a component fit together.

In this tutorial, we cover how to change the seeded code to create your own component. It also covers other aspects not covered in the standard tutorial such as:

  • How to provide different templating based on the viewMode of the component
  • Saving data from the component instead of from the settings panel
  • Integration with the page's undo/redo events 

Note: This sample is simply aimed at explaining the various pieces that make up a generic SCS Custom Component. It should not be used "as is" as a production component. 

Step 1: Create New Component
After this step you will have created a component in Sites Cloud Service that you can immediately drop onto a page. This is the starting point for creating any new component.

To create a local component: 

  1. Navigate to Sites -> Components
  2. Select option Create -> Create Local Component
  3. Enter a name, for example “BasicTextEditor” and optionally, description
  4. Click "Create" to create new component  

Checkpoint 1
Now that you have successfully created a component, you should see this component in the Component catalog as well as in the Add > Custom component palette for any site you create. Use the following steps to validate your component creation: 

  1. Create a new site using any seeded Template, for example create a site called “ComponentTest” using the “StarterTemplate” template.
  2. Select Edit option and create an update for the site to open it in the Site Builder
  3. Edit a page within the site that you created
  4. Click on the Add "+" button on the left bar and select "Custom" for the list of custom components
  5. Select the "BasicTextEditor" from the custom component Palette and drop it onto the page. You should now see a default rendering for the local component you created
  6. Select the context menu in the banner for the component you dropped
  7. Select "Settings" from the drop-down menu. You can change settings to see how the seeded component rendering changes.

To continue reading and see steps 2-9, please click here!

Oracle – Suspending/Resuming an Oracle PID with “oradebug suspend/resume” or the OS kill command

Yann Neuhaus - Mon, 2016-10-10 08:23

Sometimes, if you kick off a huge load or transformation job, your Archive Destination might fill up faster than your RMAN backup job can clean it up. To avoid an archiver stuck situation in such cases, “oradebug suspend/resume” can be helpful, or the UNIX kill command. Usually, “oradebug suspend” and the UNIX kill command work quite well.

Before kicking off your SQL script, get all the information you need about your session.

select s.username as Username, 
       s.machine as Machine, 
	   s.client_info as Client_Info, 
	   s.module as Module, 
	   s.action as Action, 
	   s.sid as SessionID,
	   p.pid as ProcessID,
	   p.spid as "UNIX ProcessID"
from
v$session s, v$process p
where s.sid = sys_context ('userenv','sid')
and s.PADDR = p.ADDR;

USERNAME     MACHINE      CLIENT_INFO  MODULE                           ACTION        SESSIONID  PROCESSID UNIX ProcessID
------------ ------------ ------------ -------------------------------- ------------ ---------- ---------- ------------------------
SYS          oel001                    sqlplus@oel001 (TNS V1-V3)                           148         69 7186

In another SQL session you can now set the PID with setorapid or the Server Process ID with setospid. HINT: When the Oracle multiprocess/multithread feature is enabled, RDBMS processes are mapped to threads running in operating system processes, and the SPID identifier is not unique for RDBMS processes. When the Oracle multiprocess/multithread feature is not enabled on UNIX systems, the SPID identifier is unique for RDBMS processes.

SQL> oradebug setorapid 69
Oracle pid: 69, Unix process pid: 7186, image: oracle@oel001 (TNS V1-V3)
SQL> -- oradebug setospid 7186
SQL> oradebug suspend
Statement processed.

-- Now your session is suspended and any command executed by the suspended session is hanging, even select's
-- SQL> select * from dual;

-- Now you can take your time and clean up the archive destination e.g. by moving all archivelogs
-- to tape and delete those in the archive destination afterwards "RMAN> backup archivelog all delete all input;"
-- After the job is done, resume your operation.

SQL> oradebug resume
Statement processed.

-- Now the "select * from dual" comes back.
SQL> select * from dual;

D
-
X

In case you are running 11.2.0.2, it might happen that you see an ORA-600 after running oradebug suspend. No problem, in those cases we can achieve the same thing with the UNIX kill command as well.

On Linux you would run:

$ kill -sigstop $SPID
$ kill -sigcont $SPID

$ kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX

On AIX you would run:

$ kill -17 $SPID
$ kill -19 $SPID

$ kill -l
1) HUP                  14) ALRM                 27) MSG                  40) bad trap             53) bad trap
2) INT                  15) TERM                 28) WINCH                41) bad trap             54) bad trap
3) QUIT                 16) URG                  29) PWR                  42) bad trap             55) bad trap
4) ILL                  17) STOP                 30) USR1                 43) bad trap             56) bad trap
5) TRAP                 18) TSTP                 31) USR2                 44) bad trap             57) bad trap
6) ABRT                 19) CONT                 32) PROF                 45) bad trap             58) RECONFIG
7) EMT                  20) CHLD                 33) DANGER               46) bad trap             59) CPUFAIL
8) FPE                  21) TTIN                 34) VTALRM               47) bad trap             60) GRANT
9) KILL                 22) TTOU                 35) MIGRATE              48) bad trap             61) RETRACT
10) BUS                 23) IO                   36) PRE                  49) bad trap             62) SOUND
11) SEGV                24) XCPU                 37) VIRT                 50) bad trap             63) SAK
12) SYS                 25) XFSZ                 38) ALRM1                51) bad trap
13) PIPE                26) bad trap             39) WAITING              52) bad trap

Be very careful: different UNIX systems have different mappings between the signal number and the signal itself. Make sure you look it up first with “kill -l” to get the correct one. From my point of view, the suspend/resume feature, either with Oracle's oradebug or the UNIX kill command, is very useful.

Cheers,
William

 

This article Oracle – Suspending/Resuming an Oracle PID with “oradebug suspend/resume” or the OS kill command appeared first on Blog dbi services.

InMemory Bonus

Jonathan Lewis - Mon, 2016-10-10 07:13

It should be fairly well known by now that when you enable the 12c InMemory (Columnar Store) option (and set the inmemory_size) your SQL may take advantage of a new optimizer transformation known as the Vector Transformation, including Vector Aggregation. You may be a little surprised to learn, though, that some of your plans may change as a consequence even when they don't show any sign of a vector transformation. This is because the In-Memory Column Store isn't just about doing tablescans very quickly; it's also about new code paths for doing clever things with predicates to squeeze all the extra benefits from the technology. Here's an example:


rem
rem     Script:         12c_inmemory_surprise.sql
rem     Author:         Jonathan Lewis
rem     Dated:          July 2016
rem
rem     Last tested
rem             12.1.0.2
rem

drop table t2 purge;
drop table t1 purge;

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                                          id,
        trunc((rownum - 1)/100)                         n1,
        trunc((rownum - 1)/100)                         n2,
        trunc(dbms_random.value(1,1e4))                 rand,
        cast(lpad(rownum,10,'0') as varchar2(10))       v1,
        cast(lpad('x',100,'x') as varchar2(100))        padding
from
        generator       v1
;

create table t2
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                                          id,
        trunc((rownum - 1)/100)                         n1,
        trunc((rownum - 1)/100)                         n2,
        trunc(dbms_random.value(1,1e4))                 rand,
        cast(lpad(rownum,10,'0') as varchar2(10))       v1,
        cast(lpad('x',100,'x') as varchar2(100))        padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;

create index t1_n1   on t1(n1)   nologging;
create index t2_rand on t2(rand) nologging;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for columns (n1,n2) size 1'
        );
end;
/

There’s nothing particularly special about these two tables – I engineered them for a different demo, and the call to gather extended stats on the column group (n1, n2) is just a minor detail in getting better cardinality estimates in the upcoming plans. At this point I haven’t specified that either table should be in memory, so let’s see what plan I get from dbms_xplan.display_cursor() when I run a query that should do a hash join using t1 as the build table and t2 as the probe table:


select
        /*+
                qb_name(main)
        */
        count(*)
from    (
        select
                /*+ qb_name(inline) */
                distinct t1.v1, t2.v1
        from
                t1,t2
        where
                t1.n1 = 50
        and     t1.n2 = 50
        and     t2.rand = t1.id
        )
;

select * from table(dbms_xplan.display_cursor);

Plan hash value: 1718706536

-------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |       |       |       |  2441 (100)|          |
|   1 |  SORT AGGREGATE                         |       |     1 |       |            |          |
|   2 |   VIEW                                  |       | 10001 |       |  2441   (4)| 00:00:01 |
|   3 |    HASH UNIQUE                          |       | 10001 |   351K|  2441   (4)| 00:00:01 |
|*  4 |     HASH JOIN                           |       | 10001 |   351K|  2439   (4)| 00:00:01 |
|*  5 |      TABLE ACCESS BY INDEX ROWID BATCHED| T1    |   100 |  2100 |     3   (0)| 00:00:01 |
|*  6 |       INDEX RANGE SCAN                  | T1_N1 |   100 |       |     1   (0)| 00:00:01 |
|   7 |      TABLE ACCESS FULL                  | T2    |  1000K|    14M|  2416   (4)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T2"."RAND"="T1"."ID")
   5 - filter("T1"."N2"=50)
   6 - access("T1"."N1"=50)

Thanks to the column group the optimizer has estimated correctly that the number of rows selected from t1 would be 100. Beyond that there’s very little exciting about this execution plan.

So let’s modify t2 to be in-memory and see how the plan changes as we re-execute the query:


alter table t2 inmemory;

select
        /*+ qb_name(main) */
        count(*)
...

select * from table(...);


Plan hash value: 106371239

----------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |         |       |       |   259 (100)|          |
|   1 |  SORT AGGREGATE                          |         |     1 |       |            |          |
|   2 |   VIEW                                   |         | 10001 |       |   259  (27)| 00:00:01 |
|   3 |    HASH UNIQUE                           |         | 10001 |   351K|   259  (27)| 00:00:01 |
|*  4 |     HASH JOIN                            |         | 10001 |   351K|   257  (26)| 00:00:01 |
|   5 |      JOIN FILTER CREATE                  | :BF0000 |   100 |  2100 |     3   (0)| 00:00:01 |
|*  6 |       TABLE ACCESS BY INDEX ROWID BATCHED| T1      |   100 |  2100 |     3   (0)| 00:00:01 |
|*  7 |        INDEX RANGE SCAN                  | T1_N1   |   100 |       |     1   (0)| 00:00:01 |
|   8 |      JOIN FILTER USE                     | :BF0000 |  1000K|    14M|   234  (20)| 00:00:01 |
|*  9 |       TABLE ACCESS INMEMORY FULL         | T2      |  1000K|    14M|   234  (20)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T2"."RAND"="T1"."ID")
   6 - filter("T1"."N2"=50)
   7 - access("T1"."N1"=50)
   9 - inmemory(SYS_OP_BLOOM_FILTER(:BF0000,"T2"."RAND"))
       filter(SYS_OP_BLOOM_FILTER(:BF0000,"T2"."RAND"))

The cost of the tablescan drops dramatically as the optimizer assumes the table will be in the in-memory column store (IMCS) but (in many ways more significantly) we suddenly have a (serial) Bloom filter for a hash join – which eliminates (most of) the data that wouldn't have survived the hash join without having to use the CPU that would normally be spent probing the build table.

This is an interesting example of what the in-memory code path can do for us. There’s no point in using the Bloom filter for a serial hash join in “classic” Oracle because the Bloom filter is basically the bitmap of the hash table that is the first thing Oracle examines when probing the hash table – but with “in-memory” Oracle there are some enhancements to the Bloom filter and the code path that make using the Bloom filter an effective strategy. Most significant, perhaps, is that the in-memory code path can use SIMD instructions to perform multiple probes from t2 simultaneously, so not only do we get the benefits of avoiding disk access, buffer activity and row-by-row access to columns, we also reduce the CPU time spent on making the first-stage comparisons of the hash join. (And shared columnar dictionaries in 12.2 could reduce this even further!)

Footnote: I also have a note I scribbled down at the Trivadis performance days last month that the Bloom filter used with IMCS carries the actual low and high values from the build table. I may have misinterpreted this as I wrote it, but if that's correct then it's another step in eliminating data very quickly and very early when using IMCS (or Exadata, if the same feature exists in the Bloom filters that get pushed to the storage servers).

 


4 Things To Spend Money On for Work

Complete IT Professional - Mon, 2016-10-10 06:00
There are many things that your workplace will provide as part of your employment: notebooks, pens, a computer, and a chair. There are some other things that, if you spend money on them, can really improve the quality and efficiency of your work. 1 – A Decent Pen Every office I’ve worked in has had a […]
Categories: Development

Notes on anomaly management

DBMS2 - Mon, 2016-10-10 02:35

Then felt I like some watcher of the skies
When a new planet swims into his ken

— John Keats, “On First Looking Into Chapman’s Homer”

1. In June I wrote about why anomaly management is hard. Well, not only is it hard to do; it’s hard to talk about as well. One reason, I think, is that it’s hard to define what an anomaly is. And that’s a structural problem, not just a semantic one — if something is well enough understood to be easily described, then how much of an anomaly is it after all?

Artificial intelligence is famously hard to define for similar reasons.

“Anomaly management” and similar terms are not yet in the software marketing mainstream, and may never be. But naming aside, the actual subject matter is important.

2. Anomaly analysis is clearly at the heart of several sectors, including:

  • IT operations
  • Factory and other physical-plant operations
  • Security
  • Anti-fraud
  • Anti-terrorism

Each of those areas features one or both of the frameworks:

  • Surprises are likely to be bad.
  • Coincidences are likely to be suspicious.

So if you want to identify, understand, avert and/or remediate bad stuff, data anomalies are the first place to look.

3. The “insights” promised by many analytics vendors — especially those who sell to marketing departments — are also often heralded by anomalies. Already in the 1970s, Walmart observed that red clothing sold particularly well in Omaha, while orange flew off the shelves in Syracuse. And so, in large college towns, they stocked their stores to the gills with clothing in the colors of the local football team. They also noticed that fancy dresses for little girls sold especially well in Hispanic communities … specifically for girls at the age of First Communion.

4. The examples in the previous point may be characterized as noteworthy correlations that surely are reflecting actual causality. (The beer/diapers story would be another example, if only it were true.) Formally, the same is probably true of most actionable anomalies. So “anomalies” are fairly similar to — or at least overlap heavily with — “statistically surprising observations”.

And I do mean “statistically”. As per my Keats quote above, we have a classical model of sudden-shock discovery — an astronomer finding a new planet, a radar operator seeing a blip on a screen, etc. But Keats’ poem is 200 years old this month. In this century, there’s a lot more number-crunching involved.

Please note: It is certainly not the case that anomalies are necessarily found via statistical techniques. But however they’re actually found, they would at least in theory score as positives via various statistical tests.

5. There are quite a few steps to the anomaly-surfacing process, including but not limited to:

  • Collecting the raw data in a timely manner.
  • Identifying candidate signals (and differentiating them from noise).
  • Communicating surprising signals to the most eager consumers (and letting them do their own analysis).
  • Giving more tightly-curated information to a broader audience.

Hence many different kinds of vendor can have roles to play.

6. One vendor that has influenced my thinking about data anomalies is Nestlogic, an early-stage start-up with which I’m heavily involved. Here “heavily involved” includes:

  • I own more stock in Nestlogic than I have in any other company of which I wasn’t the principal founder.
  • I’m in close contact with founder/CEO David Gruzman.
  • I’ve personally written much of Nestlogic’s website content.

Nestlogic’s claims include:

  • For machine-generated data, anomalies are likely to be found in data segments, not individual records. (Here a “segment” might be all the data coming from a particular set of sources in a particular period of time.)
  • The more general your approach to anomaly detection, the better, for at least three reasons:
    • In adversarial use cases, the hacker/fraudster/terrorist/whatever might deliberately deviate from previous patterns, so as to evade detection by previously-established filters.
    • When there are multiple things to discover, one anomaly can mask another, until it is detected and adjusted for.
    • (This point isn’t specific to anomaly management) More general tools can mean that an enterprise has fewer different new tools to adopt.
  • Anomalies boil down to surprising data profiles, so anomaly detection bears a slight resemblance to the data profiling approaches used in data quality, data integration and query optimization.
  • Different anomaly management users need very different kinds of UI. Less technical ones may want clear, simple alerts, with a minimum of false positives. Others may use anomaly management as a jumping-off point for investigative analytics and/or human real-time operational control.

I find these claims persuasive enough to help Nestlogic with its marketing and fund-raising, and to cite them in my post here. Still, please understand that they are Nestlogic’s and David’s assertions, not my own.

Categories: Other
