
Feed aggregator

12c Threaded Execution Test

Bobby Durrett's DBA Blog - 6 hours 50 min ago

I did a quick check of some facts I’m studying about Oracle 12c and its new threaded execution mode.  I set this parameter:

alter system set THREADED_EXECUTION=true scope=spfile;

I had to connect as SYS AS SYSDBA with a password to bounce the instance (operating system authentication doesn't work in threaded mode).

Then it had these processes only:

oracle    1854     1  0 09:17 ?        00:00:00 ora_pmon_orcl
oracle    1856     1  0 09:17 ?        00:00:00 ora_psp0_orcl
oracle    1858     1  2 09:17 ?        00:00:00 ora_vktm_orcl
oracle    1862     1  3 09:17 ?        00:00:00 ora_u004_orcl
oracle    1868     1 99 09:17 ?        00:00:17 ora_u005_orcl
oracle    1874     1  0 09:17 ?        00:00:00 ora_dbw0_orcl

This differs from some of my 12c OCP study material but agrees with the manuals.  Only pmon, psp, vktm, and dbw have dedicated processes.

Also, I found that I needed this value in the listener.ora:

dedicated_through_broker_listener=on

I needed that value to connect using a thread. Before I added it, the listener spawned a dedicated server process when I connected over the network.
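
A quick way to check which connections ended up as threads is to look at V$PROCESS (a sketch, assuming the EXECUTION_TYPE column that 12c adds to that view):

-- Count process slots by how they run: PROCESS vs THREAD (EXECUTION_TYPE is new in 12c)
select execution_type, count(*)
from v$process
group by execution_type;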

Lastly, contrary to what I had read, I didn't need to set the local_listener parameter to get new connections to use a thread:

SQL> show parameter local_listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------------------
local_listener                       string

- Bobby

Categories: DBA Blogs

Integrating PFCLScan and Creating SQL Reports

Pete Finnigan - Tue, 2014-07-22 14:20

We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]

Posted by Pete On 25/06/14 At 09:41 AM

Categories: Security Blogs

Automatically Add License Protection and Obfuscation to PL/SQL

Pete Finnigan - Tue, 2014-07-22 14:20

Yesterday we released the new version 2.0 of our product PFCLObfuscate . This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation and now in version 2.0 we....[Read More]

Posted by Pete On 17/04/14 At 03:56 PM

Categories: Security Blogs

Twitter Oracle Security Open Chat Thursday 6th March

Pete Finnigan - Tue, 2014-07-22 14:20

I will be co-chairing/hosting a Twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here. The chat is done over Twitter so it is a little like the Oracle security round table sessions....[Read More]

Posted by Pete On 05/03/14 At 10:17 AM

Categories: Security Blogs

PFCLScan Reseller Program

Pete Finnigan - Tue, 2014-07-22 14:20

We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled "PFCLScan Reseller Program". If....[Read More]

Posted by Pete On 29/10/13 At 01:05 PM

Categories: Security Blogs

PFCLScan Version 1.3 Released

Pete Finnigan - Tue, 2014-07-22 14:20

We released version 1.3 of PFCLScan, our enterprise database security scanner for Oracle, a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]

Posted by Pete On 18/10/13 At 02:36 PM

Categories: Security Blogs

PFCLScan Updated and Powerful features

Pete Finnigan - Tue, 2014-07-22 14:20

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, new content and more. We are working to release another service update also in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Tue, 2014-07-22 14:20

It has been a few weeks since my last blog post but don't worry, I am still interested in blogging about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

Oracle 12c Security - SQL Translation and Last Logins

Pete Finnigan - Tue, 2014-07-22 14:20

There have been some big new security items added to 12cR1, such as SHA2 in DBMS_CRYPTO, code based security in PL/SQL, Data Redaction, unified audit or even privilege analysis, but also, as I hinted in some previous blogs, there are....[Read More]

Posted by Pete On 31/07/13 At 11:11 AM

Categories: Security Blogs

Oracle Database 12.1.0.2.0 – New Features

Marco Gralike - Tue, 2014-07-22 11:21
The world around us is changing and new stuff in the Oracle database arena is nowadays released at the patch level. Although not many new features...

Read More

Macros and parameter passing macros with #goldengate

DBASolved - Tue, 2014-07-22 10:04

Replicating data can be a time-consuming process to set up. Fortunately, Oracle GoldenGate provides a few tools to help ease the complexity of setup. One of these tools is the “macro”. Macros are used to simplify and automate the work associated with setting up and replicating data. So what exactly is a macro? Oracle defines a macro as:

A macro is a built-in automation tool that enables you to call a stored set of processing steps from within the Oracle GoldenGate parameter file.

In a nutshell, a macro is a stored set of frequently used commands, with parameters, covering anything from simple to complex series of substitutions, calculations or conversions. Macros may be written inline in the parameter file or stored in a macro library.

What this post will show you is how to add a macro to a library, have it accept parameters, and then pass those parameters to it from within the parameter files used during replication.

The first thing that needs to be done is to set up a standard directory to contain all the macro files. A directory called “dirmac” needs to be created in the OGG_HOME.

Note: The “dirmac” directory is something that you will need to create yourself, for example with the command “mkdir -p $OGG_HOME/dirmac”. A macro directory can be created anywhere you like; I personally try to keep all OGG items together.

In the macro directory, create a file to use as the macro library. Ideally, you should have a file for each type of process running in that $OGG_HOME. Once the file is created, edit the library file and add the macros desired.


> mkdir -p $OGG_HOME/dirmac
> cd $OGG_HOME/dirmac
> touch <library_name>.mac 
> vi <library_name>.mac 

With the macro library file open for editing, add the macros that are desired. Remember, a macro library can house more than one macro. Below you will see two examples. The first example sets up tokens that can be called later; the second defines a macro that maps tokens and header information to a specific table.


-- Example 1 --

MACRO #src_csn_ts
BEGIN
SRC_CSN_TS = @GETENV('GGHEADER','COMMITTIMESTAMP')
END;

MACRO #src_icnt
BEGIN
ICNT = @GETENV('STATS', 'INSERT')
END;

MACRO #src_ucnt
BEGIN
UCNT = @GETENV('STATS', 'UPDATE')
END;

MACRO #src_dcnt
BEGIN
DCNT = @GETENV('STATS', 'DELETE')
END;

MACRO #src_dmlcnt
BEGIN
DMLCNT = @GETENV('STATS', 'DML')
END;

-- Example 2 --

MACRO #hb_mappings
PARAMS (#src_schema)
BEGIN
MAP #src_schema.RANDOM_VALUES, target SCOTT.GG_REP_OP_STATUS,
INSERTMISSINGUPDATES
COLMAP (
SRC_DB_SCHEMA=@token('SRC_SCHEMA'),
GG_REPLICAT_NAME=@GETENV('GGENVIRONMENT','GROUPNAME'),
TGT_LAST_UPDATE_DT=@DATE('YYYY-MM-DD HH:MI:SS.FFFFFF','JTS',@GETENV('JULIANTIMESTAMP')),
SRC_CSN_TS=@token('SRC_CSN_TS'),
ICNT=@token('ICNT'),
UCNT=@token('UCNT'),
DCNT=@token('DCNT'),
DMLCNT=@token('DMLCNT')
);
END; 

Notice the PARAMS statement in example 2. When using macros this can get a bit confusing, since macro names and the parameters in the PARAMS statement both use the hash mark (#). Also notice that the parameter #src_schema is used in the MAP statement inside the macro; this is how the value for #src_schema is passed into the macro from the parameter file.

Now, let's take a look at a parameter file.

In my test environment, I have the following processes running:


GGSCI (oel.acme.com) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
JAGENT      STOPPED
EXTRACT     RUNNING     EXT         00:00:09      00:00:09
EXTRACT     RUNNING     PMP         00:00:00      00:00:07
REPLICAT    RUNNING     REP         00:00:00      00:00:04

Taking a look at the extract parameter file (capture side): in order to use the macros in example 1, add an INCLUDE statement that references the macro library to the parameter file. Then, in the TABLE statement, the macros that populate the tokens can be referenced within the TOKENS clause.


-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
--CHECKPARAMS

--Specifies the name of the extract process
EXTRACT EXT

--Set Oracle Environment Variables
SETENV (ORACLE_HOME="/oracle/app/product/11.2.0.4/dbhome_1")
SETENV (ORACLE_SID="bc11g")

--Oracle Login
USERID ggate, PASSWORD ggate

--Warns for a long running transaction
WARNLONGTRANS 1h, CHECKINTERVAL 30m

--Trace process info
--TRACE ./dirrpt/trace_ext.trc

--Specifies the location of the remote trail file on target machine
EXTTRAIL ./dirdat/lt

--Ignore transactions for golden gate user
TRANLOGOPTIONS EXCLUDEUSER GGATE

--Resolves the TABLES to be replicated ON START-UP
WILDCARDRESOLVE IMMEDIATE

INCLUDE ./dirmac/ops_info.mac

--Table Mappings
TABLE SCOTT.RANDOM_VALUES, TOKENS(#src_csn_ts(), #src_icnt(), #src_ucnt(), #src_dcnt(), #src_dmlcnt());

Once the extract parameter file is updated, the extract needs to be restarted. Upon restart, keep an eye out and make sure the process doesn't abend. The VIEW REPORT command can be used to check the report file during startup. Once the extract starts, you can see how the macros are expanded in the parameter file. From the extract side, the needed info will be captured and placed in the associated trail files.

In the middle, the pump really doesn’t need any changes or restarting.

Now on the replicat (apply) side, in order to use the macro defined in example 2 above, the replicat parameter file needs to be edited to include the macro library and the statement that calls the macro. The next code block shows the contents of my replicat parameter file.


--Specifies the name of the replicat load process.
REPLICAT REP

-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
--CHECKPARAMS

SETENV (ORACLE_HOME="/oracle/app/product/11.2.0.4/dbhome_1")
SETENV (ORACLE_SID="bc11g")

--Oracle login.
USERID ggate, PASSWORD ggate

--suppress triggers - enable for 11.2.0.2 or later
--DBOPTIONS SUPPRESSTRIGGERS

ALLOWDUPTARGETMAP

--The source ddl and target ddl are identical
ASSUMETARGETDEFS

--Tracing info
--TRACE ./dirrpt/trace_rep.trc

--Specifies name and location of a discard file.
DISCARDFILE ./dirrpt/REP.dsc, append, megabytes 200

--Resolves the TARGETs to be replicated during process start-up
--WILDCARDRESOLVE IMMEDIATE

--Specify error handling rules:
REPERROR(default, discard)
REPERROR(default2, discard)

--Table Mappings
INCLUDE ./dirmac/ops_info.mac

map SCOTT.RANDOM_VALUES, target SCOTT.RANDOM_VALUES_HIST;
#hb_mappings(SCOTT);

You will notice that I have added the INCLUDE statement to bring in the macro library. The macro from example 2 that does the table mapping for the desired information is then invoked with #hb_mappings() (last line of the parameter file example). Passing the schema name is as simple as placing it between the parentheses. As the example above shows, I'm passing SCOTT as the schema I want to use.

Upon restart of the replicat, by looking at the report (VIEW REPORT), I can see where the macro library is read and how the macro is translated into a map statement for the replicat to use.


-- Report Output (summarized) --

map SCOTT.RANDOM_VALUES, target SCOTT.RANDOM_VALUES_HIST;
#hb_mappings(SCOTT);
MAP SCOTT.RANDOM_VALUES, target SCOTT.GG_REP_OP_STATUS,
INSERTMISSINGUPDATES
COLMAP (
SRC_DB_SCHEMA=@token('SRC_SCHEMA'),
GG_REPLICAT_NAME=@GETENV('GGENVIRONMENT','GROUPNAME'),
TGT_LAST_UPDATE_DT=@DATE('YYYY-MM-DD HH:MI:SS.FFFFFF','JTS',@GETENV('JULIANTIMESTAMP')),
SRC_CSN_TS=@token('SRC_CSN_TS'),
ICNT=@token('ICNT'),
UCNT=@token('UCNT'),
DCNT=@token('DCNT'),
DMLCNT=@token('DMLCNT')
);

After the replicat has restarted, the table in the mapping statement (SCOTT.GG_REP_OP_STATUS) can be checked to see if any data was inserted. Image 1 below shows the output of the data I requested to be replicated using the macro.

Image 1:
image1_macro_params.png
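
The screenshot aside, a query along these lines (a sketch; the column list comes straight from the COLMAP in example 2) shows what the macro populated:

-- Inspect the rows the replicat inserted through the #hb_mappings macro
select src_db_schema, gg_replicat_name, tgt_last_update_dt,
       src_csn_ts, icnt, ucnt, dcnt, dmlcnt
from scott.gg_rep_op_status
order by tgt_last_update_dt;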

This should have shown you a way to use macros within your replication environment.

Enjoy!!!

http://about.me/dbasolved

 


Filed under: Golden Gate
Categories: DBA Blogs

Beyond In-Memory, what's new in 12.1.0.2 ?

Yann Neuhaus - Tue, 2014-07-22 09:53

It's just a patchset: the delivery that is there to stabilize a release with all the bug fixes. But it comes with a lot of new features as well, and not only the one that has been advertised as the future of the database. It's a huge release.

Let's have a look at what's new.

First, it seems that it will be the only patchset for 12.1.

Then, there is that In-Memory option that has been awaited for a while. There have been some demos done by Larry Ellison on Exadata or even on the Oracle SPARC M6. Of course, if you have 32 TB of memory, we can understand the need for In-Memory optimized storage. For a more real-life usage of that option, stay tuned on our blog. We investigate the features in the context of our customers' concerns, to fit their needs. For example, In-Memory addresses cases where some customers use Active Data Guard to offload reporting/real-time analytics to another server. But unfortunately In-Memory is not populated on a physical standby. We probably have to wait for 12.2 for that.

In-Memory is an option, so available only in Enterprise Edition.

There are other new features related to large memory. There is a part of the buffer cache dedicated to big tables (you just set the percentage) to be cached for In-Memory Parallel Query. And there is also a mode where the whole database is kept in the buffer cache. About performance and Parallel Query, a new transformation has been introduced to optimize the group by operation when joining a fact table to dimensions.
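
As a rough sketch of how those pieces are exposed (object names are illustrative, and the full-caching command is issued while the database is mounted):

-- Reserve 40% of the buffer cache for big-table scans used by In-Memory Parallel Query
alter system set db_big_table_cache_percent_target = '40';

-- Keep the whole database in the buffer cache
alter database force full database caching;

-- Populate a table into the In-Memory column store
alter table sales inmemory;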

The second new feature is the range-partitioned hash cluster. Oracle CLUSTER segments are a very old feature but not widely used. A hash cluster is the fastest way to access a row because the key can be directly transformed into a rowid. Unfortunately maintenance is not easy, especially when the volume increases. And we have partitioning, which is the way to ease maintenance with growing tables, but until today we couldn't partition a hash cluster. I mean, not in a supported way, because Oracle used it on SPARC for the TPC benchmarks by applying a specific patch (10374168) for it.

Well, the good news is that we can finally partition hash clusters with the simple syntax:

create cluster democ1 (sample_time timestamp,sample_id number)
hashkeys 3600 hash is sample_id size 8192
partition by range (sample_time) (
partition P12 values less than( timestamp'2014-04-26 12:00:00' )
... 

Another nice feature is Attribute Clustering. Lots of other RDBMS have the ability to arrange rows, but Oracle puts an insert anywhere in a heap table, depending only on where some free space is left. The alternative is an IOT, of course. But it can be good to try to cluster rows on one or several columns. It's better for index access, for cache efficiency, for storage indexes (or in-memory min/max), for ILM compression, etc. We can finally do it and I'll blog soon about that.
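
As a minimal sketch of the new clause (table and column names are illustrative), rows get clustered on the chosen columns during direct-path loads and data movement:

-- Cluster rows by customer_id on load and data movement
create table sales_ac (
  sale_id     number,
  customer_id number,
  amount      number
)
clustering by linear order (customer_id);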

Attribute Clustering is not an option, but available only in Enterprise Edition.

 

I think those two features are my favorite ones, because the best optimization we can do, without refactoring the application design, is to place data in the way it will be retrieved.

 

The trend today is to store unstructured data as JSON. XML was nice, but it's verbose. JSON is easier to read, and even PostgreSQL can store JSON in its latest version. So Oracle has it in 12.1.0.2: you can store and index it. Once again, stay tuned on this blog to see how it works.
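
A small sketch of what that looks like (table, column and JSON path names are illustrative):

-- A relational column holding JSON, guarded by an IS JSON check constraint
create table orders_json (
  id  number primary key,
  doc clob constraint orders_doc_is_json check (doc is json)
);

-- Query inside the documents with the SQL/JSON operators
select o.id
from orders_json o
where json_exists(o.doc, '$.customer.name');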

Something important was missing in Oracle SQL. How do you set up a read-only user? By granting only the SELECT privilege? But that's too much, because with the SELECT privilege a user can lock a table (with LOCK TABLE or SELECT FOR UPDATE). So we now have a READ privilege to prevent that. That's my favorite new feature for developers.
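
A quick sketch (grantee and table names are illustrative):

-- READ allows queries but not LOCK TABLE or SELECT ... FOR UPDATE
grant read on scott.emp to reporting_user;

-- There is also a corresponding system privilege
grant read any table to reporting_user;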

Then there are a few improvements to multitenant, such as the possibility to save the state of a pluggable database so that it can be automatically opened when the CDB starts up. We already addressed that in 12.1.0.1 in our Database Management Kit. An undocumented parameter, _multiple_char_set_cdb, lets us imagine that we will be able to have different character sets for the PDBs, probably in the future. Currently it's set to false.
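
A minimal sketch of the saved-state syntax (the PDB name is illustrative):

-- Open the PDB and remember that open state across CDB restarts
alter pluggable database pdb1 open;
alter pluggable database pdb1 save state;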

And once again, as beta testing partners, we have put on pressure to get a fix for what we consider a serious availability bug. The behaviour in the 12.1.0.1 beta was even worse regarding CDB availability, and I finally had a bug opened (Bug 19001390 - PDB SYSTEM TABLESPACE MEDIA FAILURE CAUSES THE WHOLE CDB TO CRASH) that should be fixed in 12.1.

About fixes, some restrictions are now gone: we can finally use ILM with multitenant and we can have supplemental logging while using a move partition online. And you can have Flashback Data Archive in multitenant as well.
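
For reference, a sketch of the online partition move that can now coexist with supplemental logging (object names are illustrative):

-- Move a partition while DML continues, keeping indexes usable
alter table sales move partition sales_2014_q1
  tablespace ilm_cold update indexes online;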

All that is good news, but remember: even if it's only the 4th digit that is increased in the version number, it's a brand new version with a lot of new features. So, when do you plan to upgrade? 11g is supported until January 2015. Extended support is free until January 2016, given that you are on the terminal patchset (11.2.0.4). So either you don't want to be on the latest release, and you will have to upgrade to 11.2.0.4 before the end of the year while waiting for 12.2, maybe in 2016. Or you want those new features and will probably go to 12.1.0.2 in 2015.

Talking about upgrades, there's some bad news. We thought that multitenancy could accelerate upgrades: because the data dictionary is shared, you just have to plug a PDB into a newer-version CDB and it's upgraded. And we show that in our 12c new features workshop by applying a PSU. But we have tested the upgrade to 12.1.0.2 in the same way, and it's not that simple. Plugging is quick when you have only new patches that did not change the dictionary. It's still true for a PSU when the dictionary changes are limited to the root container. But when you upgrade to 12.1.0.2 you have to synchronize all the PDB dictionaries (all that magic behind object links and metadata links), and that takes time. It takes the same time as upgrading a non-CDB. Conclusion: you don't save time when you do it by plug/unplug.
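
For reference, the plug/unplug path under discussion looks roughly like this (a sketch; the PDB name and manifest path are illustrative):

-- On the 12.1.0.1 CDB: close and unplug the PDB into an XML manifest
alter pluggable database pdb1 close immediate;
alter pluggable database pdb1 unplug into '/tmp/pdb1.xml';

-- On the 12.1.0.2 CDB: plug it in; the PDB dictionary still has to be synchronized afterwards
create pluggable database pdb1 using '/tmp/pdb1.xml' nocopy tempfile reuse;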

But I have good news as well, because I've tested a 1-minute-downtime migration from 12.1.0.1 to 12.1.0.2. Dbvisit replicate, the affordable replication solution, supports multitenant in its latest version, both as source and target. If your application is compatible (which is easy to check with the 30-day trial) then it's a good way to migrate without stress and with minimal downtime. It's available for Standard Edition as well, but currently the 12.1.0.2 download can install only Enterprise Edition.

Still here

Michael Armstrong-Smith - Tue, 2014-07-22 07:50
Hi everyone
I am still here. Just wanted to let you know that I am still in the business of working with Discoverer even though Oracle recently announced that it would be de-supported. If you need help, just get in touch.

Also, you may not be aware, but we have updated our Discoverer Handbook for the latest 11g version. You can find it on Amazon.

University of California’s $220 million payroll project reboot

Michael Feldstein - Mon, 2014-07-21 16:11

Chris Newfield has an excellent post at Remaking the University about the University of California’s budget situation and how it relates to the recent Moody’s negative outlook on higher education finances. The whole article is worth reading, but one section jumped off the page for me [emphasis added].

The sadder example of ongoing debt is the request for “external financing for the UCPath project.” UC Path was UCOP’s flagship solution to UC inefficiencies that were allegedly wasting taxpayers’ money–in other words, new enterprise software for the systemwide consolidation of payroll and human resources functions. This is boring, important back office stuff, hardly good material for a political campaign to show the state “UC means business,” but that’s what it became. Rather than funding each campus’s decades-old effort to upgrade its systems on its own, UCOP sought centralization, which predictably introduced new levels of cost, complexity, and inefficiency, since centralization is often not actually efficient.

I had heard nothing good about UC Path from people trying to implement it on campuses, and have tried to ignore it, but this week it has resurfaced as a problem at the Regental level. The project timeline has grown from 48 to 72 months, and its costs are said to be $220 million (it had spent $131 million by May 2014). Worse, the repayment schedule has mushroomed from seven to twenty years. Annual payments are to be something like $25 million. Campuses are to be taxed to pay for 2015-era systems until 2035, which is like taking out a twenty year mortgage to pay for your refrigerator, except that your fridge will be working better in 2035 than next year’s PeopleSoft product. Since the concurrent budget document notes efficiency savings of $30 million per year (top of page 4), UCOP may be spending $220 million to save a net $5 million per year over a couple of decades–and going into debt to do it. In the end, an efficiency measure has turned into a literal liability.

What the hell – a $220 million project to save money? How did this project get in this much trouble?

The UCPath project concept originated in 2009 with the project announcement coming in late 2011. The goal is to replace the Payroll Personnel System (PPS) that runs separately for each of the 11 UC locations with Oracle’s PeopleSoft payroll and HR systems. PPS is over 30 years old, and there are major risk issues with such an old system as well as a host of inefficient processes. The original project plans were based on a $170 million budget [1] with the first wave of go-live for the Office of the President and 3 campuses scheduled for early 2013. All campuses would be live on the new system by late 2014. [2]

In a presentation to the Board of Regents in January 2012:

Over the same period, cost reductions are expected to be approximately $750 million from technology efficiency gains, process standardization and consolidation of transactional activities into a UC-wide shared services center. Overall, the project has a net present value of approximately $230 million (at a nine percent discount rate) with breakeven in year 5.

Subsequent promises were made in March of 2012:

“We think this project is likely to pay for itself within five years, and UC could be accruing over $100 million in annual savings by the eighth year,” said Peter Taylor, UC’s chief financial officer. “We also expect to deliver HR and payroll services with increased efficiency, accuracy and quality.”

At the Board of Regents’ meeting last week, the project team gave the first update to the regents since January 2012 (itself a troubling sign). See this Sharestream video from 2:56:10 – 3:22:40.

By Fall 2013 the project was in trouble, and UC brought in new leadership for the project: Mark Cianca as Deputy CIO and Sabu Varghese as Program Director. Their first act was to do a health check on the project, and the results were not pretty (as described in last week’s Board of Regents’ meeting).

  • The project team and implementation partner (Oracle) had treated the project as a software replacement rather than a fundamental business transformation initiative.
  • The individual campuses had not been consulted on changes in business processes, and in fact they had not even been asked to sign off on future state business processes that each campus would have to run to stay in operation.
  • The new project team had to go through more than 100 future state processes with campuses and get agreement on how to proceed.

The result, as described by UC President Janet Napolitano at last week’s meeting, was the team having to “reboot the entire project”.

Based on the reboot, the current plan is $220 million with first wave complete by February 2016 and all campuses live by mid 2017. That’s $50 million over budget and 24 months over schedule.

Deployment Schedule Jul 2014

But the planning is not complete. They are working up their “final” replan of budget and timeline, which they will present in January 2015.

Topics for Jan 2015

How solid is the current estimate? The implementation schedule is listed as the highest risk, even with the delays.

Major Risks Jul 2014

The project financing has changed so much that UC is now facing the need to use external financing over a much longer term, as described in the material for last week’s board meeting.

Therefore, this item seeks approval to refinance the UCPath loan out of CapEquip and into external financing to achieve the financing customization required. As indicated above, the original repayment plan based on the $220.5 million budget was expected to have been repaid with annual debt service of $25 million. This would have resulted in a 12-year loan term once principal was to be repaid. In January 2015, UCPath project leadership plans to present a revised project timeline, a revised project budget and a revised estimated loan repayment schedule. Project leadership will work with the campus budget officers (and campus budget department staff) to develop: (1) an appropriate campus cost allocation strategy; (2) an estimated repayment schedule that will reflect commencement of principal repayments in conjunction with the final campus deployment (estimated to be early 2017); and (3) an estimated 15-20 year loan repayment period.

Notes
  • The new project team seems quite credible, and for the most part they addressed the right points during the briefing. Kudos to UC for making this change in leadership.
  • This is a major project turnaround (or reboot, in Napolitano’s words), but I’m not sure that UC had communicated the significance of the project changes to system campuses (and certainly not to the media).
  • I would view the current plan of $220 million and Q1 2017 full deployment as a best-case situation – the team told the regents that they were going to update the plan, and ERP projects almost never come in earlier than planned.
  • The actual amount is much higher than $220 million, based on this footnote: “The $10 million in tenant improvements approved for the UCPath Center Riverside site as well as the $17.4 million purchase of the facility (UCPath is currently projected to use no more than 50 percent of the building) are not included in the figures above.”
  • How do you go 2.5 years between updates from what is now a quarter billion dollar project?
  • What about the current estimate of benefits – is it $30 million per year as Chris described or closer to $100 million per year? One big concern I have is that the information on project benefits was not updated, presented to the regents, or asked about by the regents. While I question the $25 million financing and $30 million benefits numbers, I think Chris got it exactly right by noting how UC administration is failing to ask hard questions:

Moving forward, I’m afraid that officials are going to have to get much better at admitting mistakes like UCPath, and then actually undoing them. I couldn’t listen to the recording of the UCPath conversation, but Cloudminder made it sound like a lot of restrained finger-pointing with no solution in sight. Did anyone say, “well, this seemed like a good idea at the time, but it’s not. Let’s just cancel it, figure out where we went wrong, and come up with something better”?

It is possible that continuing with the rebooted project is the right answer, but UC is not even asking the question. Failing to ask whether 15-20 year financing of a new ERP makes sense seems like a major oversight. Won’t this lock UC into an Oracle system that is already antiquated for another two decades or more? It seems stunning to me that UC is planning to commit to $220 million of external financing without asking some basic questions.

  1. One regent last week stated the original request was actually $156 million.
  2. All public projects should fear the Wayback Machine for checking old web pages.

The post University of California’s $220 million payroll project reboot appeared first on e-Literate.

Oracle Data as a Service for Business - Launch Webcast

Running a data-driven enterprise is key to gaining competitive advantage, but many businesses still struggle with a myriad of point solutions that are siloed, varied in quality, and complex to...

We share our skills to maximize your revenue!
Categories: DBA Blogs

How To Correlate Oracle Database Transaction with GoldenGate

Pythian Group - Mon, 2014-07-21 13:19

So there I was troubleshooting a GoldenGate issue and was puzzled as to why GoldenGate transactions were not seen from the Oracle database.

I had the transaction XID correct; however, I was filtering for ACTIVE transactions on the Oracle side, which was causing the issue: the sessions that owned the open transactions were INACTIVE.
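
To make the pitfall concrete, a filter like the following (a sketch reusing the v$transaction/v$session join from the trans.sql script at the end of this post) returns nothing in the test case below, because the sessions owning the open transactions are INACTIVE:

-- Filtering on ACTIVE sessions hides open transactions owned by INACTIVE sessions
select t.start_time, t.xidusn||'.'||t.xidslot||'.'||t.xidsqn xid, s.sid, s.status
from v$transaction t, v$session s
where s.saddr = t.ses_addr
and s.status = 'ACTIVE';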

Please allow me to share a test case so that you don’t get stumped like I did.

Identify current log and update table

ARROW:(SOE@san):PRIMARY> select max(sequence#)+1 from v$log_history;

MAX(SEQUENCE#)+1
----------------
             196

ARROW:(SOE@san):PRIMARY> update INVENTORIES set QUANTITY_ON_HAND=QUANTITY_ON_HAND-10 where PRODUCT_ID=171 and WAREHOUSE_ID=560;

1 row updated.

ARROW:(SOE@san):PRIMARY>

From GoldenGate, find transactions that have been open for at least 10 minutes

$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.21 18343248 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_140404.1029_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Apr  4 2014 15:18:36

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.



GGSCI (arrow.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     ESAN        00:00:00      00:00:05
EXTRACT     STOPPED     PSAN_LAS    00:00:00      68:02:14
REPLICAT    STOPPED     RLAS_SAN    00:00:00      68:02:12


GGSCI (arrow.localdomain) 2> send esan, status

Sending STATUS request to EXTRACT ESAN ...


EXTRACT ESAN (PID 2556)
  Current status: Recovery complete: At EOF

  Current read position:
  Redo thread #: 1
  Sequence #: 196
  RBA: 5861376
  Timestamp: 2014-07-21 10:52:59.000000
  SCN: 0.1653210
  Current write position:
  Sequence #: 7
  RBA: 1130
  Timestamp: 2014-07-21 10:52:52.621948
  Extract Trail: /u01/app/ggs01/dirdat/ss



GGSCI (arrow.localdomain) 3> send esan, showtrans duration 10m

Sending showtrans request to EXTRACT ESAN ...


Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 196, RBA 4955152

------------------------------------------------------------
XID:                  3.29.673
Items:                1
Extract:              ESAN
Redo Thread:          1
Start Time:           2014-07-21:10:41:41
SCN:                  0.1652053 (1652053)
Redo Seq:             196
Redo RBA:             4955152
Status:               Running


GGSCI (arrow.localdomain) 4>

Note that Redo Seq: 196 matches the sequence that was current when the update was performed from the Oracle database.
Also, note the XID: 3.29.673.

Let’s find the transaction from the database and notice that the XID matches between GoldenGate and the Oracle database.

ARROW:(SYS@san):PRIMARY> @trans.sql

START_TIME           XID              STATUS          SID    SERIAL# USERNAME           STATUS   SCHEMANAME         SQLID              CHILD
-------------------- ---------------- -------- ---------- ---------- ------------------ -------- ------------------ ------------- ----------
07/21/14 10:41:39    3.29.673         INACTIVE        105          9 SOE                INACTIVE SOE                6cmmk52wfnr7r          0

ARROW:(SYS@san):PRIMARY> @xplan.sql
Enter value for sqlid: 6cmmk52wfnr7r
Enter value for child: 0
SQL_ID  6cmmk52wfnr7r, child number 0
-------------------------------------
update INVENTORIES set QUANTITY_ON_HAND=QUANTITY_ON_HAND-10 where
PRODUCT_ID=171 and WAREHOUSE_ID=560

Plan hash value: 2141863993

-----------------------------------------------------------------------------------
| Id  | Operation          | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |              |       |       |     3 (100)|          |
|   1 |  UPDATE            | INVENTORIES  |       |       |            |          |
|*  2 |   INDEX UNIQUE SCAN| INVENTORY_PK |     1 |    14 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PRODUCT_ID"=171 AND "WAREHOUSE_ID"=560)


20 rows selected.

ARROW:(SYS@san):PRIMARY>

For fun, switch logfiles and perform another update.

ARROW:(MDINH@san):PRIMARY> select max(sequence#)+1 from v$log_history;

MAX(SEQUENCE#)+1
----------------
             196

ARROW:(MDINH@san):PRIMARY> alter system switch logfile;

System altered.

ARROW:(MDINH@san):PRIMARY> /

System altered.

ARROW:(MDINH@san):PRIMARY> /

System altered.

ARROW:(MDINH@san):PRIMARY> /

System altered.

ARROW:(MDINH@san):PRIMARY> select max(sequence#)+1 from v$log_history;

MAX(SEQUENCE#)+1
----------------
             200

ARROW:(MDINH@san):PRIMARY> update SOE.INVENTORIES set QUANTITY_ON_HAND=QUANTITY_ON_HAND-10 where PRODUCT_ID=170;

883 rows updated.

ARROW:(MDINH@san):PRIMARY>

Check GoldenGate transactions to find 2 open transactions, one from Redo Seq: 196 and one from Redo Seq: 200

GGSCI (arrow.localdomain) 1> send esan, showtrans

Sending SHOWTRANS request to EXTRACT ESAN ...


Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 196, RBA 4955152

------------------------------------------------------------
XID:                  3.29.673
Items:                1
Extract:              ESAN
Redo Thread:          1
Start Time:           2014-07-21:10:41:41
SCN:                  0.1652053 (1652053)
Redo Seq:             196
Redo RBA:             4955152
Status:               Running


------------------------------------------------------------
XID:                  4.20.516
Items:                883
Extract:              ESAN
Redo Thread:          1
Start Time:           2014-07-21:11:03:20
SCN:                  0.1654314 (1654314)
Redo Seq:             200
Redo RBA:             5136
Status:               Running


GGSCI (arrow.localdomain) 2>

Let’s kill the session that owns the SOE user’s transaction.

ARROW:(SYS@san):PRIMARY> @trans.sql

START_TIME           XID              STATUS          SID    SERIAL# USERNAME           STATUS   SCHEMANAME         SQLID              CHILD
-------------------- ---------------- -------- ---------- ---------- ------------------ -------- ------------------ ------------- ----------
07/21/14 10:41:39    3.29.673         INACTIVE        105          9 SOE                INACTIVE SOE                6cmmk52wfnr7r          0
07/21/14 11:03:19    4.20.516         INACTIVE         18         53 MDINH              INACTIVE MDINH              a5qywm8993bqg          0

ARROW:(SYS@san):PRIMARY> @xplan.sql
Enter value for sqlid: a5qywm8993bqg
Enter value for child: 0
SQL_ID  a5qywm8993bqg, child number 0
-------------------------------------
update SOE.INVENTORIES set QUANTITY_ON_HAND=QUANTITY_ON_HAND-10 where
PRODUCT_ID=170

Plan hash value: 1060265186

------------------------------------------------------------------------------------
| Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT  |                |       |       |    28 (100)|          |
|   1 |  UPDATE           | INVENTORIES    |       |       |            |          |
|*  2 |   INDEX RANGE SCAN| INV_PRODUCT_IX |   900 | 12600 |     4   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PRODUCT_ID"=170)


20 rows selected.

ARROW:(SYS@san):PRIMARY> alter system kill session '105,9' immediate;

System altered.

ARROW:(SYS@san):PRIMARY> @trans.sql

START_TIME           XID              STATUS          SID    SERIAL# USERNAME           STATUS   SCHEMANAME         SQLID              CHILD
-------------------- ---------------- -------- ---------- ---------- ------------------ -------- ------------------ ------------- ----------
07/21/14 11:03:19    4.20.516         INACTIVE         18         53 MDINH              INACTIVE MDINH              a5qywm8993bqg          0

ARROW:(SYS@san):PRIMARY>

Verify that the transaction from the killed session is removed from GoldenGate.

GGSCI (arrow.localdomain) 1> send esan, status

Sending STATUS request to EXTRACT ESAN ...


EXTRACT ESAN (PID 2556)
  Current status: Recovery complete: At EOF

  Current read position:
  Redo thread #: 1
  Sequence #: 200
  RBA: 464896
  Timestamp: 2014-07-21 11:06:40.000000
  SCN: 0.1654584
  Current write position:
  Sequence #: 7
  RBA: 1130
  Timestamp: 2014-07-21 11:06:37.435383
  Extract Trail: /u01/app/ggs01/dirdat/ss



GGSCI (arrow.localdomain) 2> send esan, showtrans

Sending SHOWTRANS request to EXTRACT ESAN ...


Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 200, RBA 5136

------------------------------------------------------------
XID:                  4.20.516
Items:                883
Extract:              ESAN
Redo Thread:          1
Start Time:           2014-07-21:11:03:20
SCN:                  0.1654314 (1654314)
Redo Seq:             200
Redo RBA:             5136
Status:               Running


GGSCI (arrow.localdomain) 3>

-- trans.sql
set lines 200 pages 1000
col xid for a16
col username for a18
col schemaname for a18
col osuser for a12
select t.start_time, t.xidusn||'.'||t.xidslot||'.'||t.xidsqn xid, s.status,
s.sid,s.serial#,s.username,s.status,s.schemaname,
decode(s.sql_id,null,s.prev_sql_id) sqlid, decode(s.sql_child_number,null,s.prev_child_number) child
from v$transaction t, v$session s
where s.saddr = t.ses_addr
order by t.start_time
;
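
The xplan.sql script used above isn't shown; a minimal sketch matching its prompts might look like this (an assumption, not the author's actual script):

-- xplan.sql (sketch): display the plan for a given sql_id and child number
set lines 200 pages 1000
select * from table(dbms_xplan.display_cursor('&sqlid', &child));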

 

Categories: DBA Blogs

Kscope14

Galo Balda's Blog - Mon, 2014-07-21 11:11


Photo by Nate Whitehill

It’s been a few weeks since I returned from another awesome Kscope conference and I just realized that I never wrote about it.

For me, it was the first time visiting Seattle and I really liked it, even though I only managed to walk around the downtown area. I had some concerns about how the weather was going to be, but everything worked out very well with clear skies, temperatures in the mid-70s and no rain!

The view from my hotel room.

The Sunday symposiums, the conference sessions and the hands-on labs provided really good content. I particularly enjoyed all the presentations delivered by Jonathan Lewis and Richard Foote.

My friend Amy Caldwell won the contest to have dinner with ODTUG’s President Monty Latiolais and she was kind enough to invite me as her guest. We had a good time talking about the past, present and future of ODTUG and it was enlightening and inspirational to say the least.

My presentation on row pattern matching went well, but the attendance wasn't the best, mostly because I had to present in the last time slot, when people were in party mode and ready to head to the EMP Museum for the big event. Nevertheless, I had attendees like Dominic Delmolino, Kim Berg Hansen, Alex Zaballa, Leighton Nelson, Joel Kallman and Patrick Wolf who had good questions about my topic.

Some comments on Social Media

As I said before, the big event took place at the EMP Museum and I believe everyone had a good time visiting the music and sci-fi exhibits and enjoying the food, drinks and music.

The EMP Museum

Next year, Kscope will take place in Hollywood, Florida. If you’re a Developer, DBA or an Architect working with Oracle products, that’s where you want to be from June 21 – 25. I suggest you register and book your hotel room right away because it’s going to sell out really fast.

Hope to see you there!


Filed under: Kscope Tagged: Kscope
Categories: DBA Blogs

Understanding and using tokens in Oracle #GoldenGate

DBASolved - Mon, 2014-07-21 10:53

Recently, I’ve been doing some work with a client where tokens needed to be used. It came to my attention that the basic usage of tokens is often misunderstood. Let’s see if I can clear this up a bit for those reading.

In Oracle GoldenGate, tokens are a way to capture and store data in the header of the trail file (more info on trail headers here).  Once a token has been defined, captured and stored in the header, it can be retrieved, on the apply side, and used in many ways to customize what information is delivered by Oracle GoldenGate.

Defining a token is pretty simple; however, keep these three points in mind:

  1. You define the token and associated data
  2. The token area in the trail record header permits up to a total of 2,000 bytes (token name, associated data, and length of data)
  3. Use the TOKEN option of the TABLE parameter in Extracts

In order to define a token in an extract, the definition should follow this basic syntax:


 TABLE <schema>.<table>, TOKENS( SRC_CSN_TS = @GETENV('GGHEADER','COMMITTIMESTAMP'));

In the example above, the token will be populated with the commit timestamp of the transaction captured against the table it is defined on. After restarting the extract, the token (SRC_CSN_TS) will be included in the header of the trail records.

Once the trail file is shipped to the target side and read by the replicat, the token is mapped to a column in the target table.


MAP <schema>.<table>, target <schema>.<table>,
COLMAP (
SRC_CSN_TS=@token('SRC_CSN_TS')
); 

Image 1 is a view of a target table where I have mapped the token (SRC_CSN_TS) to a column in order to keep track of the commit timestamps of transactions on the source system.

Image 1:

output_tokens1.png

Tokens are simple to create and use, and they are a powerful feature for mapping data between environments.

Enjoy!!

twitter: @dbasolved

blog: https://dbasolved.com

 


Filed under: Golden Gate
Categories: DBA Blogs