Over the years I've worked very closely with the Oracle Database Access Group and have a lot of respect for them. I'm happy to share that they are hiring C developers.
To apply, go to irecruitment.oracle.com and enter the code IRC2401606 or IRC2403582 (depending where you want to work) in the Keyword search box.
Here is an excerpt from the job posting:
The Database Access group at Oracle is responsible for providing functionally comprehensive, reliable, high performance, secure and highly available access to the Oracle Database from various client drivers, including proprietary, standards-based and open-source drivers. The group works on the high level drivers, the Oracle Call Interface (OCI) layer, the Oracle wire protocol (TTC) and highly scalable server side protocol handlers that together connect an application written in any language securely to the Oracle Database Server to provide full featured access to the Oracle Database.
Some of the listed requirements are:
Strong C programming experience.
I believe it when they also say "Work is non-routine and very complex, involving the application of advanced technical/business skills in area of specialization". Check out the postings for all the details. I look forward to working with you.
Addendum: IRC2401606 is also available.
I saw this blog post about how Cary Millsap learned about Oracle performance tuning, and I thought it was interesting:
It is interesting because I’ve learned so much from Cary Millsap and Jeff Holt’s book and it is cool to see how Cary got to the position he is in as a well-known Oracle performance expert. I guess I’ve always found the performance tuning part of my Oracle DBA job the most interesting so it is intriguing to see someone else who feels the same way about it and how they pursued their interest and made a living doing it.
Cary’s blog post contains good advice to someone who wants to be an Oracle DBA/performance analyst. Read the Concepts manual and other relevant manuals. Read books by the performance experts. Find opportunities to work on real performance problems. Pretty much these are the steps I’ve taken, although I’ve always worked as an employee for a company instead of being a consultant. Still, working for a large company has been a good situation for me because we have a variety of systems and performance is consistently a problem that needs work.
How well do you communicate with your customers? Are you listening? Does your customer hear what you are saying? Every day we interact with our customers, teammates, colleagues and managers. Is your message getting delivered?
1. Actively listen.
It is easy to be on a call, but no one can see you “nod”. Make sure to acknowledge the person by replying with “I see” or paraphrase what they said. Side benefit: you look impressive!
2. Ask targeted questions.
Once you’ve practiced #1, you can now ask intelligent, pointed questions. Consider what your customer said, and more importantly what they didn’t say.
3. Show respect.
Respect your audience. Stop talking, try not to interrupt and focus. Avoid distractions such as typing on your computer or checking email. Be in the moment. When you demonstrate respect for your audience you show them you respect yourself as well as them!
4. Tell the truth.
“The whole truth, and nothing but the truth…” Are you on a call with a very upset customer because the DB crashed, and we were at fault? Escalate the call to your manager, and then listen. If we are at fault, we’ll always admit it. It’s counterproductive to try to shift blame. Even in times of stress, try not to lose sight of that fact that we’re in the solutions business. The customer can handle the truth, and they will respect you more for delivering it.
5. Understand what your customer values.
What systems or DBs are most important to your customer? Are they cost driven? React appropriately.
6. Be candid.
Be straightforward; don't cover things up. State facts and avoid excuses. This, in turn, builds trust and a sense of partnership.
7. Be consistent.
“Say what you mean, and mean what you say.” Nothing could be truer when dealing with customers. Enough said.
8. Dedicate yourself.
Relationships aren’t built overnight. Dedicate yourself to the process, and you will reap the benefits in the long term. Patience is key.
9. Ask for feedback.
How do we get better if we don’t know where we need to improve? Asking for feedback will open the doors of honesty, so be open to the response that you’ve requested.
10. Be Persistent.
Still not sure about the requirements? Seek clarification by rephrasing your questions and paraphrasing what you hear. Before the end of the conversation, be sure that you are clear on next steps. Ask until you know.
11. Build rapport.
Find common ground and let the customer know you can relate. Compliment them and focus on areas of agreement. They have lives outside of the office, so try to get to know them.
12. Smile.
When you smile while you’re talking, your customer can hear it in your voice.
13. You’re an expert — act like one.
Don’t undermine your expertise by asking questions that can be answered internally. Write a note and ask your team.
14. Be flexible.
Be creative in finding solutions. The only limits are the ones we place on ourselves by thinking small. Customers' needs change; we need to change and adapt with them. Don't get stuck in the past.
15. Maintain constant communication.
Don’t restrict yourself to only talking to your customer when there is bad news. Share the good news, too!
16. Be Careful What You Say.
Customers often take things literally. Avoid words such as outage or crash, etc. Continue to be honest, but these words and ones like them trigger panic. Just use different ones.
Effective communication takes work and patience. Commit yourself to improving every day, not just with your customers but with your peers, managers & even your family!
What do you think? Have something to add? Let me know what works for you!
Please note: not all of these ideas were my own, but some are. Still, I must give credit to that class I took called “Encouraging Upward Communication”. Author/teacher unknown.
Over the last couple of months I've been busy working with our documentation people, helping them build a set of documentation aimed at partners, specifically around the topic of technical integration with Oracle Sales Cloud. The aim here is to convey our knowledge and experience of working with partners, and to come up with documentation detailing the various use cases and, specifically, how each can be implemented technically.
However, note that in true Cloud style this is only the first set of docs in a much larger collection of documents which we [Oracle] are planning to release.
Stay tuned and if you have any suggestions for future doc topics let me know!
Team OTN Java Community, Yolande Poirier and Tori Wieldt, are proud to bring you the IoT Developer Challenge. Read on to learn more (from the Java Source blog).
Show the world your embedded Java + Internet of Things (IoT) application for a chance to win a trip to JavaOne 2014! 12 winners will receive a trip to JavaOne 2014, the #1 place to meet world-renowned Java experts. In addition, six students will receive laptops and certification vouchers. Team up and submit the video and code of your project by May 30, 2014. Official rules can be found here.
Need just a little bit of help? We will provide experts along the way -- regular "office hours." Ask questions on the Challenge forums and check the online resources. There may be some source code and solutions you can use for your project.
At previous developer challenges, we've had developers:
- Connect a doorbell to a camera, taking a picture and sending it to a cell phone when someone rings the doorbell.
- Help blind people figure out which recycling container to use ("put it in the blue can" doesn't help!)
- Control a toy monster truck from a phone (Monster Truck As A Service!)
- Connect a heart monitor to Google glass so your doctor or trainer can see your heart rate.
This short video shows them in action:
Entries will be judged based on their implementation, innovation and usefulness:
- Quality: a well-implemented project that uses Oracle Java Embedded with computer boards, devices or IoT Technologies
- Innovation: a new and innovative way of using Java Embedded.
- Usefulness: a project that meets a business need.
Learn more and register for the IoT Developer Challenge at www.java.net/challenge
Oracle Support values your involvement and input, especially on the tools and features that help you use our software or hardware products more effectively.
You can now opt in to the Oracle Support Customer Advisory Program and become part of the feedback process for these Oracle Support tools. Tell us about yourself; let us know that you would like to provide feedback and which areas of focus most interest you.
We may ask you to provide input through various forms of engagement: one-on-one session, small group discussion, etc. By contributing your thoughts and ideas, you have the opportunity to help influence new and existing Oracle Support products—and get a first look at new designs and features before general availability. You may also have the chance to join us in testing.
We look forward to collaborating with you.
In the first part, I explained that Incremental Statistics are designed to allow a partitioned table's Global Statistics to be updated based on a combination of:
1) Statistics gathered by analysing the contents of one or more partitions that have just been loaded or have been updated (and see this blog post for more depth on what 'updated' means!)
2) The statistics of existing partitions which are represented by synopses that are already stored in the SYSAUX tablespace.
Using this combination, we can avoid scanning every partition in the table every time we want to update the Global Stats, which is an expensive operation that is likely to be infeasible every time we load some data into a large table.
For me, the key word here is Incremental. Global Statistic updates are an incremental process, building on previous statistics (represented by the synopses) and only updating the Global Statistics based on the changes introduced by loading new partitions.
Understanding this might clear up another area of confusion I keep coming across. After upgrading their database to 11g, people often want to try out Incremental Global Stats on one of their existing large tables because they've always struggled to keep their Global Stats consistent and up to date. Maybe it's just the sites I work at but I'd say this is the most popular use case. Incrementals for a planned large partitioned table in your new systems might be a sensible idea, but there are a lot more existing systems out there with Global Stats collection problems that people have struggled with for years.
Most people I've spoken to initially had the impression that they simply flick the INCREMENTAL switch and perhaps modify some of the parameters to their existing DBMS_STATS calls so that GRANULARITY is AUTO and they use AUTO sampling sizes. All of which is discussed in the various white papers and blog posts out there.
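For what it's worth, that "switch flick" usually amounts to something like the following minimal sketch (the MYSCHEMA / SALES_FACT names are made up for illustration; your existing DBMS_STATS calls and preferences will differ):

-- enable incremental statistics for the table, then gather with AUTO granularity and AUTO sampling
exec dbms_stats.set_table_prefs(ownname => 'MYSCHEMA', tabname => 'SALES_FACT', pname => 'INCREMENTAL', pvalue => 'TRUE');
exec dbms_stats.gather_table_stats(ownname => 'MYSCHEMA', tabname => 'SALES_FACT', granularity => 'AUTO', estimate_percent => dbms_stats.auto_sample_size);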
Then they get a hell of a surprise when the very first gather runs for ages! How long is ages? I don't know in your particular case, but I've seen this running for hours and hours and hours, and people are crying into their keyboards wondering why something that was supposed to make things run more quickly is so much slower than their usual stats calls.
The best way I've found to explain this phenomenon is to concentrate on the synopses that describe the existing partitions. Where do you think they come from? How are they calculated and populated if you don't ask Oracle to look at the existing data in your enormous table? That's what needs to happen. In order to make future updates to your Global Stats much more efficient, we first need to establish the baseline describing your existing data that Oracle will use as the foundation for the later incremental updates.
Generating the synopses as the baseline for future improvements will be relatively painful for the largest tables (if it wasn't, you probably wouldn't be so interested in Incrementals), but it only has to happen once. You just need to understand that it does have to happen and plan for it as part of your migration.
My personal suggestion is usually to just delete all of the existing stats and start from scratch with modern default parameter values and tidy up any stats-related junk that might be lingering around large, critical tables. Painful but probably worth it!
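A hedged sketch of that clean-slate approach (same made-up names as above; check for locked statistics and any table preferences you want to keep before deleting anything):

-- wipe the old statistics, then regather with the 11g defaults (AUTO granularity and sample size)
exec dbms_stats.delete_table_stats(ownname => 'MYSCHEMA', tabname => 'SALES_FACT');
exec dbms_stats.gather_table_stats(ownname => 'MYSCHEMA', tabname => 'SALES_FACT');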
The CEO's Perfect Storm: Demographics, Data and Devices Change Everything
Read Mark Hurd's views on the drastic changes we will see in the workplace in the next 5 to 10 years.
The Oracle 12c client does not have an ocijdbc11.dll, so when you try to log in with your 12c Oracle client (OCI thick driver), you may see:
no ocijdbc11 in path
Don’t worry, this boils down to the jdbc driver ojdbc6.jar. Just overwrite your sqldeveloper/jdbc/lib/ojdbc6.jar with the one from your 12c client.
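A sketch of that swap, assuming you run it from the directory containing your SQL Developer install and that ORACLE_HOME points at the 12c client (both paths are assumptions; adjust for your own layout):

# keep a copy of the original driver, then replace it with the 12c client's ojdbc6.jar
cp sqldeveloper/jdbc/lib/ojdbc6.jar sqldeveloper/jdbc/lib/ojdbc6.jar.bak
cp $ORACLE_HOME/jdbc/lib/ojdbc6.jar sqldeveloper/jdbc/lib/ojdbc6.jar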
The same applies to the 220.127.116.11 client, except that there you will get a core dump at oracle.jdbc.driver.T2CConnection.t2cCreateState instead of a proper error message.
I have posted a thread at https://community.oracle.com/message/12293761
Sitting in an airport, waiting for a plane, I decided to read a note (pdf) about Flashback data archive written by Beat Ramseier from Trivadis. I'd got about three quarters of the way through it when I paused for thought and figured out that on the typical database implementation something nasty is going to happen after approximately 3 years and 9 months. Can you guess why?
It’s all about smon_scn_time – which normally records one row every five minutes (created by smon) with a continuous cycle of 24 hours – typically giving you about 1,440 rows in the table. The table is in a cluster, and the cluster key is the instance (thread) number. Clearly this was originally a clever idea from someone who realised that a cluster key of thread number would be beneficial if you had a RAC system with multiple instances – each instance gets its own blocks and the data for any one instance is as well clustered as possible.
The trouble is, when you enable flashback data archive, smon no longer sticks to a 24 hour cycle; it just keeps adding rows. Now on my 8KB block tablespace I see 6 rows per block in the table/cluster – which means I get through 48 blocks per day, 17,520 blocks per year, and in 3 years and 9 months I’ll get to roughly 65,700 blocks – and that’s the problem. An index entry in a cluster index points to a chain of cluster blocks, and the last two bytes of the “rowid” in the index entry identify which block within the chain the cluster key scan should start at – and two bytes means you can only have 65,536 blocks for a single cluster key.
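If you want a rough idea of how close an existing system already is to that limit, one sanity check (a sketch, assuming you can query SYS.SMON_SCN_TIME directly, e.g. as a suitably privileged DBA) is to count the rows and the distinct blocks they currently occupy, and compare the block count with the 65,536 ceiling:

select count(*)                                             as total_rows,
       count(distinct dbms_rowid.rowid_block_number(rowid)) as distinct_blocks
from   sys.smon_scn_time;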
I don’t know what’s going to happen when smon tries to insert a row into the 65,535th (-ish) block for the current thread – but it ought to raise an Oracle error, and then you’ll probably have to take emergency action to make sure that the flashback mechanisms carry on running.
Although oraus.msg indicates that it's an error message about hash clusters, it's possible that the first sign will be Oracle error "ORA-02475: maximum cluster chain block count of %s has been exceeded". If you're using a 16KB block size then you've got roughly 7.5 years, and a 32KB block size gives you about 15 years (not that that's a good argument for selecting larger block sizes, of course).

Footnote:
Searching MoS for related topics (smon_scn_time flashback) I found doc ID: 1100993.1 from which we could possibly infer that the 1,440 rows was a fixed limit in 10g but that the number of rows allowed in smon_scn_time could increase in 11g if you enable automatic undo management. I also found a couple of bugs relating to index or cluster corruption – fixed by 18.104.22.168, though.
What’s the most elaborate thing you have done with DataPump?
So there I was, given the requirement to export multiple partitions for multiple tables, where each partition has its own dump file with the format “tablename_partitionname.dmp”, pondering how this could be done efficiently.
With the following metadata and requirements, what approach would you take?
If you are curious about the approach I used, then read on.
TABLE_OWNER                    TABLE_NAME                     PARTITION_NAME
------------------------------ ------------------------------ ------------------------------
MDINH                          A_TAB                          P001
MDINH                          A_TAB                          P002
MDINH                          A_TAB                          P003
MDINH                          A_TAB                          P004
MDINH                          A_TAB                          P005
MDINH                          B_TAB                          P001
MDINH                          B_TAB                          P002
MDINH                          B_TAB                          P003
MDINH                          B_TAB                          P004
MDINH                          B_TAB                          P005

Here’s the demo:
$ nohup sqlplus "/ as sysdba" @exp_api.sql > exp_api.log 2>&1 &
$ cat exp_api.log
nohup: ignoring input

SQL*Plus: Release 22.214.171.124.0 Production on Wed Feb 26 20:28:07 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 126.96.36.199.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ARROW:(SYS@db01):PRIMARY> -- DataPump Export (EXPDP) Fails With Errors ORA-39001 ORA-39000 ORA-31641 ORA-27054 ORA-27037 When The Dump File Is On NFS Mount Point (Doc ID 1518979.1)
ARROW:(SYS@db01):PRIMARY> -- Work around for the above mentioned error
ARROW:(SYS@db01):PRIMARY> alter system set events '10298 trace name context forever, level 32';

System altered.

Elapsed: 00:00:00.00
ARROW:(SYS@db01):PRIMARY> declare
  2    h1 number;
  3    dir_name varchar2(30);
  4  begin
  5    dir_name := 'DPDIR';
  6    for x in (
  7      select table_owner, table_name, partition_name
  8      from dba_tab_partitions
  9      where table_owner = 'MDINH' and table_name in ('A_TAB','B_TAB') and regexp_like(partition_name,'[0-4]$')
 10      order by table_owner, table_name, partition_position
 11    ) loop
 12
 13      h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'TABLE');
 14
 15      dbms_datapump.add_file (
 16        handle    => h1,
 17        filename  => x.table_name||'_'||x.partition_name||'.dmp',
 18        reusefile => 1,
 19        directory => dir_name,
 20        filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
 21
 22      dbms_datapump.add_file (
 23        handle    => h1,
 24        filename  => 'exp_'||x.table_name||'_'||x.partition_name||'.log',
 25        directory => dir_name,
 26        filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
 27
 28      dbms_datapump.set_parameter (
 29        handle => h1,
 30        name   => 'INCLUDE_METADATA',
 31        value  => 0);
 32
 33      dbms_datapump.metadata_filter (
 34        handle => h1,
 35        name   => 'SCHEMA_EXPR',
 36        value  => 'IN ('''||x.table_owner||''')');
 37
 38      dbms_datapump.metadata_filter (
 39        handle => h1,
 40        name   => 'NAME_EXPR',
 41        value  => 'IN ('''||x.table_name||''')');
 42
 43      dbms_datapump.data_filter (
 44        handle      => h1,
 45        name        => 'PARTITION_LIST',
 46        value       => x.partition_name,
 47        table_name  => x.table_name,
 48        schema_name => x.table_owner);
 49
 50      dbms_datapump.start_job (handle => h1);
 51      dbms_datapump.detach (handle => h1);
 52    end loop;
 53  end;
 54  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:10.92
ARROW:(SYS@db01):PRIMARY> alter system set events '10298 trace name context off';

System altered.

Elapsed: 00:00:00.00
ARROW:(SYS@db01):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 188.8.131.52.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Review export log:
$ ls -l exp*.log
-rw-r--r--. 1 oracle oinstall 2888 Feb 26 20:28 exp_api.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P001.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P002.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P003.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_A_TAB_P004.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P001.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P002.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P003.log
-rw-r--r--. 1 oracle oinstall  578 Feb 26 20:28 exp_B_TAB_P004.log

Review export dump:
$ ls -l *.dmp
-rw-r-----. 1 oracle oinstall   90112 Feb 26 20:28 A_TAB_P001.dmp
-rw-r-----. 1 oracle oinstall   98304 Feb 26 20:28 A_TAB_P002.dmp
-rw-r-----. 1 oracle oinstall  188416 Feb 26 20:28 A_TAB_P003.dmp
-rw-r-----. 1 oracle oinstall 1069056 Feb 26 20:28 A_TAB_P004.dmp
-rw-r-----. 1 oracle oinstall   90112 Feb 26 20:28 B_TAB_P001.dmp
-rw-r-----. 1 oracle oinstall   98304 Feb 26 20:28 B_TAB_P002.dmp
-rw-r-----. 1 oracle oinstall  188416 Feb 26 20:28 B_TAB_P003.dmp
-rw-r-----. 1 oracle oinstall 1069056 Feb 26 20:28 B_TAB_P004.dmp

Review job status:
$ grep "successfully completed" exp*.log exp_api.log:PL/SQL procedure successfully completed. exp_A_TAB_P001.log:Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Wed Feb 26 20:28:09 2014 elapsed 0 00:00:01 exp_A_TAB_P002.log:Job "SYS"."SYS_EXPORT_TABLE_03" successfully completed at Wed Feb 26 20:28:10 2014 elapsed 0 00:00:02 exp_A_TAB_P003.log:Job "SYS"."SYS_EXPORT_TABLE_04" successfully completed at Wed Feb 26 20:28:11 2014 elapsed 0 00:00:02 exp_A_TAB_P004.log:Job "SYS"."SYS_EXPORT_TABLE_05" successfully completed at Wed Feb 26 20:28:13 2014 elapsed 0 00:00:02 exp_B_TAB_P001.log:Job "SYS"."SYS_EXPORT_TABLE_06" successfully completed at Wed Feb 26 20:28:14 2014 elapsed 0 00:00:02 exp_B_TAB_P002.log:Job "SYS"."SYS_EXPORT_TABLE_07" successfully completed at Wed Feb 26 20:28:16 2014 elapsed 0 00:00:02 exp_B_TAB_P003.log:Job "SYS"."SYS_EXPORT_TABLE_08" successfully completed at Wed Feb 26 20:28:17 2014 elapsed 0 00:00:03 exp_B_TAB_P004.log:Job "SYS"."SYS_EXPORT_TABLE_09" successfully completed at Wed Feb 26 20:28:19 2014 elapsed 0 00:00:02Review exported partition:
$ grep "exported" exp*.log exp_A_TAB_P001.log:. . exported "MDINH"."A_TAB":"P001" 6.351 KB 9 rows exp_A_TAB_P002.log:. . exported "MDINH"."A_TAB":"P002" 14.89 KB 90 rows exp_A_TAB_P003.log:. . exported "MDINH"."A_TAB":"P003" 101.1 KB 900 rows exp_A_TAB_P004.log:. . exported "MDINH"."A_TAB":"P004" 963.3 KB 9000 rows exp_B_TAB_P001.log:. . exported "MDINH"."B_TAB":"P001" 6.351 KB 9 rows exp_B_TAB_P002.log:. . exported "MDINH"."B_TAB":"P002" 14.89 KB 90 rows exp_B_TAB_P003.log:. . exported "MDINH"."B_TAB":"P003" 101.1 KB 900 rows exp_B_TAB_P004.log:. . exported "MDINH"."B_TAB":"P004" 963.3 KB 9000 rowsExample of completed log:
$ cat exp_B_TAB_P001.log
Starting "SYS"."SYS_EXPORT_TABLE_06":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 8 MB
. . exported "MDINH"."B_TAB":"P001"  6.351 KB       9 rows
Master table "SYS"."SYS_EXPORT_TABLE_06" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_06 is:
  /tmp/B_TAB_P001.dmp
Job "SYS"."SYS_EXPORT_TABLE_06" successfully completed at Wed Feb 26 20:28:14 2014 elapsed 0 00:00:02

SQL Scripts:
set timing on echo on

-- DataPump Export (EXPDP) Fails With Errors ORA-39001 ORA-39000 ORA-31641 ORA-27054 ORA-27037 When The Dump File Is On NFS Mount Point (Doc ID 1518979.1)
-- Work around for the above mentioned error
alter system set events '10298 trace name context forever, level 32';

declare
  h1 number;
  dir_name varchar2(30);
begin
  dir_name := 'DPDIR';
  for x in (
    select table_owner, table_name, partition_name
    from dba_tab_partitions
    where table_owner = 'MDINH' and table_name in ('A_TAB','B_TAB') and regexp_like(partition_name,'[0-4]$')
    order by table_owner, table_name, partition_position
  ) loop

    h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'TABLE');

    dbms_datapump.add_file (
      handle    => h1,
      filename  => x.table_name||'_'||x.partition_name||'.dmp',
      reusefile => 1, -- REUSE_DUMPFILES=Y
      directory => dir_name,
      filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

    dbms_datapump.add_file (
      handle    => h1,
      filename  => 'exp_'||x.table_name||'_'||x.partition_name||'.log',
      directory => dir_name,
      filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

    -- CONTENT = DATA_ONLY
    dbms_datapump.set_parameter (
      handle => h1,
      name   => 'INCLUDE_METADATA',
      value  => 0);

    dbms_datapump.metadata_filter (
      handle => h1,
      name   => 'SCHEMA_EXPR',
      value  => 'IN ('''||x.table_owner||''')');

    dbms_datapump.metadata_filter (
      handle => h1,
      name   => 'NAME_EXPR',
      value  => 'IN ('''||x.table_name||''')');

    dbms_datapump.data_filter (
      handle      => h1,
      name        => 'PARTITION_LIST',
      value       => x.partition_name,
      table_name  => x.table_name,
      schema_name => x.table_owner);

    dbms_datapump.start_job (handle => h1);
    dbms_datapump.detach (handle => h1);
  end loop;
end;
/

alter system set events '10298 trace name context off';
exit

Reference: DBMS_DATAPUMP
Are you curious about TempDB? Allow me to help make your understanding of the subject permanent. TempDB is an interesting system database that doesn’t behave quite like the others. It’s kind of like a quirky friend that you’ve come to rely on when you need a hand with getting something done.

Who is TempDB for?
TempDB is accessible to all users that are connected to an instance of SQL Server.

What is TempDB?
TempDB is a system database that’s used to store temporary objects. It utilizes minimal logging, meaning it only stores the information required to recover a transaction – this means no point-in-time recovery. You can’t perform a backup or restore on TempDB, and each time SQL Server is restarted, TempDB gets re-created with the last configured settings.

How does TempDB get used?
TempDB stores a number of different temporary objects:
- User-created temporary objects: Such as global (prefixed by ##) or local (prefixed by #) temporary tables, temporary stored procedures and cursors (see the short example after this list).
- Internal temporary objects: Like work tables for intermediate results (any sorting, such as GROUP BY or ORDER BY)
- Version stores: Row versions for data-modification transactions when certain isolation levels are in use.
- …And certain features like online index operations
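As a quick illustration of the user-created kind mentioned above (the table and column names here are made up):

-- local temporary table: visible only to the session (and scope) that created it
CREATE TABLE #OrderStaging (OrderID INT, Amount DECIMAL(10,2));
-- global temporary table: visible to all sessions until the creating session disconnects
-- and no other session is still referencing it
CREATE TABLE ##OrderStagingShared (OrderID INT, Amount DECIMAL(10,2));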
Think about it like this: I’m 5’7” and 150 pounds. Would you rather have me or Dwayne “The Rock” Johnson helping you move your furniture? If it’s just a chair or two, I’m happy to help, but if you want me to move a sofa and don’t plan on doing any heavy-lifting yourself, you’ll probably want to get a bigger guy.
Just like when you’re moving furniture, when using TempDB, the size configurations are very important.

Common Issues
TempDB typically has three common issues that a DBA can run into: I/O bottlenecks, low disk space, and page contention. These issues and their resolutions are often interrelated.
If you really want my help, and I see that I’m not going to be big enough to move your sofa, I’ll try my hardest to bulk up so that I can perform the task. TempDB feels the same way, and will automatically grow in size (the default autogrowth setting is by 10% of its current size) in order to accomplish whatever job it set out to do. It’s great that TempDB is so willing to help, but in some cases, it can become so concerned with getting bigger that it causes I/O pressure on the disk, and performance can suffer. In even worse scenarios, TempDB might decide that it needs to be so huge that it consumes the whole disk. How am I going to help you move your furniture if my muscles can’t even fit through the doorframe anymore?!
Since TempDB is so handy, everyone wants a piece of it, and TempDB will put a lot of work on its plate, trying to satisfy everyone’s demands. This can cause page contention, which we witness in the form of the PAGELATCH wait type (Note: This is not PAGEIOLATCH) denoting that a page is protected and already in memory. Queries needing TempDB’s help will have to wait for their turn.

Solving Common Issues
So, how do we keep TempDB from feeling over-worked and checking into the Smack-Down Hotel? The answer lies in capacity planning.
The basic concepts of capacity planning for TempDB include:
- Set autogrowth to a pre-set amount: If TempDB gets to be the size of The Rock we don’t want it to automatically grow by a percentage of its current size each time. Set the number to a reasonable size to avoid the continuous need for growth, but keep it low enough to avoid wasted space.
- Capture and replay activity using a trace or run individual queries: Monitor TempDB’s growth as it is used.
- The tricky part: Estimate the work load (while accounting for concurrent user-activity)
You can use SYS.DM_DB_SESSION_SPACE_USAGE and SYS.DM_DB_TASK_SPACE_USAGE to identify queries that are consuming TempDB (see the sketch after this list).
- Configure the appropriate number of TempDB DATA files: If you thought having one Dwayne “The Rock” Johnson helping you move furniture was cool, imagine having two of him! Increasing the number of TempDB data files will allocate work in a round-robin form. This will relieve some of the pressure on TempDB. It’s like having one Dwayne “The Rock” Johnson Feng Shui your living room while another does the same to your kitchen. It is best practice to set the number of TempDB data files equal to the number of logical CPU cores. For example, if you have a dual-core processor, then set the number of TempDB data files equal to two. If you have more than 8 cores, start with 8 files and add four at a time as needed.
- All TempDB files are created equal: This isn’t George Orwell’s Animal Farm. Ensure that the size and growth settings for all TempDB data files are configured in the same manner.
- Disk Placement: If possible, spread TempDB data files across different disks to reduce I/O contention. Putting TempDB data files on a different disk than the user data files will further reduce I/O contention. Be sure to use fast disks whenever possible.
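Pulling a couple of these ideas together, here is a hedged T-SQL sketch. The logical file name tempdev and the sizes are assumptions based on a default install (check sys.master_files for your actual file names), and the DMV query is just one way of seeing who is consuming TempDB:

-- grow by a fixed amount instead of the default 10% (sizes are illustrative only)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

-- sessions currently holding the most TempDB pages (user objects + internal objects)
SELECT TOP (10)
       session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM   sys.dm_db_session_space_usage
ORDER BY user_objects_alloc_page_count + internal_objects_alloc_page_count DESC;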
Keep in mind that TempDB configurations are largely environment-specific. You might not actually need to have the same number of TempDB files as your logical CPU cores. In fact, having too many TempDB data files can cause performance problems by slowing down the round-robin style allocation. Use your judgement and knowledge of your environment to determine what’s right for your system. If it’s configured properly, then you and your TempDB can be tag-team wrestling (or moving furniture) with queries like heavyweight champions.
For some more information, check out the following links:
Self-Service Websites have become an integral element of a Web user's interaction with organizations they do business with. Whether they are customers, partners, or employees - these users expect to interact with the organization from the Web anytime, anywhere, and from any device.
And while the expectations are high, building and maintaining this type of rich experience for the business user has not always been without complexity. IT groups are finding it difficult and expensive to build new or even maintain their current self-service sites. In many cases, their existing self-service sites are built on old and outdated technologies, which can make it challenging and costly for an organization to add new capabilities, as well as increase their risk by trying to adapt older environments to the needs of the modern day user.
Oracle has proven to be a trusted partner for organizations that are looking to build new or replace existing self-service Websites. Oracle's approach to delivering a comprehensive and pre-integrated collection of enterprise products and tools reduces the overall risk and total cost of the project.
We invite you to learn how Oracle WebCenter customers are leveraging Oracle WebCenter Portal to use, build and manage self-service Websites and deliver intuitive digital experiences for their users.
For about a week I have been getting an “ORA-27054: NFS file system not mounted with correct options” error when running an RMAN backup to an NFS mount point. The system administrator has not been successful in identifying the cause, as it shows that the correct mount point options are being used. Metalink Doc.
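For what it's worth, the DataPump demo earlier in this collection worked around its own ORA-27054-on-NFS variant by setting event 10298 (per the MOS note cited there, Doc ID 1518979.1). Whether the same event is appropriate for an RMAN backup is something to confirm against the relevant MOS note and Oracle Support for your exact error stack, but the shape of that workaround was:

alter system set events '10298 trace name context forever, level 32';
-- run the backup to the NFS location, then switch the event off again
alter system set events '10298 trace name context off';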