On Exadata (or when setting cell_offload_plan_display = always on non-Exadata) you may see the storage() predicate in addition to the usual access() and filter() predicates in an execution plan:
SQL> SELECT * FROM dual WHERE dummy = 'X';

D
-
X
Check the plan:
SQL> @x
Display execution plan for last statement for this session from library cache...

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID  dtjs9v7q7zj1g, child number 0
-------------------------------------
SELECT * FROM dual WHERE dummy = 'X'

Plan hash value: 272002086

------------------------------------------------------------------------
| Id  | Operation                 | Name | E-Rows |E-Bytes| Cost (%CPU)|
------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |        |       |     2 (100)|
|*  1 |  TABLE ACCESS STORAGE FULL| DUAL |      1 |      2|     2   (0)|
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - storage("DUMMY"='X')
       filter("DUMMY"='X')
The access() and filter() predicates come from the corresponding ACCESS_PREDICATES and FILTER_PREDICATES columns in V$SQL_PLAN. But there’s no STORAGE_PREDICATES column there!
SQL> @desc v$sql_plan
           Name                            Null?    Type
           ------------------------------- -------- ----------------------------
    1      ADDRESS                                  RAW(4)
    2      HASH_VALUE                               NUMBER
    3      SQL_ID                                   VARCHAR2(13)
   ...
   33      TEMP_SPACE                               NUMBER
   34      ACCESS_PREDICATES                        VARCHAR2(4000)
   35      FILTER_PREDICATES                        VARCHAR2(4000)
   36      PROJECTION                               VARCHAR2(4000)
   ...
   40      OTHER_XML                                CLOB
So where does the storage predicate come from then?
The answer is that there is no storage() predicate column in any V$ view. The storage() predicate actually comes from the ACCESS_PREDICATES column; the DBMS_XPLAN.DISPLAY functions just have extra logic built in: if the execution plan line's OPTIONS column in V$SQL_PLAN contains the string STORAGE, then any access() predicates for that line are displayed as storage() predicates instead!
SQL> SELECT id, access_predicates, filter_predicates
     FROM   v$sql_plan
     WHERE  sql_id = 'dtjs9v7q7zj1g'
     AND    child_number = 0;

        ID ACCESS_PREDICATES    FILTER_PREDICATES
---------- -------------------- --------------------
         0
         1 "DUMMY"='X'          "DUMMY"='X'
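You can emulate that display rule yourself with a simple query against V$SQL_PLAN. This is only a sketch of the logic DBMS_XPLAN appears to apply, not Oracle's actual implementation:

-- sketch: if OPTIONS contains STORAGE, report the access predicate as storage()
SELECT id,
       CASE
         WHEN access_predicates IS NULL THEN NULL
         WHEN options LIKE '%STORAGE%'  THEN 'storage(' || access_predicates || ')'
         ELSE 'access(' || access_predicates || ')'
       END AS access_line,
       CASE
         WHEN filter_predicates IS NOT NULL THEN 'filter(' || filter_predicates || ')'
       END AS filter_line
FROM   v$sql_plan
WHERE  sql_id = 'dtjs9v7q7zj1g'
AND    child_number = 0
ORDER  BY id;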
This actually makes sense, as the filter() predicates are the "dumb brute-force" predicates that are not able to pass any information (about which values they are looking for) into the access path row source they are filtering. In other words, a filter() function fetches all the rows from its row source and throws away everything that doesn't match the filter condition.
The access() predicate, on the other hand, is able to pass the value (or range) it's looking for into its row source. For example, when doing an index unique lookup, the access() predicate can send the value your query is looking for right into the index-traversal code, so you retrieve only the rows you want instead of retrieving everything and throwing the unwanted rows away.
So the access() predicate traditionally showed up for index access paths and hash join row sources, but never for full table scans. Now, with Exadata, even full table scans can work in a smart way (allowing you to pass the values you're looking for down into the storage layer), so some of the full-scanning row sources support the access() predicate too – with the catch that if the OPTIONS column in V$SQL_PLAN contains "STORAGE", the access() predicates are shown as storage().
Note that the SQL Monitor reports (to my knowledge) still don't support this display logic, so you would see row sources like TABLE ACCESS STORAGE FULL with filter() and access() predicates on them – the access() on these STORAGE row sources really means storage().
Alliances between market competitors and scientists have led some organizations to implement data-sharing tools. Whereas some executives believe that digital information should remain confidential, others are finding that collaborative efforts produce profitable results. Due to the complexity of such an operation, a number of corporations are hiring remote database support companies to connect them with other organizations.
Reuters reported that six major banks, two of which are United States-based companies JPMorgan and Citigroup, recently contracted the services of Swift, a Brussels-based organization that provides financial institutions with an avenue to exchange monetary data. The company's new clients will be able to collect and share regulatory information many banks are required to amass as a part of their due diligence processes.
Senior executives responsible for initiating the deal hope that the database administration service will provide their enterprises with the ability to help one another meet the increasing know-your-customer standards. The report stated that these requirements obligate banks to possess stringent KYC policies to strengthen protection against money laundering and fraud.
Although customer confidentiality remains a concern, Swift assured its clients that each financial institution will retain ownership of its own information and can disallow other banks from viewing the data if they wish.
Preparation for disasters
In addition to bank executives, scientists have pooled their refined data with one another to identify an incredible variety of trends. For example, because climate change has become such a widespread concern, environmental experts have relied on remote database services to allow information sharing. The ability to contribute several different statistical viewpoints on such a pressing matter has helped environmental analysts issue whitepapers detailing possible solutions.
Scientists involved in tectonics are looking to utilize DBA services. According to KGW TV, experts at Central Washington University claim that data collected from global positioning systems designed to monitor earthquakes has helped regions throughout the world anticipate the disasters before they occur.
The article stated that the Pacific Northwest Geodetic Array tracks 500 GPS stations throughout the region in real time. The system received considerable attention after data revealed that the United States west coast is susceptible to sustaining a magnitude 9 earthquake.
"The problem, says CWU seismologist Tim Melbourne, is that GPS networks aren't sharing their data," stated the news source. "He wants to see that changed."
Being able to consolidate such information onto a single platform could possibly save thousands of lives. As Melbourne continues to urge other scientists to share their digital estimations, the professional may seek the advice of database experts to create a network to support the data.
We’ve just published our fourth episode in the e-Literate TV pilot series. This one is about CourseWare. Frequent e-Literate readers will know that this is a topic Phil and I think is important and growing in importance. You’re most likely to have heard of the products from the big publishers—Pearson’s CourseConnect, McGraw Hill’s SmartBooks, Cengage’s MindTap, and Wiley’s WileyPLUS—but there are also a lot of smaller entrants that are worth paying attention to.
For this episode, I got to interview two of my favorite people in ed tech: Soomo Publishing’s David Lindrum and Lumen Learning’s Kim Thanos. Both of these folks run small companies that are doing good and novel work with the courseware product category without having huge technology budgets or whizzy adaptive engines. And they have some really interesting things to say about how the advent of courseware changes the way we have to approach the development of curricular materials.
Another day, another airport lounge – another quick note: one of the changes that appeared in 12c was a tweak to the “broadcast” distribution option of parallel queries. I mentioned this in a footnote to a longer article a couple of months ago; this note simply expands on that brief comment with an example. We’ll start with a simple two-table hash join – which I’ll first construct and demonstrate in 11.2.0.4:
create table t1
as
with generator as (
        select  --+ materialize
                rownum id
        from    all_objects
        where   rownum <= 3000
)
select
        rownum                  n1,
        lpad(rownum,6,'0')      small_vc,
        lpad(rownum,200,'0')    padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1000
;

create table t2
as
with generator as (
        select  --+ materialize
                rownum id
        from    all_objects
        where   rownum <= 3000
)
select
        1 + mod(rownum,10000)                   n1,
        lpad(1 + mod(rownum,10000),6,'0')       small_vc,
        lpad(rownum,500,'0')                    padding
from
        generator       v1,
        generator       v2
where
        rownum <= 20000
;

-- collect stats, no histograms.

select
        /*+
                leading(t1 t2)
                parallel(t1 2)
                parallel(t2 2)
                use_hash(t2)
        */
        t1.padding,
        t2.padding
from    t1, t2
where   t2.n1 = t1.n1
and     t2.small_vc = t1.small_vc
;

-------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name     | Rows  | Bytes | Cost  |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |          |  1000 |   707K|   135 |        |      |            |
|   1 |  PX COORDINATOR           |          |       |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)     | :TQ10001 |  1000 |   707K|   135 |  Q1,01 | P->S | QC (RAND)  |
|*  3 |    HASH JOIN              |          |  1000 |   707K|   135 |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE            |          |  1000 |   207K|     4 |  Q1,01 | PCWP |            |
|   5 |      PX SEND BROADCAST    | :TQ10000 |  1000 |   207K|     4 |  Q1,00 | P->P | BROADCAST  |
|   6 |       PX BLOCK ITERATOR   |          |  1000 |   207K|     4 |  Q1,00 | PCWC |            |
|   7 |        TABLE ACCESS FULL  | T1       |  1000 |   207K|     4 |  Q1,00 | PCWP |            |
|   8 |     PX BLOCK ITERATOR     |          | 20000 |     9M|   131 |  Q1,01 | PCWC |            |
|   9 |      TABLE ACCESS FULL    | T2       | 20000 |     9M|   131 |  Q1,01 | PCWP |            |
-------------------------------------------------------------------------------------------------
In this plan slave set 2 scans table t1 in parallel and broadcasts the result set to slave set 1 (lines 5 – 7). The significance of the broadcast option is that each slave in slave set 2 sends all the rows it has read to every slave in slave set 1. For a fairly large table with a high degree of parallelism this could be a lot of inter-process communication; the total number of rows passing through the PX message pool is “DOP x number of rows filtered from t1” – in this example, with 1,000 rows surviving from t1 and a DOP of 2, that’s 2,000 rows through the message pool.
After a slave in slave set 1 has received the whole of the t1 result set it builds an in-memory hash table and starts scanning rowid ranges (PX BLOCK ITERATOR) from table t2, probing the in-memory hash table to effect the join (lines 3, 4, 8, 9). Since each slave has a copy of the whole result set from t1 it can scan any chunk of t2 and handle the contents locally. Moreover, because slave set 1 isn’t reading its second input from a virtual table it is able to write its output immediately to the virtual table (:TQ10001) that feeds the query coordinator with the result (lines 1, 2) – we don’t have to do a “hash join buffered” operation and buffer the entire second input before starting to execute the join.
So how does 12c change things? With the same starting data and query, here’s the execution plan:
-----------------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Rows  | Bytes | Cost  |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          |  1000 |   707K|   135 |        |      |            |
|   1 |  PX COORDINATOR         |          |       |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)   | :TQ10000 |  1000 |   707K|   135 |  Q1,00 | P->S | QC (RAND)  |
|*  3 |    HASH JOIN            |          |  1000 |   707K|   135 |  Q1,00 | PCWP |            |
|   4 |     TABLE ACCESS FULL   | T1       |  1000 |   207K|     4 |  Q1,00 | PCWP |            |
|   5 |     PX BLOCK ITERATOR   |          | 20000 |     9M|   131 |  Q1,00 | PCWC |            |
|   6 |      TABLE ACCESS FULL  | T2       | 20000 |     9M|   131 |  Q1,00 | PCWP |            |
-----------------------------------------------------------------------------------------------
Notice, in particular, that we only have one virtual table (or table queue :TQ10000) rather than two – and that’s from a parallel query slave set to the query co-ordinator, parallel to serial; the query only uses one set of parallel query slaves. Until you run the query with rowsource execution statistics enabled and look at the output from v$pq_tqstat it’s not going to be immediately obvious what has happened, but we should see that somehow Oracle is no longer broadcasting the first table even though it’s still doing something in parallel with both tables.
The run-time statistics confirm that we’ve only used one set of slaves, and each slave in the slave set has scanned the whole of table t1. This means each slave can build the full hash table and then go on to read rowid ranges from table t2. We’ve managed to get the benefit of broadcasting t1 (every slave has the whole of t1 so we don’t have to scan and distribute the big table t2 through the PX message pool) but we haven’t had to clone it multiple times through the PX message pool.
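As a hedged illustration (not from the original note), this is the sort of query you might run against v$pq_tqstat in the same session immediately after the statement completes. With the 12c plan it should show just the one table queue feeding the query coordinator and no broadcast queue; the fact that each slave scanned the whole of t1 shows up instead in the A-Rows column of the rowsource execution statistics.

-- session-local view: shows rows and bytes per table queue and per slave process
select  dfo_number, tq_id, server_type, process, num_rows, bytes
from    v$pq_tqstat
order by
        dfo_number, tq_id, server_type desc, process;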
Clearly there’s a trade-off here that Oracle Corp. has decided is worth considering. I’m guessing it’s biased towards Exadata, where you might run queries with a very high degree of parallelism. In that case the overhead of task switching as large numbers of messages are passed around may (and this is pure supposition) be greater than the added cost of loading the table into the buffer cache (of each instance) and having each slave scan it from there. (Reminder – 11g introduced two “opposite” changes to tablescans: “serial direct reads” and “parallel in-memory scans”.)
There’s one little oddity in this replication – there’s a pair of hints, pq_replicate and no_pq_replicate, to control the effect if you think the optimizer is making the wrong choice. I would have guessed that in my example the hint would read /*+ pq_replicate(t1) */, as it’s table t1 that is read by every single slave. Strangely, though, this is what the outline section of the execution plan showed:
/*+
    BEGIN_OUTLINE_DATA
    PQ_REPLICATE(@"SEL$1" "T2"@"SEL$1")
    PQ_DISTRIBUTE(@"SEL$1" "T2"@"SEL$1" BROADCAST NONE)
    USE_HASH(@"SEL$1" "T2"@"SEL$1")
    LEADING(@"SEL$1" "T1"@"SEL$1" "T2"@"SEL$1")
    FULL(@"SEL$1" "T2"@"SEL$1")
    FULL(@"SEL$1" "T1"@"SEL$1")
    OUTLINE_LEAF(@"SEL$1")
    ALL_ROWS
    OPT_PARAM('_optimizer_cost_model' 'io')
    DB_VERSION('12.1.0.1')
    OPTIMIZER_FEATURES_ENABLE('12.1.0.1')
    IGNORE_OPTIM_EMBEDDED_HINTS
    END_OUTLINE_DATA
*/
Notice how the hint specifies table t2, not table t1!
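For what it’s worth, here is a hedged sketch (not part of the original note) of how you might apply the negative hint to the test query if you decided the optimizer had made the wrong choice – note that, to match the outline above, it names t2 rather than t1:

select  /*+
                leading(t1 t2) parallel(t1 2) parallel(t2 2)
                use_hash(t2) no_pq_replicate(t2)
        */
        t1.padding, t2.padding
from    t1, t2
where   t2.n1 = t1.n1
and     t2.small_vc = t1.small_vc
;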
Footnote

Here’s a little anomaly, and a generic warning about “optimizer_features_enable”: I found that if I used the hint /*+ optimizer_features_enable('11.2.0.4') */ in 12c I could still get the pq_replicate() hint to work. Unfortunately there are a few places where the hint (or parameter) isn’t guaranteed to take the optimizer code backwards the full 100%.
Which is why there's still a place for the detailed technical blog posts, once you've got the basics clear. The problem with not having the basics clear is that I've had people tell me things about Incremental Stats based on the blog posts and, when I read them, they make sense to me but I can also see why they might confuse others. Anyway, for completeness, here are some terrific posts about Incremental Stats.
Randolf Geist always has excellent stuff, as does John Hallas and, of course, when Maria Colgan was the Optimizer PM, she used to write about this stuff all the time.
Read blogs, by all means, but maybe start off with White Papers and the documentation and full presentations?
Today is the second anniversary of my first post on this blog.
Some time ago I played with a free blog that came with a yahoo email account that my wife and I share but didn’t get very far with that. Finally for this blog I decided to spend a small amount of money to get my own hosting and domain name.
I’m using iPage for the hosting. I’ve been happy with their service. There were a couple of billing issues, but I opened tickets with them and they were quickly resolved in a fair way.
I evaluated a few blog sites and chose WordPress because it allowed me to do what was most important. WordPress’s software enabled me to upload sqlplus scripts and their logs along with PowerPoint, Microsoft Word, and PDF files. WordPress also enabled me to include output from sqlplus and have the columns line up nicely like this:
SQL> select table_name, blocks
  2  from dba_tables
  3  where owner='SYS'
  4  order by table_name;

TABLE_NAME                         BLOCKS
------------------------------ ----------
ACCESS$                               759
ALERT_QT
APPLY$_CHANGE_HANDLERS                  0
APPLY$_CONF_HDLR_COLUMNS                0
APPLY$_CONSTRAINT_COLUMNS               0
APPLY$_DEST_OBJ                         0
APPLY$_DEST_OBJ_CMAP                    0
APPLY$_DEST_OBJ_OPS                     0
APPLY$_ERROR                            0
I’m pretty happy with the results. I’ve had some nice interaction with the Oracle community and I’ve used links to my posts with coworkers on the job. Plus, I actually query the site myself when I forget how to do something I wrote about.
For example, if I forget where the plus sign goes on an outer join, I search for it on my blog!
Overall it has been an enjoyable experience and I think helpful to me and my employer as I try to communicate with others the things I’m learning about Oracle database technology, especially in the performance tuning area.
I have been exploring NoSQL and Hadoop since the buzzwords started to flood in. I have been doing research on Oracle NoSQL, mostly around use cases, and have started to find ways to use it in my day-to-day work, seeing a lot of cases where it can help. I will blog more about those, but before I begin with the use cases I want to show my friends how to set up a simple Oracle NoSQL cluster.
This is only for study purposes. I am using a single physical server and creating four virtual NoSQL stores running on different ports.
Modify the /etc/hosts file with the alias
10.0.0.1 labserver.localdomain.com server1 server2 server3 server4
Download nosql software to /oraclenosql/lab/stage
Here I am using the 11g release; I will upgrade to 12c later.
Create a base nosql folder and copy the staged binaries into it.
mkdir nosql_11g;cd nosql_11g
mkdir oraclesoftware storage
cp /oraclenosql/lab/stage/kv-ee-2.0.39.zip .
Clone the staged binaries into four server folders. This cloning will help me create different case studies quickly, by simply deleting the folders and re-creating them. (In the commands below I am assuming KVHOME points at the unzipped kv-ee directory and KVSTORAGE at the storage folder for each server.)
cp -Rf nosql_11g server1
cp -Rf nosql_11g server2
cp -Rf nosql_11g server3
cp -Rf nosql_11g server4
echo "server 1"
java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVSTORAGE -capacity 1 -harange 5010,5020 -admin 5001 -port 5000 -host server1
nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVSTORAGE &
echo "server 2"
java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVSTORAGE -capacity 1 -harange 5110,5120 -admin 5101 -port 5100 -host server2
nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVSTORAGE &
echo "server 3"
java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVSTORAGE -capacity 1 -harange 5210,5220 -admin 5201 -port 5200 -host server3
nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVSTORAGE &
echo "server 4"
java -jar $KVHOME/lib/kvstore.jar makebootconfig -root $KVSTORAGE -capacity 1 -harange 5310,5320 -admin 5301 -port 5300 -host server4
nohup java -jar $KVHOME/lib/kvstore.jar start -root $KVSTORAGE &
java -jar $KVHOME/lib/kvstore.jar runadmin -port 5000 -host server1
Make sure you get the KV prompt.
Also verify URL http://labserver.localdomain.com:5010
In the upcoming blogs, we will look at the following:
Startup and Shutdown Scripts
Setting up 1x1 topology
Expanding 1x1 to 3x1 topology
Moving from 3x1 to 3x3 topology
From 3x3 to 3x4 topology
Smoke Testing NoSQL
Historically, Websites can be best defined as a content publishing channel. Even today, the large majority of internal and external Websites are static content used for pushing information to their users. However, while self-service is a small percentage of the overall Web properties, it provides the highest value to the users by providing a personalized experience which can not only be tailored to the user's organizational role, but also allows the user to directly access, modify and manage key elements of their business relationship without the need for interacting with anyone else from the organization. Depending on the purpose of the self-service Website, it can help organizations increase their revenue while reducing costs.
Figure 1. Common Use Cases for Self-Service.
The use cases in Figure 1 are broken into 3 top-level categories: External, Partner, and Employee, since the scope of most self-service Websites will target one of these audiences per deployment. The key with any successful self-service Website is to meet the specific needs of the target audience by providing the most relevant and valuable capabilities. The broader the audience, the more difficult it is to deliver a compelling experience. Self-service is the epitome of targeting and personalization, which makes definition of the audience critical to the usefulness and overall success of the site.
Figure 2. Key Business Drivers for Self-Service Portals.
Self-service Websites for external audiences such as customers or citizens, typically empower the user by granting access and/or updating information they previously would have needed to contact someone else in the organization directly for. Growing revenue and reducing processing costs are often attributes of the business case for these scenarios.
Self-service Websites are a powerful way for an organization to cross-sell other products and/or services to an existing customer. In a customer self-service scenario such as a customer support portal or a My Utilities site, not only can customers access their accounts, update their addresses, pay bills and monitor their daily/hourly usage, but also marketing teams can deliver highly targeted campaigns based on account information of the users’ currently owned product/services. For instance, a customer that logs into a customer support portal seeking to find updates for their existing product could be interested in upgrading to the latest generation of that product – a carefully placed personalized ad could result in a significant revenue opportunity to grow profits from the existing customer base. Similarly, this model can help a Utilities company to promote rebates and other services that may benefit both the customer and the organization.
In addition to revenue growth, many external scenarios can also be justified by faster and more accurate processing times, a decrease in call or walk-in traffic, and improvement in overall user satisfaction, all of which can result in significant cost savings for the organization.
Customer Story: Los Angeles Department of Water and Power (LADWP)

LADWP is the largest public utility company in the United States with over 1.6 million residential and commercial customers in Southern California. Searching for opportunities to improve customer service and automate paper based processes, LADWP realized their existing legacy technologies provided very little agility and created a lot of challenges. During customer focus groups, they discovered their customers wanted a Web experience that delivered more direct access to the things they need most, such as billing and service requests. LADWP implemented a self-service portal using Oracle WebCenter Portal to surface various back-end systems such as Oracle Siebel CRM, Legacy Mainframe, and billing applications. Their customers now have access to MyAccount (billing and usage analysis), Financial Assistance Programs, Rebate Programs, service requests, outage reporting, eNotifications and much more. The site has received extremely positive feedback from their customers, has helped to drastically improve LADWP's customer experience ranking vs comparable utilities companies, and driven the usage of online billing.
Business partners are often seen as an extension of an organization, which means the self-service interfaces tend to be extremely rich for access to backend application data. In some cases, the interface may be identical to what an employee within the organization may have access to. It's also common for a partner to use the organization’s hosted site as a co-selling tool for their own customers and simply rebrand the site with their own logo. Partner-facing self-service can be a powerful way for an organization to extend the reach and capability of their organization by enabling partners to do business on their behalf. The key value for this scenario is business reach/growth, process efficiency, and process consistency. The types of self-service tools for partners can be a key differentiator in attracting and maintaining a partner network. Partner Portals enable companies to share information and collaborate between partners, brand owners, and joint customers. Its broad range of data, marketing materials, and selling tools facilitates the collaborative sales process, while its superior service and problem-resolution capabilities help partners deliver timely and consistent customer service.
Customer Story: Land O’Lakes, Inc.

Land O’Lakes is one of America’s premier member-owned cooperatives, offering local cooperatives and agricultural producers across the nation an extensive line of agricultural supplies, as well as state-of-the-art production and business services. Land O’Lakes is using Oracle WebCenter Portal to improve online experiences for partners to handle all aspects of account management and order entry through a consolidated, personalized, and secure user interface. By creating a self-service portal for placing orders for seed, Land O'Lakes ended up with a new engagement platform that drives more efficient processes, and has actually resulted in net new business.
Internal self-service sites offer employees access to their human resource details such as personal records and payroll information; allowing employees to change their own address, contact details, education records, job profile, vacation and sick leave, and many more. Some scenarios allow for employees to change payroll details such as their direct deposit information and provide access to current and historical payslips. These sites have also been known to be used for career planning, skills profiles, learning, objective settings, and annual appraisals. This can be a valuable asset to an organization to retain their talent, while also helping to improve employee satisfaction. Human resource departments see employee portals as a means of empowering employees and reducing process errors and cost. Organizations also view employee self-service as a means of eliminating manual processes and in turn reducing costs associated with support and physical paper processing.
Customer Story: Schneider National, Inc.
Schneider National is a premier provider of truckload, logistics and intermodal services. Serving more than two-thirds of the FORTUNE 500® companies and conducting business in more than 28 countries worldwide, Schneider National offers the broadest portfolio of services in the industry. The company’s transportation and logistics solutions include Van Truckload, Dedicated, Regional, Bulk, Intermodal, Transportation Management, Supply Chain Management, Warehousing and International Logistics services. Schneider National’s previous solution for their Drivers was very static, fairly expensive to maintain, and needed IT involvement to update most content. The business wanted a more dynamic and less IT-centric solution that would allow employees to modify their solutions quickly and easily, allow for significantly improved collaboration with the users, and an integrated solution across various types of content, applications, and collaboration services. Schneider National turned to Oracle WebCenter Portal for their employee self-service portal, allowing drivers to view employee information such as benefits, online training and pay statements, all in a single user interface – whether they are in the office or on the road.
Self-Service Websites have become an integral element of a Web user’s interaction with organizations they do business with. Oracle has proven to be a trusted partner for organizations that are looking to build new or replace existing self-service Websites. Oracle's approach to delivering a comprehensive and pre-integrated collection of enterprise products and tools reduces the overall risk and total cost of the self-service project. We invite you to learn how Oracle WebCenter customers are leveraging Oracle WebCenter Portal to use, build and manage self-service Websites and deliver intuitive digital experiences for their users.
Recently I’ve been combating a high water mark enqueue wait (enq: HW – contention) on a single node within an Exadata system I’m supporting. I first noticed the wait when I was looking at the performance page for the node in Oracle Enterprise Manager 12c. What I noticed was a lot of brown-looking spikes (Image 1). These spikes correspond to Configuration class waits.
When I clicked on Configuration in the legend on the side of the graph, I’m taken to the Active Session Waiting: Configuration (Image 2) page. On this page, I can clearly see that the wait event I’m waiting on is the “enq: HW – contention” wait.
Now that I know what wait event I’m waiting on, I needed to figure out what object was causing the wait. Everyone has a different approach to performance tuning; I find it easier to identify the objects causing the wait by using the Active Session History (ASH) report. ASH reports can be run directly from the OEM Active Session Waiting page (the page with the graphs) or from the command line. Since I like the pretty graphs that OEM provides, I prefer to run ASH from the command line and use the graphs to help identify what direction my tuning is going.

Active Session History (ASH) Report
The Active Session History (ASH) report is handy in helping to identify what object(s) the wait is waiting on. In order to run the ASH Report from the command line, there are a few requirements needed. They are:
- Set ORACLE_HOME
- Move to the directory where you would like to save the report
Once these requirements have been met, the ASH Report can be run from the command line using the ashrpt.sql script.
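As a minimal sketch, the script lives in the standard location shipped with the database, so it can be launched from SQL*Plus like this (the ? expands to ORACLE_HOME):

SQL> @?/rdbms/admin/ashrpt.sql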
When the script starts to run, it will ask for input on what format you would like the report in. For searchability purposes (from the command line), I go with the “Text” option. Next, it asks for a timeframe (Image 3); provide a timeframe in the layout listed.
Lastly, it will ask you what you would like to name the report. I normally prefix it with my initials and an underscore (bc_) (Image 4).
Once I press enter, the report runs and the output is stored in the directory I started SQL*Plus from.

Reading the ASH Report
In reading the ASH Report, I want to identify the primary object that is holding the “enq: HW – contention” wait. In order to do this, I need to look at the “Top Event P1/P2/P3 Values” section of the report (Image 5). Reviewing this section, I see that the Top Event is the “enq: HW – contention”, the same as I saw from the OEM performance pages. Next, I need to find the values of P2 and P3.
The values for P2 and P3 tell me which data file (P2) to look at and which block (P3) is holding the wait. The P2 and P3 values for the wait are currently “85” and “16018”.

Finding the Object
Armed with the values for P2 and P3 from the ASH Report, I can now go and locate the object via SQL. The SQL used to identify the object is as follows:
select /*+ parallel(a, 4) */ a.owner, a.segment_type, a.segment_name
from   dba_extents a
where  a.file_id = &P2
and    &P3 between a.block_id and a.block_id + a.blocks - 1;
Note: I’m using a parallel hint; this is to speed up the identification of the block, which can help in VLDBs.
Once the script is done running, it returns the owner, segment_type, and segment_name of the object holding the enq: HW – contention wait (Image 6). In this case, the wait is on a system generated LOB index.
At this point, I have identified the following:
- The wait: enq: HW – contention
- The object holding the wait
Next, I want to resolve the wait as fast as I can. Knowing that the wait is a high watermark (enq: HW – contention) wait event, there needs to be a basic understanding of high watermarks and how extents are allocated. The basic reason the ‘enq: HW – contention’ wait is raised is that additional extents are not being allocated in a timely manner for the data being inserted into the object.
In order to resolve this wait, additional extents need to be allocated. The environment I’m working in uses bigfile data files with automatic segment space management (ASSM). When I initially looked into how to increase the extents I found many different approaches; the fact that this database is using ASSM makes the extent allocation simpler. To allocate another extent for the object identified, use the following:
SQL> alter index <owner>.SYS_IL0000253792C00002$$ allocate extent;
The above alter statement is correct; however, the problem is that I was trying to allocate an extent on a system generated index. The below error message states that this is not allowed:
Error starting at line : 32 in command -
alter index <OWNER>.SYS_IL0000253792C00002$$ allocate extent
Error report -
SQL Error: ORA-22864: cannot ALTER or DROP LOB indexes
22864. 00000 - “cannot ALTER or DROP LOB indexes”
*Cause: An attempt was made to ALTER or DROP a LOB index.
*Action: Do not operate directly on the system-defined LOB index.
Perform operations on the corresponding LOB column.
In order to add extents, I needed to identify the object that is using the system generated LOB index. This information can be found in DBA_LOBS (SQL below).
select owner, table_name, column_name, segment_name, index_name
from   dba_lobs
where  index_name = 'SYS_IL0000253792C00002$$';
The SQL returns the name of the owner, table_name and LOB column within the table that is using the system generated LOB index (Image 7). Using this information, I can now allocate additional extents to the LOB.
In order to allocate an extent on the object identified, the following was used:
alter table <owner>.<table> modify lob(<column>) (allocate extent (size 5M));
Once this completed, I was expecting the wait (enq: HW – contention) to go away. After some time, I saw no improvement. This led me to open a service request (SR) with Oracle Support to try to resolve it.

Research
Opening the SR led me to dig a bit and do some research on extent allocation and LOBs while waiting on Oracle’s answer. There is a known bug (6376915) related to high watermark enqueue contention for ASSM LOB segments (Note ID: 6376915.8). The fix for this bug is included in recent 11g releases, but it needs to be “turned on”; to enable it, an event needs to be set in the spfile.
SQL> ALTER SYSTEM SET EVENT='44951 TRACE NAME CONTEXT FOREVER, LEVEL <1 .. 1024>' SCOPE=SPFILE;
Setting this event to a level between 1 and 1024 defines the number of chunks to be cleaned up each time a reclamation operation is performed, which in turn reduces the number of requests against the high watermark enqueue.
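As a quick hedged check (not part of the original write-up), after the restart you can confirm the event is in effect for the instance:

SQL> select value from v$parameter where name = 'event';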
Once the event has been set in the spfile, the database instance needs to be restarted to make the change effective.

Conclusion
The steps in resolving this enqueue event are not difficult; however, when extents need to be allocated manually it becomes very interesting. The downside to setting the event, per Oracle Support, is the need to reboot the database instance to make the event active.
The OTN Yathra 2014 tour is over and I’m back home now. Here are all the blog posts from the tour.
Although I come from the second biggest city in the UK, Birmingham has a very slow pace in comparison to other UK cities. Friends had told me how busy India was, so I was quite nervous about this trip and how I would cope with it. My initial fears were confirmed during my first taxi ride from Amritsar airport to Jalandhar. Getting ill on the first morning of the tour wasn’t a good omen either. Once the kind folks at the Lovely Professional University sorted me out with some medical attention, things started to get better and I started to believe I might make it to the end of the tour alive.
As the tour progressed I got into my stride and really started to enjoy the whole process. As I’ve said many times, I’m not a fan of travelling, but I like being at different places. The travelling part of this tour was very strenuous, which was my own fault for choosing to do all 7 events, but that was easily outweighed by getting the opportunity to connect with the attendees and speakers in all the cities.
Here come the much deserved thank you messages:
- Thanks to everyone from All India Oracle Users Group (AIOUG) involved in the organisation of this tour. Aman, Sai, Satyendra and Murali did a great job of making the logistics as simple as possible.
- Thank you to Oracle Corporation for the use of their offices and to all the Universities that let us visit them.
- A big thank you goes out to Oracle Technology Network (OTN) and the Oracle ACE Program, without whom this tour would not have happened.
- Thank you to my fellow speakers. I’m not sure I would have stayed sane on my journey from Jalandhar to Noida without the company of Raj Mattamal. Raj, Debra and Hans were welcome distractions from the hours spent in airports, planes and taxis. Thanks everyone for putting up with me. Thanks to all the other speakers that dotted in and out for specific events. It was fun to hook up with you all.
- Finally, thank you to all the attendees for coming and supporting this event! Without you being there, the events would not happen, so if you want to see more of these events in future, please join AIOUG and get your friends to join too.
My lasting memories of India will be:
- Crazy traffic.
- Fantastic food.
- Very friendly people.
Until next time…
Tim…
In response to the No Child Left Behind Act of 2001, educators throughout the United States have implemented ways to ensure that their students are abiding by standards established by the federal government. Some school officials, in conjunction with state departments, have consulted with database administration services to deploy networks possessing a wide variety of student data across entire districts.
One such database has already been deployed. According to Brookfield Patch, the Connecticut State Department of Education recently established its own system for assessing and distinguishing school performance. The network, monitored by a remote DBA support firm, categorizes each institution throughout the state into one of three designations based on the average results of student Connecticut Mastery Tests, Connecticut Academic Performance Tests and other exams:
- "Excelling" is assigned to schools possessing an SPI over 88, a four-year graduation rate over 94 percent and at least 25 percent of students partaking in advanced classes in three of four subject areas.
- "Progressing" means that a school has an SPI of 88, but a 10-point gap or larger exists for underachieving subgroups and the graduation rate is between 90 and 94 percent.
- "Transition" is designated to institutions that have an SPI between 88 and 64. The establishments are missing either the 90 percent four-year graduation standard or a subgroup gap higher than 10 points.
An additional three categories, "review," "focus" and "turnaround," classify schools as possessing substandard performance characteristics and are typically under scrutiny by the CSDE.
Access to standardization
Although a number of U.S. constituents perceive standardization to be detrimental to students, the issue isn't dissipating. According to VentureBeat, Desire2Learn, a learning software provider, recently acquired the Achievement Standards Network, an online database of education standards.
Desire2Learn informed the news source that the organization will undertake all responsibility for operating ASN's public data infrastructure. Whether or not the company will consult with database experts on how to maintain the networks by itself has not been clarified, but it's assumed that the size of the operation will convince them to do so.
Jeremy Auger, co-founder and chief strategy officer of Desire2Learn, claimed that he hopes the transition will give clarity to teachers confounded by the wide array of differing education standards throughout the country. The effort is a part of ASN's desire to digitize learning content and simplify the materials search process.
The deployment of these two databases signifies a revolutionary shift in the U.S. education system. Defining stringent standards and making them available for widespread consumption is sure to solidify the policies initiated by No Child Left Behind and the Common Core.
Certification 1Z0-133 Monitoring WebLogic Server Transactions : WebLogic 12c Administration Certification
Next up in the ‘How I work‘ series is Anton de Weger, my first boss in the PeopleSoft world. Anton ran Congruent UK, the consultancy I worked at for 7 great years and I owe much of what I’ve learnt to him. Congruent was – for it’s no longer in existence – full of great consultants, not only in their technical or functional ability, but in a desire to behave ethically for the client – something that came from Anton at the top.
Not only is Anton a great leader, but he’s brilliant technically too. He possesses a ‘Steve Jobs like’ reality distortion field, where all technical tasks somehow seem easier when he’s around. He’s still the only person I’ve ever seen – when there was a network outage – fire up a blank notepad session and write an SQR from scratch.
Anton has since returned to Australia where he’s currently working on a 9.2 upgrade.
Name: Anton de Weger
Occupation: PeopleSoft Consultant / Project Manager
Location: Melbourne Australia
Current computer: Metabox (Win7, 16GB RAM, SSD, Radeon 6900M) third in a line of luggable powerhouses that can warm my lunch as well as run a PeopleSoft VM or two.
Current mobile devices: iPhone 5s
I work: I have a bit of a love/hate relationship with PeopleSoft these days. I’ve been trying to get away from the application for a while, but it still keeps pulling me back in. I dislike the lack of development and focus on PeopleSoft from Oracle, and I feel this is limiting the potential for new blood and enthusiasm in the market. Also, is it just me or are PeopleSoft implementations getting harder? My personal view is that the off-shoring and out-sourcing of the project teams are counteracting the benefits and efficiency of projects. However, in the end I love PeopleTools as a corporate computing platform and I think its flexibility and structure are underappreciated. I’m driven by solving problems and feeling like I’m making a difference to my customers and if I can link that up with an occasional good technical challenge and a good team then I’m happy.
What apps/software/tools can’t you live without?
Snag-It: There is always a need for screen shots, whether you are doing Technical, Functional or Project Management work.
Podcasts: For the commute into work, either educational or humorous.
Notepad++: moving to this as a general text editor from previous preferences for UltraEdit and TextPad.
Besides your phone and computer, what gadget can’t you live without?
Now that I’m getting a bit older, I’m not as tied to having the latest gadget and I am quite enjoying challenging myself outside the technology field. This Christmas I was disappointed when I was able to get mobile coverage at our remote hobby farm (home to 13 beef cattle and 220 acres of grass and trees), where I like to go to escape the world.
What’s your workspace like?
Wherever the client wants me, which being a consultant is normally in the corner, or wherever they can find space. At home it looks like this.
What do you listen to while you work?
It depends on my mood, but industrial/gothic is high on the list, as is a bit of Electronica to help speed things along.
What PeopleSoft-related productivity apps do you use?
Notepad++, Toad, PeopleBooks, Google Search and 20 years of previous projects, documents and notes pretty much covers me for most eventualities. The Oracle Virtual Machine PeopleSoft installs are also pretty cool, but only rarely used when onsite.
Do you have a 2-line tip that some others might not know?
PeopleSoft Directory allows you to connect to Active Directory, and the PeopleCode behind its Test page can be used to build an Active Directory interface using the Business Interlink.
What SQL/Code do you find yourself writing most often?
Always the effective date/effective sequence joins to JOB. I really need to sort out a macro for that.
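For readers outside PeopleSoft, here is a hedged sketch of that classic pattern using the standard PS_JOB keys (illustrative only, not Anton's own macro):

-- current-row join to PS_JOB: latest effective date, then highest effective sequence
SELECT j.emplid, j.empl_rcd, j.deptid, j.jobcode
FROM   ps_job j
WHERE  j.effdt = (SELECT MAX(j1.effdt)
                  FROM   ps_job j1
                  WHERE  j1.emplid   = j.emplid
                  AND    j1.empl_rcd = j.empl_rcd
                  AND    j1.effdt   <= SYSDATE)
AND    j.effseq = (SELECT MAX(j2.effseq)
                   FROM   ps_job j2
                   WHERE  j2.emplid   = j.emplid
                   AND    j2.empl_rcd = j.empl_rcd
                   AND    j2.effdt    = j.effdt);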
What would be the one item you’d add to PeopleSoft if you could?
A bit better vision from Oracle, so that customers didn’t have to beg to get PeopleSoft sold to them. Preconfigured Global Payroll templates for countries. OK, that’s two items, but the first is just a gripe.
What everyday thing are you better at than anyone else?
At one stage I think I had the largest SQR delivered by PeopleSoft… I knew it was the largest, as to add an extra line of code, you had to remove one from somewhere else. I’d probably say I’m reasonably good at working between tech teams, functional teams and the business.
What’s the best advice you’ve ever received?
I.T. is not about the software, it’s about the people. If projects were about the software, you would press a button and it would be done.
Pythian announced today that it raised $6 million to expand the market reach of Adminiscope, its data security solution, to help retailers eliminate data security breaches. As global corporations continue to collect, store, and monetize massive amounts of sensitive data, they’re increasingly turning to Pythian to secure their infrastructures, especially from internal threats. Investors include the Business Development Bank of Canada (BDC) and Royal Bank of Canada (RBC).
“It’s clear there’s an appetite out in the marketplace for greater data security measures among all organizations, especially retailers who have to protect sensitive customer financial information,” said Paul Vallée, Founder and Executive Chairman at Pythian. “The recent challenges continue to underscore the need for solutions like Adminiscope, and strengthen our resolve to ensure our clients operate with the highest level of security and accountability. Our technology offers organizations and their customers unwavering confidence in their data’s security. With this investment, we will bring our solution to more retailers large and small, and help them increase consumer confidence and protect their brand.”
Adminiscope, a software-as-a-service (SaaS) privileged session management solution, secures access to enterprise systems, cloud infrastructures, databases, servers, and applications, and enables stakeholders to monitor and record in real time all privileged activity performed across an IT infrastructure. Database Trends and Applications magazine recently named Adminiscope to its list of Trend-Setting Products for 2014 in the data management and analysis space.
See the full press release here.
The OLAP Extension is now in SQL Developer 4.0.
See http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/sqldev-releasenotes-v4-1925251.html for the details.
The OLAP functionality is mentioned toward the bottom of the web page. You will still need AWM (Analytic Workspace Manager) to:
- Manage and enable cube and dimension MV's.
- Manage data security.
- Create and edit nested measure folders (i.e. measure folders that are children of other measure folders)
- Create and edit Maintenance Scripts
- Manage multilingual support for OLAP Metadata objects
- Use the OBIEE plugin or the Data Validation plugin
New OLAP-related features in SQL Developer 4.0 include:

- New Calculation Expression editor for calculated measures. This allows the user to nest different types of calculated measures easily. For instance, a user can now create a Moving Total of a Prior Period as one calculated measure. In AWM, the user would have had to create a Prior Period measure first and then create a Moving Total calculated measure which referred to it. Also, the new Calculation Expression editor displays hypertext helper templates when the user selects the OLAP API syntax in the editor.
- Support for OLAP DML command execution in the SQL Worksheet. Simply prefix OLAP DML commands by a '~' and then select the execute button to execute them on the SQL Worksheet. The output of the command will appear in the DBMS Output Window if it is opened, or the Script Output Window if the user has executed 'set serveroutput on' before executing the DML command.
- Improved OLAP DML Program Editor integrated within the SQL Developer framework.
- New diagnostic reports in the SQL Developer Report navigator.
- Ability to create a fact view with a measure dimension (i.e. "pivot cube"). This functionality is accessible from the SQL Developer Tools-OLAP menu option.
- Cube scripts have been renamed to Build Specifications and are now accessible within the Create/Edit Cube dialog. The Build Specifications editor there is similar in functionality to the Calculation Expression editor.
I’m trying to get up to speed on 12c, especially the multitenant features. So, I decided to read the sections of the manuals that relate to using multitenant. Here is the manual I was looking at: the 12c Database Administrator’s Guide.
It isn’t great reading. It is kind of slow-paced and really more of a reference, but since it is a large new feature I wanted to go over the practical details of using it. Some of the sections had a command-line SQL*Plus version and a web-based Cloud Control version, so I skipped the cloud version. I’m more of a command line and v$ view kind of DBA.
Here are the chapter numbers and titles of the sections I read:
36 Overview of Managing a Multitenant Environment
37 Creating and Configuring a CDB
38 Creating and Removing PDBs with SQL*Plus
40 Administering a CDB with SQL*Plus
42 Administering PDBs with SQL*Plus
43 Viewing Information About CDBs and PDBs with SQL*Plus
44 Using Oracle Resource Manager for PDBs with SQL*Plus
46 Using Oracle Scheduler with a CDB
I haven’t really used what I’ve learned yet but I feel like I have a good overview and know where to go to get more information.
I’m also working on reading through the 12c Concepts manual on my Kindle, but I haven’t gotten to the multitenant part yet. Once I’m done with that I hope to get serious about studying for the 12c OCP upgrade exam. It looks like you can sign up for the tests now, but I don’t have any practice exam software or a book, and I usually get one of these before I take a test. I see one book that is supposed to be available in June, so I may have to wait until then; I have plenty to do between now and then anyway, so I probably won’t be ready to take the test before the book is available, and that should work out.
In any case, certification or no, it was helpful to read the DB admin guide multitenant sections to get a feel for how it works. Chapter 44 was probably the most interesting because it talked about using Resource Manager to divide resource usage between the pluggable databases that are sharing the same instance.
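As a rough sketch of the kind of thing that chapter covers (the plan and PDB names here are made up for illustration), a CDB resource plan can give one pluggable database a larger share of the instance than another:

begin
  dbms_resource_manager.create_pending_area;

  -- a CDB-level plan with two directives: 3 shares for the OLTP PDB, 1 for reporting
  dbms_resource_manager.create_cdb_plan(
    plan    => 'daytime_cdb_plan',
    comment => 'Favour the OLTP PDB during the day');

  dbms_resource_manager.create_cdb_plan_directive(
    plan               => 'daytime_cdb_plan',
    pluggable_database => 'oltp_pdb',
    shares             => 3,
    utilization_limit  => 100);

  dbms_resource_manager.create_cdb_plan_directive(
    plan               => 'daytime_cdb_plan',
    pluggable_database => 'rpt_pdb',
    shares             => 1,
    utilization_limit  => 50);

  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/

The plan is then made active by setting the resource_manager_plan parameter while connected to the root container.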
Stewart’s too modest to mention it on the blog himself, but I just wanted to congratulate Stewart Bryson on being awarded Oracle ACE Director status by the Oracle OTN ACE program. Stewart was given the Oracle ACE award a few years ago to recognise past work he’d done for the Oracle BI, DW and ETL community, but this higher ACE Director award recognises the ongoing work he’s since been doing to share his knowledge and experience with the Oracle community. For a taste of what he’s been doing recently, check out this recent article on OTN on migrating OWB to ODI, where he goes through ODI12c installation, and then walks through several scenarios for interoperating with, and migrating to, Oracle Data Integrator 12c.
I also wanted to take this opportunity to share the exciting news that Edelweiss Kammermann, who many of you will know from her work with the Latin American Oracle User Group community, joined us back at the start of this year to work with Robin Moffatt in our training development area.
We’ve known Edel for many years, and she came over to Brighton last year to present on BI and SOA/BPM at our 2013 BI Forum. Edel is initially working with Robin on developing our new ODI12c course, followed later on in 2014 with our new BI Apps 11g course, and she’ll be delivering training in Europe, USA and Latin America later on in the year. We’re incredibly pleased and proud that Edel has joined us, and being an Oracle ACE herself, we’ve now got two Oracle ACE Directors, and two Oracle ACEs, in the Rittman Mead team.
I’m sure you’ll all join me in congratulating Stewart, and welcoming Edel to the Rittman Mead team!
We have just published our third episode in the first ETV series—an interview with Stanford University’s Amy Collier about MOOCs. As is often the case with these episodes, there are lots of different angles we could have taken. This episode is really the answer to a colleague who asks, “What is this thing that Tom Friedman has been talking about?” Amy does a terrific job of delivering a hype (and anti-hype) antidote.
We’ll have broader coverage in the full series about MOOCs we’ll be releasing after this current one, developed from the interviews that Phil and I conducted at the MOOC Research Initiative Conference.