
Feed aggregator

June 23rd: Boxfusion Consulting Sales Cloud Reference Forum

Linda Fishman Hoyle - Thu, 2015-06-11 17:14

Join us for another Oracle Customer Reference Forum on June 23, 2015, to hear Andy Stevens, Delivery Director of Boxfusion Consulting, talk about why they chose to implement Oracle Sales Cloud.

Andy will share Boxfusion’s selection process for Oracle Sales Cloud, its implementation approach, and benefits derived from the application. He will also talk about Boxfusion’s experiences helping customers implement Oracle Sales Cloud.

Register now to attend the live Forum on Tuesday, June 23, 2015, at 8:00 A.M. Pacific Time/11:00 A.M. Eastern Time and learn more from Boxfusion Consulting directly. 

Listen to all Customer Forum replays available on demand here, including Acorn Paper, APEX IT, Batesville, Hardware.com, iBasis, and more!

Partner Webcast – Oracle Big Data & Business Analytics: NEOS SNA Solution for Telcos

Behind the hype of big data there's a simple story: for decades, companies have been making business decisions based on transactional data stored in relational databases. Beyond that critical data,...


Oracle Priority Support Infogram for 11-JUN-2015

Oracle Infogram - Thu, 2015-06-11 15:26

RDBMS
Oracle Pipes (Part 1), from Andrew’s Oracle Blog.
Recent News about Pluggable Databases - Oracle Multitenant, from Upgrade your Database - NOW!
list database monitoring users, from Laurent Schneider.
Oracle SQL and PL/SQL
Why not use WM_CONCAT function in Oracle?, from Lalit Kumar B.
Everything you ever wanted to know about the PL/SQL Debugger, from that JEFF SMITH.
Exadata
New White Paper: Implementing ILM with DB12c, from Exadata Partner Community – EMEA.
ZFS
Zero Copy I/O Aggregation, from Bizarre ! Vous avez dit Bizarre?
MAF
How to Display Base64 Encoded Image in MAF, from the Oracle Partner Hub: ISV Migration Center Team.
Java
Java EE 8 Roadmap Update, from The Java Source.
EPM
Remote Diagnostic Agent - RDA 8.08 released, from Business Analytics - Proactive Support.
From the same source: Patch Set Update: Hyperion Disclosure Management 11.1.2.3.590
and Patch Set Update: Hyperion Financial Reporting 11.1.2.4.003
BI
Practical Tips for Oracle BI Applications 11g Implementations, from RittmanMead.
Supply Chain
New Video on Oracle In-Memory Cost Management Solutions, from Oracle Supply Chain Management.
Primavera
Upgrading Older Analytics\Data Warehouse Versions, from the Oracle Primavera Analytics Blog
EBS
From the Oracle E-Business Suite Support blog:
Webcast: eAM GIS Integration Options
New version 200.5 of the 12.1.3 Procurement Approval Analyzer Just Released!
Take a look at the new release of Oracle E-Business Suite Plug-in 12.1.0.4
Database Upgrade Reminder From 12.1.0.1 to 12.1.0.2 by July 2015
Webcast: How to Understand and Resolve Mismatches Between APCC Reports and ASCP Data
Webcast: Demantra Certification Part II - Preparing for Retake
Important Update on the March 2015 12.1.3 Procurement Rollup Patch
Move Order On Down The Road!
Have you applied the March 2015 12.1.3 Procurement RUP? Or have you seen a problem with the Action History of Requisitions being Deleted?
From the Oracle E-Business Suite Technology blog:
Does the Leap Second Affect the E-Business Suite?
…and Finally
Yes, there is in fact a Wikipedia entirely in Scottish.

Parallel DML in 12c

Yann Neuhaus - Thu, 2015-06-11 12:48

Following a question from Randolf Geist (who can imagine that there is something about parallel query that Randolf doesn't know?), I went back to some notes I took when 12c came out and tested them again on 12.1.0.2. It's about the ability to enable parallel DML at the query level.

Test case

In 12.1.0.2 I create two tables. DEMO1 has 100000 rows and is about 80MB. DEMO2 is empty. DEMO1 is parallel.

SQL> create table DEMO1 parallel 2 as
  2  select rownum id , ora_hash(rownum,10) a from xmltable('1 to 1000000');

Table created.

SQL> select table_name,num_rows,blocks from user_tables where table_name='DEMO';

TABLE_NAME   NUM_ROWS     BLOCKS
---------- ---------- ----------
DEMO           100000      10143

SQL>
SQL> create table DEMO2 as select * from DEMO1 where null is not null;

Table created.

SQL>
SQL> alter session set statistics_level=all;

Session altered.

insert ... select ...

Here is a simple insert as select:

SQL> insert into DEMO2 select * from DEMO1;

1000000 rows created.

SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
SQL_ID  bx27xdnkr7dvw, child number 0
-------------------------------------
insert into DEMO2 select * from DEMO1

Plan hash value: 4271246053

-----------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |          |      1 |        |      0 |00:00:17.40 |   24311 |
|   1 |  LOAD TABLE CONVENTIONAL | DEMO2    |      1 |        |      0 |00:00:17.40 |   24311 |
|   2 |   PX COORDINATOR         |          |      1 |        |   1000K|00:00:04.49 |       5 |
|   3 |    PX SEND QC (RANDOM)   | :TQ10000 |      0 |   1000K|      0 |00:00:00.01 |       0 |
|   4 |     PX BLOCK ITERATOR    |          |      0 |   1000K|      0 |00:00:00.01 |       0 |
|*  5 |      TABLE ACCESS FULL   | DEMO1    |      0 |   1000K|      0 |00:00:00.01 |       0 |
-----------------------------------------------------------------------------------------------

Note                                                                                                                                
-----
   - Degree of Parallelism is 2 because of table property
   - PDML is disabled in current session

The select part is done in parallel (it's below the coordinator) but the insert part (LOAD TABLE) is above the coordinator, which means that it is done in serial by the coordinator. In 12.1.0.2 you have no doubt: dbms_xplan has a note to tell you that PDML was not used and it gives the reason: it's not enabled in the session.

When you have the Tuning Pack, parallel queries are monitored by default, so we can get the SQL Monitor plan. You can get it as text, HTML or Flash, but I'll use Lighty here as I find it awesome for that as well:

PDML1.png

Look at the bottom right which details the highlighted plan line: 100% of the load has been done by my session process.
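The same information can also be pulled as a text report with DBMS_SQLTUNE if you prefer - a minimal sketch, using the sql_id of the cursor shown above:

SQL> select dbms_sqltune.report_sql_monitor(sql_id=>'bx27xdnkr7dvw',type=>'TEXT') from dual;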

Enable parallel dml

So we need to enable parallel DML for our session. Do you know why? Because inserting in parallel requires locking the table (or partition) it is inserted into, so the optimizer cannot decide that without our permission. So let's enable parallel DML:

SQL> alter session enable parallel dml;
ERROR:
ORA-12841: Cannot alter the session parallel DML state within a transaction

I cannot do that here because I have a transaction in progress. But in 12c you can also enable parallel DML at the query level, with the ENABLE_PARALLEL_DML hint. I saw it when 12c came out, but it was undocumented. However, I discovered today that it is documented in the 'Enable Parallel DML Mode' section of the Database VLDB and Partitioning Guide.

With the DISABLE_PARALLEL_DML hint you can disable PDML at query level when you have enabled it in the session. And with the ENABLE_PARALLEL_DML hint you can enable PDML for one query even when it's not enabled in the session. And you can do that even if you have a transaction in progress:

SQL> insert /*+ enable_parallel_dml */ into DEMO2 select * from DEMO1;

1000000 rows created.

SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
SQL_ID  707bk8y125hp4, child number 0
-------------------------------------
insert /*+ enable_parallel_dml */ into DEMO2 select * from DEMO1

Plan hash value: 4271246053

-----------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |          |      1 |        |      0 |00:00:18.76 |   22343 |
|   1 |  LOAD TABLE CONVENTIONAL | DEMO2    |      1 |        |      0 |00:00:18.76 |   22343 |
|   2 |   PX COORDINATOR         |          |      1 |        |   1000K|00:00:04.22 |       5 |
|   3 |    PX SEND QC (RANDOM)   | :TQ10000 |      0 |   1000K|      0 |00:00:00.01 |       0 |
|   4 |     PX BLOCK ITERATOR    |          |      0 |   1000K|      0 |00:00:00.01 |       0 |
|*  5 |      TABLE ACCESS FULL   | DEMO1    |      0 |   1000K|      0 |00:00:00.01 |       0 |
-----------------------------------------------------------------------------------------------

Note
-----
   - Degree of Parallelism is 2 because of table property
   - PDML disabled because object is not decorated with parallel clause
   - Direct Load disabled because no append hint given and not executing in parallel

OK. I've enabled PDML, but PDML occurs only when the statement runs in parallel. Here the table has no parallel degree and there is no PARALLEL hint. Once again dbms_xplan gives us the reason. And because it's not PDML and there is no APPEND hint, it's not loaded in direct-path.

Here is the SQL Monitoring plan. Note that it is the same as the previous one except that the cost is different. I don't know why yet. If you have any idea, please comment.

PDML2.png
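To make this insert eligible for PDML, you could either decorate the target table with a parallel degree or add a PARALLEL hint to the statement (the next section uses the hint). A quick sketch of the first option:

SQL> alter table DEMO2 parallel 2;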

Enable parallel DML while in a transaction

I disable PDML and start a transaction:

SQL> commit;

Commit complete.

SQL> alter session disable parallel dml;

Session altered.

SQL> delete from DEMO1 where rownum <= 1000;

SQL> select status,used_urec from v$transaction where ses_addr=(select saddr from v$session where sid=sys_context('userenv','sid'));

STATUS            USED_UREC
---------------- ----------
ACTIVE                 1000

And while I'm within that transaction, let's do the parallel insert enabled by the hint:

SQL> insert /*+ parallel enable_parallel_dml */ into DEMO2 select * from DEMO1;

999000 rows created.

SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------
SQL_ID  2b8q4k902pbdx, child number 1
-------------------------------------
insert /*+ parallel enable_parallel_dml */ into DEMO2 select * from DEMO1

Plan hash value: 86785878

-------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name | Starts | A-Rows | Buffers | OMem |1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                   |      |      1 |      4 |     136 |      |     |          |
|   1 |  PX COORDINATOR                    |      |      1 |      4 |     136 |      |     |          |
|   2 |   PX SEND QC (RANDOM)              | :TQ10|      0 |      0 |       0 |      |     |          |
|   3 |    LOAD AS SELECT (HYBRID TSM/HWMB)|      |      0 |      0 |       0 |   33M|  33M| 2068K (0)|
|   4 |     OPTIMIZER STATISTICS GATHERING |      |      0 |      0 |       0 |      |     |          |
|   5 |      PX BLOCK ITERATOR             |      |      0 |      0 |       0 |      |     |          |
|*  6 |       TABLE ACCESS FULL            | DEMO1|      0 |      0 |       0 |      |     |          |
-------------------------------------------------------------------------------------------------------

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2

Here PDML occurred. We know that because the load operator is under the coordinator (note to self: HYBRID TSM/HWMB is something to investigate - once again, comments welcome). I've displayed the plan with 'allstats last', which shows only the coordinator activity. SQL Monitor can show it all:

PDML5.png

Conclusion

Yes, you can enable PDML at the query level in 12c and it is documented. And you can do it even within a transaction, which is a restriction only for 'alter session enable parallel dml' but not for the hint.

The PeopleTools CAB is Looking to Expand

PeopleSoft Technology Blog - Thu, 2015-06-11 12:31
The PeopleTools Customer Advisory Board is made up of PeopleSoft technology strategists and a small group of customers who are leaders, both in their organizations and in the PeopleSoft customer community. Our members are the decision makers who set directions and priorities for their PeopleSoft applications.

The PeopleSoft Technology team is looking to expand our CAB with a few individuals who are familiar with the business side of managing PeopleSoft applications. No propeller on top of your hat? No worries! We are NOT looking to talk about the next widget for Process Scheduler. We ARE looking for senior management responsible for their PeopleSoft implementations who understand the customer value provided with the latest delivered technology. We look to our CAB to validate our direction and messaging. We depend on them to represent the larger customer community, not just their own organization, when providing feedback to us. The ideal candidates are:

1) Knowledgeable in PeopleTools and technology in general

2) Adopting current functionality in their PeopleSoft Applications

3) Leaders in the user community

4) Willing and able to participate in face to face CAB meetings (1-2 times per year) and conference calls as needed

5) Sharing what they have done with the user community through conference presentations or web casts

Does your organization have a history of rapidly adopting new PeopleSoft technology? Are you working on your plan to implement Fluid UI and reap the rewards? Has PTF become a key part of your testing strategy? Have Engineered Systems changed what you do with PeopleSoft? Are you rolling out enhancements via Selective Adoption? Do you consistently evaluate new functionality and look for ways for your business to benefit from it? If you are one of the people we’re looking for and would like to be considered for the PeopleTools Customer Advisory Board, please contact mark.hoernemann@oracle.com to receive an application where you can share the criteria that set you apart from other customers.

SQL Server 2016 : availability groups and automatic failover enhancements

Yann Neuhaus - Thu, 2015-06-11 11:07

Let's continue with this study of the new availability group enhancements. Other studies are available here:

This time we'll talk about the possibility of enrolling a third replica for automatic failover. It implies, of course, configuring synchronous replication between the two pairs of replicas, and this comes at the cost of degrading overall performance. But it seems that we can also expect some improvements in this area, so maybe that's another future study.

First of all, my feeling is that this enhancement will be very interesting in terms of availability, but unfortunately introducing a third replica will not be affordable for some customers in terms of budget. The final packaging is surely not yet defined, and drawing conclusions now would mean basing them on inaccurate information. So let's focus only on the technical aspects of this feature for the moment:

I have included a third replica (SQL163) to my existing availability group 2016Grp:

 

blog_51_-_1-_aag_config
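For reference, here is a hedged sketch of the T-SQL that adding such a synchronous, automatic-failover replica could look like (the endpoint URL is illustrative):

ALTER AVAILABILITY GROUP [2016Grp]
ADD REPLICA ON 'SQL163' WITH (
    ENDPOINT_URL = 'TCP://SQL163:5022',
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    FAILOVER_MODE = AUTOMATIC
);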

 

In parallel, my cluster quorum is configured as follows:

 

blog_51_-_2-_clust_config

 

...

 

blog_51_-_3-_quorum_config

 

...

 

blog_51_-_4-_quorum_config

 

Basically, this is a Windows failover cluster, CLUST-2021, on a single subnet that includes three nodes (SQL161, SQL162 and SQL163) and is configured to use a file share witness as well as the dynamic quorum capability.

I simulated a lot of failure tests in my lab environment (shutting down a replica, turning off a replica, losing a database file, disabling the network cards and so on) and the automatic failover across the two pairs of replicas was successful in each case. However, this raised the following question: which secondary replica will be chosen by the system? I didn't see a configuration setting that controls the "failover priority order list" and I believe it could be a good added value here. After performing other tests and after discussing with some other MVPs like Christophe Laporte (@Conseilit), I noticed that the failover order seems to be related to the order of the preferred owners of the related availability group cluster role. Moreover, according to this very interesting article from Vijay Rodrigues, this order is set and changed dynamically by SQL Server itself, so changing the order directly from the cluster itself seems to be a very bad idea. Next, I decided to configure the order at availability group creation time, while adding the replicas, and it seems to be the right solution.

To illustrate this point, here is the initial configuration I wanted to achieve:

 

  • SQL161 primary replica
  • SQL163 secondary replica (first failover partner)
  • SQL162 secondary replica (second failover partner)

After adding the replicas in the correct order from the wizard, here is the inherited order I get from the related cluster role:

 

blog_51_-_7-_preferred_owner_list

 

This order seems to be preserved according to the current context of the availability group.

The same test with a different order like:

  • SQL163 primary replica
  • SQL162 secondary replica (first failover partner)
  • SQL161 secondary replica (second failover partner)

… gives us a different result, and once again this order is preserved regardless of the context changes:

 

blog_51_-_8-_preferred_owner_list

 

This idea of controlling the failover replica order comes from a specific scenario where you may have two secondary replicas across multiple sites. You may decide to fail over first to the secondary replica in the same datacenter and then to the one located on the remote site.

 

blog_51_-_7-_preferred_owner_list_

 

 

But wait a minute… do you see the weakness in the above architecture? Let's deal with the node weights (in red). You may have noticed that you will have to introduce another replica in order to avoid losing the quorum in case of a datacenter 1 failure. Indeed, you won't get the majority with the current architecture if nothing remains but the file share witness and the replica in datacenter 2. So the new architecture may be the following:

 

blog_51_-_6-_archi

 

In this case we may or may not decide to use this additional node as a SQL Server replica in datacenter 2, but it is at least mandatory in the global cluster architecture to provide automatic failover capability for the availability group layer in case of a datacenter 1 failure. This is why I said earlier that introducing this new availability group capability may not be affordable for all customers, assuming that this additional replica must be licensed.

See you

SQL Server 2016: native support for JSON

Yann Neuhaus - Thu, 2015-06-11 10:00

A lot of discussion and, most importantly, a feature request on the Microsoft Connect site (here, with more than 1000 votes) are at the origin of this new feature in SQL Server 2016.

 

A lot of NoSQL databases already have this kind of function, and PostgreSQL for example has the json_extract_path_text functionality; you can always ask my colleague Daniel Westermann, one of our PostgreSQL experts, about this subject.
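As a first taste of what the announced T-SQL support looks like, here is a minimal sketch based on the SQL Server 2016 CTP syntax (the JSON content is illustrative):

-- turn a relational result set into a JSON document
SELECT name, database_id
FROM sys.databases
FOR JSON AUTO;

-- extract a scalar value from a JSON string
SELECT JSON_VALUE(N'{"town":"Delemont","country":"CH"}', '$.town');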

using dbms_server_alert in combination with a custom monitoring solution

Yann Neuhaus - Thu, 2015-06-11 06:35
Lots of companies do not use Grid Control or Cloud Control for monitoring their Oracle databases, for various reasons, and rather use open source tools like Nagios. And a lot of those either implemented custom script frameworks or rely on plug-ins available for the monitoring solution of their choice. This post shows how you can let Oracle check a lot of its own metrics, so that you only need one script for alerting on various metrics.
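As a minimal sketch of the idea (the thresholds are illustrative): let the server watch a metric itself with DBMS_SERVER_ALERT, and have the single alerting script simply read DBA_OUTSTANDING_ALERTS:

begin
  -- warn at 85% and raise a critical alert at 95% tablespace usage
  dbms_server_alert.set_threshold(
    metrics_id              => dbms_server_alert.tablespace_pct_full,
    warning_operator        => dbms_server_alert.operator_ge,
    warning_value           => '85',
    critical_operator       => dbms_server_alert.operator_ge,
    critical_value          => '95',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => null,
    object_type             => dbms_server_alert.object_type_tablespace,
    object_name             => null);
end;
/

-- the alerting script then only needs something like:
select object_name, reason from dba_outstanding_alerts;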

Site maintenance and how to manage changing URLs

Tim Hall - Thu, 2015-06-11 03:37

After my recent rants about Oracle changing URLs and breaking stuff, I’ve actually done some changes myself. :)

From time to time change is forced on internet content producers. This might be because of platform changes, or changes in the way search engines behave. The important thing is how you handle that change.

Followers of the blog will know I recently made my website responsive. That happened in part because Google recently announced they would downgrade the rankings of sites that weren’t “mobile friendly” and “responsive”. The search ranking changes were only meant to affect mobile searches. What they didn’t say, but many people including myself believe, is that these rankings actually affect normal desktop-based searches as well. When this Google announcement was made, I noticed a drop in my hit rate. Once I changed the site to be responsive, the hit rate went up again somewhat. When I recently corrected about 100 of the remaining non-responsive articles, the hit rate went up again. It could be coincidence, but it certainly seems there was a bleed-over of this ranking change into the desktop side of things, which represents over 95% of my traffic. Those changes affected content, but not the URLs to the content.

Since I’m revisiting almost every article to fix broken links to Oracle docs, I thought I would take the opportunity to do some additional site maintenance, specifically in the following two areas.

  • HTTPS : About 9 months ago I got a certificate for the website to allow it to be accessed using HTTPS. This was also influenced by a Google decision that they would improve the ranking of content that was available over HTTPS, as well as HTTP. It was part of their “HTTPS Everywhere” campaign. Even though the site could handle HTTPS, I did not make it the default. As of a couple of days ago, you may have noticed all pages on oracle-base.com are now delivered over HTTPS. Unfortunately, this represents a URL change as far as the internet is concerned, so it means lots of broken links, unless you handle it properly. More about that later.
  • Removal of “.php” extension : You will notice many blogs and websites don’t display a file extension on pages, or display a generic “.htm” or “.html” extension. It’s pretty easy to do this using URL rewrites in Apache or a “.htaccess” file. For a while, the site could be accessed with or without the “.php” extension. Now it is removed by default. The nice thing about this is you can change the underlying technology at any time, without having to support an inconsistent file extension. Once again, this represents a URL change.

So how do you manage this change without pissing off “the internet”?

The answer is rewrites and redirects done in real web pages, Apache config or “.htaccess” files. Essentially, you are supporting the old URL and redirecting the browser to the new URL, using a 301 redirect, so all search engines know the content has moved location and can be re-indexed in that new location. Over time, all the links from Google will go directly to the new URL.
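For example, the HTTPS switch can be handled with a couple of mod_rewrite lines in a “.htaccess” file - a sketch, not necessarily the exact rules used on this site:

RewriteEngine On
# permanently (301) redirect any plain HTTP request to its HTTPS equivalent
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]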

So that means you can remove the redirects after a while right? NO! People will have links from their website to the old URLs forever. People will have bookmarks in their browsers forever. If you are going to change a URL, the old URL must be maintained forever.

Over the years I’ve made lots of structural changes to the site.

  • When my website started it was written in Active Server Pages, using a “.asp” extension.
  • After a while I switched to PHP, using a “.php” extension.
  • I used to name pages using initcap. A couple of years ago I switched to lower case and “-” separated names.
  • About 9 months ago I removed the “www.” because it seemed pointless in this day and age.
  • I’ve just switched to HTTPS.
  • I’ve just removed the “.php” extension.

If we look at a really old article, probably about 15 years old, we will see the history of these changes in the following URLs.

So all those structural changes over the last 15 years should have resulted in zero broken links, zero pissed off content producers who link to my content and zero uninformed search engines.

Now I’m not perfect, so if anyone finds something that is broken, I will fix it, assuming it’s not your bad typing or copy/pasting. :)

Cheers

Tim…

PS. Any structural changes, regardless of how well you do your 301 redirects, can result in lower search rankings, so it should not be done on a whim if you really care about hitting that top spot on Google. This is my hobby, so I will do whatever I want. :)


How To Record ADF Client Side Request Performance Time

Andrejus Baranovski - Thu, 2015-06-11 00:13
I had a blog post where I described how to monitor client side request performance time with the Ajax tag (works in ADF 12c) - Monitoring PPR Request Time on ADF UI Client Side. There is an effective way to propagate recorded statistics back to the server for history logging using WebSockets - WebSocket Integration with ADF for PPR Request Monitoring. This requires WebSockets integration into the project; if the project doesn't allow this, there is another way. We can leverage an Ajax call to a Java servlet and pass the logged statistics data through a parameter. I will describe how this can be achieved.

The sample application - ADFAltaApp_v6.zip - implements a standard servlet. In the doGet method, I'm accessing the parameter and parsing the received statistics data. From here we could log the data to the DB, so it could be analysed further. Data processing should be executed in a separate thread, so as not to block servlet execution and delay the response back to the client. Java in ADF 12c supports asynchronous servlets; however, this is not the case with ADF 11g versions (unless you change the JVM to 1.7). Parameter processing in the servlet:


On the client side, the JavaScript code contains a method to acquire an Ajax XMLHttpRequest object. Through this object we can invoke the servlet directly from JavaScript:


In the monitor function (invoked automatically on the start and end events for the request) we invoke the servlet and pass the request performance data to be logged. I'm executing a synchronous call on purpose: asynchronous calls in JavaScript can be lost, and if the user executes requests quickly, a portion of the data would be skipped. The synchronous call is made at the end of the request and is usually pretty fast (make sure to log data in a separate servlet thread, so as not to block execution):


This is how it works. Press the Save button and you will see information about the request processing time embedded at the top of the page (this is calculated and updated by the JavaScript function from above):


The servlet on the server side prints the received data to be logged - button ID, request processing time, user info:


Try to invoke another request - navigate back to the previous fragment. The time will be logged and displayed in the top part of the page, and the same is communicated to the servlet for logging:


Time is being logged on the server side:


I hope the described idea will be useful for your own project, where you could measure ADF UI client performance (from button click until the action is done). The Ajax tag to catch request start and stop events works only in ADF 12c, but you could do a similar thing in ADF 11g with custom JavaScript listeners.

draft release notes for PostgreSQL 9.5 online

Yann Neuhaus - Thu, 2015-06-11 00:02
Bruce Momjian, one of the PostgreSQL core team members, has just compiled the first draft version of the release notes for the upcoming PostgreSQL 9.5.

Some of the goodies that will show up: If you want to test any of the new features, check this post.

Oracle Product Support Advisor Webcasts June 2015

Chris Warticki - Wed, 2015-06-10 16:42

This Month:


Oracle Product Support Advisor Webcasts for June

Dear Valued Support Customer,
We are pleased to invite you to our Advisor Webcast series for June 2015. Subject matter experts prepare these presentations and deliver them through WebEx. Topics include information about Oracle support services and products.

 To learn more about the program or to access archived recordings, please follow the links.

There are currently two types of Advisor Webcasts;

Many of the Oracle Product Support teams maintain support blogs. You may access the Support Blogs directly, or via the Product Support Blog Index. Watch this short video to see how to subscribe to a support blog of interest.

Sincerely,
Oracle Support

June Featured Webcasts by Product Area:

CRM: Preparing Certificates for SISNAPI Encryption in Siebel - June 18
Database: How to Resolve Patch Conflicts with MOS Conflict Checker? - June 23
E-Business Suite: How to Understand and Resolve Mismatches Between APCC Reports and ASCP Data - June 11
E-Business Suite: eAM GIS Integration Options - June 18
E-Business Suite: Outside Processing (OSP) for OPM in 12.2.4 - June 23
E-Business Suite: Work In Process Scrap - Costing Overview - June 24
Eng System: Advanced Customer Support (ACS) - Solution Support Center for Engineered Systems - June 30
JD Edwards: JD Edwards World: Payroll & HR General Outline & Navigation - JDE World Payroll & HR Basics Part 1/14 - June 18
JD Edwards: JD Edwards EnterpriseOne: P41203 Lot Management Workbench (Trace/Track Inquiry) Functionality - June 23
JD Edwards: JD Edwards World: Costed Routing (P30208) in Manufacturing - June 24
JD Edwards: JD Edwards EnterpriseOne: Troubleshooting Tips for AS400 related issues - June 25
Middleware: WebCenter Content - All about Oracle Text Search (OTS) - June 24
Oracle Business Intelligence: OBIEE 11g Security Overview and Troubleshooting (Japanese Only) - June 24
PeopleSoft Enterprise: Using the Affordable Care Act Component Interface - June 24

Sweet Dreams at the EyeO Festival

Oracle AppsLab - Wed, 2015-06-10 15:17

 

cherryjohn-at-eyeO


I often tell people that you need both a left brain and a right brain to be a software designer: a left brain to analyze and understand, a right brain to dream and wonder. The EyeO Festival, which Thao and I just attended in Minneapolis, was food for our right brains.

EyeO is about the intersection of art and code: generative artists (who use data and algorithms to produce works of art), art installations (which often require sophisticated coding), and those who see coding itself as an art form. It is not so much about knowledge transfer as it is about building a community, meeting world-class data artists and hearing their back stories.

I attended fourteen talks in all and saw many wonders.

The JPL crew controlling the Mars rover use Microsoft HoloLens goggles to create an augmented reality, allowing scientists in remote locations to stand together on the surface of the planet. Each person sees their own desk, chair and monitor sitting in a crater with the rover just a few feet away. As their eyes scan the area, little dots of light show where each person is looking; when they use their mouse to teleport to a nearby ridge, others see their avatars walk to the new location. They can even walk around the rover and point to where it should go next.

The design team at nervo.us (she’s a biologist, he’s a physicist) is interested in how complex forms arise in nature from cells growing at different rates. Using their own custom software, they create spectacular simulations and turn these into 3-D printed art objects. One of their most stunning creations is a kinematics dress, made supple using thousands of tiny interlocking plastic hinges perfectly fitted to the laser-scanned image of a customer’s body. With scary-hard math, they generalize a moving body from a single scan, compute not just how the dress will look but how it will hang and twirl, and even prefold it so that it will fit in today’s small 3-D printers.

Perhaps the most jaw-dropping demonstration was a sneak preview of “Connected Worlds,” an installation that will be opening soon at the New York Hall of Science. Three years in the making, it creates a Star Trek style holodeck with a 50-foot waterfall and six distinct biomes populated by whimsical plants and animals. Children move physical logs to redirect virtual water into the various biomes; if they make the right decisions wonderful trees will grow and attract ever more magical animals. The team at Design I/O described technical challenges and lessons learned, some of which might be applicable to future AppsLab motion-tracking projects.

One of the topics I found most stimulating was new and improved coding languages. I have used Processing, a language developed specifically for artists, to create some of the interactive visualizations we show in our cloud lab. It was a thrill to meet and talk with Processing’s co-inventors and hear their plans for new evolutions of the language, including P5.js, Processing.py, and the upcoming Processing 3.0.

But the most interesting talk about languages was by a guy named Ramsey Nassar. Ramsey is an uber-coder who creates new computer languages for fun. He argues that most coders are stuck using alienating, frustrating, brittle languages created decades ago for a world that no longer exists. He wants to create languages that facilitate “post-human creativity,” new forms of creativity not possible before computers. Some of his languages, like god.js (which makes code look like biblical text) and Emojinal (made completely out of emoji), are just for fun. Others, like Alb (the first entirely Arabic coding language), Arcadia (for Unity 3D VR game development), Zajal (inspired by Processing), and Rejoice (a stack language based on Joy), are practical and mind-expanding. I plan to talk more about why coding languages should matter to designers in a future blog post.

As with any conference there were countless small discoveries, too many to report in full. Here are just a few…

Amanda Cox of the New York Times talked about making data more relatable by using geocoding to default the initial view of a large geographical dataset to the user’s own locale. Another interesting technique was having users guess what a plotted curve would look like by drawing it before showing the actual curve.

One clever flourish I noticed was the use of tiny single-value pie charts placed beneath each year in the X axis of a time-driven line chart to add an extra dimension of data about each year without distracting from the main point of the chart.

Sprint, the telephone company, started out as a railroad company that used their existing right of way to plant cell towers. Sprint stands for Southern Pacific Railroad Internal Networking Telephony.

Reza Ali is an amazing generative artist who turns data and music into images, animations, and tangible objects. One of his secret weapons is ofxPro. Check out his music videos for the band OK Go.

Into LED arrays and Raspberry Pi? Check out Fadecandy.

Timescape is a visualization-based storytelling platform, currently in beta. Looks interesting.

How long does it take the New York Times team to create world-class infographics? As long as you have plus one half hour.

What kind of music do coding language nerds listen to? The Lisps of course!

My right brain is full now. Time to dream!

Asteroid Explorer Launched

Oracle AppsLab - Wed, 2015-06-10 13:28

On Monday, we launched Asteroid Explorer at the Harvard-Smithsonian Center for Astrophysics.

Jeremy (@jrwashley), DJ, Kris Robison and I attended the launch event, which you can watch here. My part of the presentation begins at 36:36.

This event was the culmination of NASA’s Asteroid Hackathon event back in October. Remember that?

Here’s the event abstract:

Abstract: In October 2014, NASA’s Asteroid Hackathon event was hosted (with several other NASA partners) at the SETI institute in Mountain View, California. Team NOVA’s overall winning solution for this hackathon allowed users to explore relationships among the Minor Planet Center’s asteroid data. The elegant interface was not just eye-catching; the repeated learning that hackathon participants experienced in the “science fair” portion of judging greatly impressed the judges. More than once, people discovered relationships among asteroid data parameters that they didn’t previously know about. A perfect outcome for one of the primary goals: to increase public knowledge of asteroids. Dr José Luis Galache (Acting Deputy Director, Minor Planet Center) and DJ Ursal (Director, Product Management at Oracle) teamed up together through the Oracle Volunteering initiative to implement the winning entry from the Asteroid Hackathon on the Minor Planet Center website. On June 8th they will be launching the website as part of the Harvard-Smithsonian Center for Astrophysics’s Solar, Stellar and Planetary Sciences division seminar series. The team will be discussing this project as it relates to cooperation between the Minor Planet Center, NASA, Oracle Volunteering, and its goal to inform and involve the general public.

This volunteer effort is a great success, and the result was well received and appreciated by the astrophysicists attending the launch event.

IMAG6211

Jeremy Ashley, GVP Oracle Applications User Experience, speaking at the launch of Asteroid Explorer

The NASA Grand Challenge program executive Jason Kessler (@soughin) was at the White House, talking up the Asteroid Hackathon and this volunteer work there, before calling into the event via Skype.

IMAG6205

The event was live broadcast through the Minor Planet Center’s YouTube channel, and the audience at the Center was mostly astrophysicists.

On the roof-top of the Harvard-Smithsonian Center for Astrophysics, there are several telescopes, including the famous Harvard Great Refractor. But we liked this cute mini Astro Haven.

IMAG6201

A bit about Asteroid Explorer: the main part of the web tool uses Crossfilter, D3.js and Highcharts. I processed the asteroid data into proper slices and groups to feed into Crossfilter to render the interactive filter bar charts and tables. I also created a bubble chart to render series of property data for looking into the correlation of any pair of properties, which reacts dynamically to the filter bar charts' range sliders.

Screen Shot 2015-06-08 at 11.48.36 PM

Screen Shot 2015-06-08 at 11.49.58 PM


list database monitoring users

Laurent Schneider - Wed, 2015-06-10 10:34

I am quite familiar with the SYSMAN tables, but this one required some googling beyond the Oracle documentation.

The list of targets in your Oracle Enterprise Manager is in SYSMAN.MGMT_TARGETS. Each database target is monitored by a database user, typically DBSNMP.
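For instance, a quick look at the database targets (a minimal sketch):

select target_name, target_type
from sysman.mgmt_targets
where target_type = 'oracle_database';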

To retrieve the monitoring credentials, you somehow need to hijack your database; read this: Gökhan Atil

  1. Copy your encryption key to your repository database, on the OMS server:
    $ emctl config emkey -copy_to_repos
    Enter Enterprise Manager Root (SYSMAN) Password :
    

    Now anyone with SELECT ANY TABLE on your repository will see all passwords. You don’t want to do this, but unfortunately you have to, because even the username is encrypted.

  2. Decrypt the credentials for db monitoring:
    SELECT *
    FROM (
      SELECT target_name,
        sysman.em_crypto.decrypt (
          c.cred_attr_value, 
          c.cred_salt) cred,
        cred_attr_name attr
      FROM SYSMAN.EM_TARGET_CREDS c
      JOIN SYSMAN.mgmt_targets t USING (target_guid)
      JOIN sysman.EM_NC_CRED_COLUMNS c USING (cred_guid)
      WHERE c.target_type = 'oracle_database'
      AND c.set_name = 'DBCredsMonitoring' 
    ) PIVOT ( 
      MAX (cred)
      FOR (attr) IN (
        'DBUserName' AS USERNAME, 
        'DBRole' AS "ROLE")
      )
    

    
    TARGET_NAME USERNAME ROLE  
    ----------- -------- ------
    DB01        dbsnmp   NORMAL
    DB02        dbsnmp   NORMAL
    DB03        sys      SYSDBA
    

  3. Remove the security leak:
    $ emctl config emkey -remove_from_repos
    Enter Enterprise Manager Root (SYSMAN) Password :
    

Now the em_crypto won’t work any more


select 
  sysman.em_crypto.decrypt('0','0')
from dual
  *
Error at line 2
ORA-28239: no key provided
ORA-06512: at "SYS.DBMS_CRYPTO_FFI", line 67
ORA-06512: at "SYS.DBMS_CRYPTO", line 44
ORA-06512: at "SYSMAN.EM_CRYPTO", line 250
ORA-06512: at line 1

This information could be used to change the password dynamically across all databases.


emcli login \
  -username=sysman \
  -password=sysmanpw
emcli update_db_password \
  -target_name=DB01 \
  -user_name=dbsnmp \
  -change_at_target=yes \
  -old_password=oldpw \
  -new_password=newpw \
  -retype_new_password=newpw

APEX Connect Presentation and Download of the sample application

Dietmar Aust - Wed, 2015-06-10 10:30
Hi guys,

I have just finished my presentation on the smaller new features of Oracle Application Express 5.0 here at the APEX Connect conference in Düsseldorf ... it was a blast :).

You can download the slides and the sample application here:

Cheers, ~Dietmar. 

Can you have pending system statistics?

Yann Neuhaus - Wed, 2015-06-10 09:08

Your system statistics seem to be wrong and you want to gather or set more relevant ones. But you don't want to see all your application execution plans changing between nested loops and hash joins. For object statistics, we can gather statistics in pending mode, test them in a few sessions, and publish them when we are OK with them. But can you do the same for system statistics? It can be risky to try it, so I've done it for you in my lab.

Test case in 11g

 

SQL> select banner from v$version where rownum=1;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production

SQL> create table DEMO as
           select rownum id , ora_hash(rownum,10) a , ora_hash(rownum,10) b , lpad('x',650,'x') c 
           from xmltable('1 to 100000');

Table created.

Here are my system statistics:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2
is not null order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2719
                     SYSSTATS_MAIN    IOSEEKTIM          10
                     SYSSTATS_MAIN    IOTFRSPEED       4096
                     SYSSTATS_INFO    DSTART                06-10-2015 08:11
                     SYSSTATS_INFO    DSTOP                 06-10-2015 08:11
                     SYSSTATS_INFO    FLAGS               0
                     SYSSTATS_INFO    STATUS                COMPLETED

I check a full table scan cost:

SQL> set autotrace trace explain
SQL> select * from DEMO DEMO1;

Execution Plan
----------------------------------------------------------
Plan hash value: 4000794843

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 88550 |    30M|  2752   (1)| 00:00:34 |
|   1 |  TABLE ACCESS FULL| DEMO | 88550 |    30M|  2752   (1)| 00:00:34 |
--------------------------------------------------------------------------

No surprise here. I have 10000 blocks in my table. SREADTIM = IOSEEKTIM + db_block_size / IOTFRSPEED = 12 ms and MREADTIM = IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED = 26 ms. Then the cost, based on an MBRC of 8, is ( 26 * 10000 / 8 ) / 12 = 2700.
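A quick way to derive those values yourself from the noworkload statistics (a sketch, assuming an 8KB block size and an MBRC of 8):

select ioseektim + 8192/iotfrspeed   sreadtim_ms,
       ioseektim + 8192*8/iotfrspeed mreadtim_ms
from (select max(decode(pname,'IOSEEKTIM',pval1))  ioseektim,
             max(decode(pname,'IOTFRSPEED',pval1)) iotfrspeed
      from sys.aux_stats$
      where sname='SYSSTATS_MAIN');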

 

Pending stats in 11g

I set 'PUBLISH' to false in order to have pending statistics:

SQL> exec dbms_stats.SET_GLOBAL_PREFS('PUBLISH', 'FALSE') ;

PL/SQL procedure successfully completed.

Then I set some system statistics manually to simulate a fast storage:

17:14:38 SQL> exec dbms_stats.set_system_stats('IOSEEKTIM',1);

PL/SQL procedure successfully completed.

17:14:38 SQL> exec dbms_stats.set_system_stats('IOTFRSPEED','204800');

PL/SQL procedure successfully completed.

and I run the same explain plan:

Execution Plan
----------------------------------------------------------
Plan hash value: 4000794843

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 88550 |    30M|  1643   (2)| 00:00:02 |
|   1 |  TABLE ACCESS FULL| DEMO | 88550 |    30M|  1643   (2)| 00:00:02 |
--------------------------------------------------------------------------

The cost is better. I'm not using pending statistics, which means that the published stats have been changed - despite the PUBLISH global preference being set to FALSE:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2 i
s not null order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2719
                     SYSSTATS_MAIN    IOSEEKTIM           1
                     SYSSTATS_MAIN    IOTFRSPEED     204800
                     SYSSTATS_INFO    DSTART                06-10-2015 08:14
                     SYSSTATS_INFO    DSTOP                 06-10-2015 08:14
                     SYSSTATS_INFO    FLAGS               1
                     SYSSTATS_INFO    STATUS                COMPLETED

As you see, SYS.AUX_STATS$ shows my modified values (note that the date/time did not change, by the way). So be careful: when you set, gather or delete system statistics in 11g you don't have the pending/publish mechanism. It's the kind of change that may have a wide impact, changing all your execution plans.

 

With the values I've set, SREADTIM is near 1 ms and MREADTIM is about 1.3 ms, so the cost is ( 1.3 * 10000 / 8 ) / 1 = 1625, which is roughly what the CBO calculated on my new not-so-pending statistics.

12c

If you look at 12c you will see new procedures in dbms_stats which suggest that you can have pending system statistics:

SQL> select banner from v$version where rownum=1;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> select procedure_name from dba_procedures where object_name='DBMS_STATS' and procedure_name like '%PENDIN
G_SYSTEM_STATS';

PROCEDURE_NAME
--------------------------------------------
DELETE_PENDING_SYSTEM_STATS
EXPORT_PENDING_SYSTEM_STATS
PUBLISH_PENDING_SYSTEM_STATS

but be careful, they are not documented. Let's try it anyway. I start as I did above, with a demo table and default statistics:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2 is not nul
l order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2725
                     SYSSTATS_MAIN    IOSEEKTIM          10
                     SYSSTATS_MAIN    IOTFRSPEED       4096
                     SYSSTATS_INFO    DSTART                06-10-2015 17:25
                     SYSSTATS_INFO    DSTOP                 06-10-2015 17:25
                     SYSSTATS_INFO    FLAGS               0
                     SYSSTATS_INFO    STATUS                COMPLETED

SQL> set autotrace trace explain
SQL> select * from DEMO DEMO1;

Execution Plan
----------------------------------------------------------
Plan hash value: 4000794843

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 80500 |    28M|  2752   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DEMO | 80500 |    28M|  2752   (1)| 00:00:01 |
--------------------------------------------------------------------------

I set PUBLISH to false and set manual system stats:

SQL> exec dbms_stats.SET_GLOBAL_PREFS('PUBLISH', 'FALSE') ;

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_system_stats('IOSEEKTIM',1);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_system_stats('IOTFRSPEED','204800');

PL/SQL procedure successfully completed.

and I check the SYS.AUX_STATS$ table:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2 is not nul
l order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2725
                     SYSSTATS_MAIN    IOSEEKTIM          10
                     SYSSTATS_MAIN    IOTFRSPEED       4096
                     SYSSTATS_INFO    DSTART                06-10-2015 17:25
                     SYSSTATS_INFO    DSTOP                 06-10-2015 17:25
                     SYSSTATS_INFO    FLAGS               0
                     SYSSTATS_INFO    STATUS                COMPLETED

Good ! I still have the previous values here. The new stats have not been published.

 

The pending stats are stored in the history table, with a date in the future:

SQL> select savtime,sname,pname,pval1,pval2 from sys.wri$_optstat_aux_history where pval1 is not null or pval2
 is not null and savtime>sysdate-30/24/60/60 order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
01-dec-3000 01:00:00 SYSSTATS_MAIN    CPUSPEEDNW       2725
01-dec-3000 01:00:00 SYSSTATS_MAIN    IOSEEKTIM          10
01-dec-3000 01:00:00 SYSSTATS_MAIN    IOTFRSPEED     204800
01-dec-3000 01:00:00 SYSSTATS_INFO    DSTART                06-10-2015 17:29
01-dec-3000 01:00:00 SYSSTATS_INFO    DSTOP                 06-10-2015 17:29
01-dec-3000 01:00:00 SYSSTATS_INFO    FLAGS               1
01-dec-3000 01:00:00 SYSSTATS_INFO    STATUS                COMPLETED

That's perfect. It seems that I can gather system statistics without publishing them. And I don't care about the Y3K bug yet.

 

12c use pending stats = true

First, I'll check that a session can use the pending stats if chosen explicitly:

SQL> alter session set optimizer_use_pending_statistics=true;

Session altered.

Then I run the query:

SQL> set autotrace trace explain
SQL> select * from DEMO DEMO2;

Execution Plan
----------------------------------------------------------
Plan hash value: 4000794843

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 80500 |    28M|  1308   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DEMO | 80500 |    28M|  1308   (1)| 00:00:01 |
--------------------------------------------------------------------------

The cost is lower. This is exactly what I expected with my new - unpublished - statistics. Good. I don't know why it's lower than in 11g. Maybe the formula has changed. This is another place for comments ;)

 

12c use pending stats = false

Ok I checked that the published statistics are the same as before, but let's try to use them:

SQL> alter session set optimizer_use_pending_statistics=false;

Session altered.

and once again run the same query:

SQL> set autotrace trace explain

SQL> select * from DEMO DEMO3;

Execution Plan
----------------------------------------------------------
Plan hash value: 4000794843

----------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost  |
----------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 80500 |    28M|  1541 |
|   1 |  TABLE ACCESS FULL| DEMO | 80500 |    28M|  1541 |
----------------------------------------------------------

Note
-----
   - cpu costing is off (consider enabling it)

Oh. There is a problem here. 'cpu costing is off' means that there are no system statistics. The cost has been calculated as it was in old versions without system statistics. This is bad. I have gathered pending statistics, not published them, but all sessions now have their costing changed.

 

10053

Just a look at the 10053 trace show that I have a problem:

-----------------------------
SYSTEM STATISTICS INFORMATION
-----------------------------
System Stats are INVALID.
...
  Table: DEMO  Alias: DEMO3
    Card: Original: 80500.000000  Rounded: 80500  Computed: 80500.000000  Non Adjusted: 80500.000000
  Scan IO  Cost (Disk) =   1541.000000
  Scan CPU Cost (Disk) =   0.000000
  Total Scan IO  Cost  =   1541.000000 (scan (Disk))
                       =   1541.000000
  Total Scan CPU  Cost =   0.000000 (scan (Disk))
                       =   0.000000
  Access Path: TableScan
    Cost:  1541.000000  Resp: 1541.000000  Degree: 0
      Cost_io: 1541.000000  Cost_cpu: 0
      Resp_io: 1541.000000  Resp_cpu: 0
  Best:: AccessPath: TableScan
         Cost: 1541.000000  Degree: 1  Resp: 1541.000000  Card: 80500.000000  Bytes: 0.000000

It seems that with pending statistics the optimizer can't simply get the published values, and falls back as if there were no system statistics. This is obviously a bug. I've not used the undocumented new functions. They were used in the background, but it's totally supported to set PUBLISH to FALSE and then gather system statistics. The behavior should be either the same as in 11g - publishing the gathered stats - or gathering into pending stats only while sessions continue to use the published ones by default.

 

Conclusion

In 11g, be careful, system statistic changes are always published.

In 12c, don't gather system statistics when PUBLISH is set to false. We can expect that nice new feature in further versions, but for the moment it messes up everything. I'll not open an SR yet but hope it'll be fixed in future versions.
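If you already did it and just want to get back to a sane state, a minimal way out (assuming the default noworkload statistics are acceptable for you) is to re-enable publishing and wipe the gathered system statistics:

SQL> exec dbms_stats.set_global_prefs('PUBLISH','TRUE');
SQL> exec dbms_stats.delete_system_stats;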

update

Further investigations done by Stefan Koehler on this twitter conversation:

@FranckPachot IOSEEKTIM=1 is not accepted/set. Reason for cost drop to 1308 in case of pending SYS stats … 1/2 pic.twitter.com/jIotn07lbP

— Stefan Koehler (@OracleSK) June 11, 2015

Pillars of PowerShell: Windows OS

Pythian Group - Wed, 2015-06-10 06:42
Introduction

This is the fifth blog post continuing the series on the Pillars of PowerShell. The previous posts in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
  4. Profiling

The Windows Operating System (OS) is something a DBA should know and be familiar with since SQL Server has to run on top of it. I would say that on average most DBAs interact with the OS for troubleshooting purposes. In this post I just want to point out a few snippets of how PowerShell can help you do this type of work.

 Services Console Manager

In the SQL Server 2000 days DBAs became very familiar with typing in “services.msc” at the run prompt, then scrolling through the list of services to find out what state a service is in, or what login it is configured with. Now, if you are performing administrative tasks against SQL Server services it is always advised that you use SQL Server Configuration Manager. However, if you are looking to check the status of the service or performing a restart of just the service, PowerShell can help out.

Get-Service

This cmdlet has a few naming discrepancies that are worth understanding upfront when you start using PowerShell instead of the Services Console. In the Services Console you find a service by its “Name”; that is the “DisplayName” in the Get-Service cmdlet. The “Name” in Get-Service is actually the “Service Name” in the Services Console - do you follow? OK. So with SQL Server the DisplayName for a default instance would be “SQL Server (MSSQLSERVER)”, and the “Name” would be “mssqlserver”. This cmdlet allows you to filter by either field, so the two commands below will return the same thing:

Get-Service 'SQL Server (MSSQLSERVER)'
Get-Service mssqlserver

You can obviously see which one is easier to type right off. So with SQL Server you will likely know that a default instance’s name would be queried using “mssqlserver”, and a named instance would be “mssql$myinstance”. So if you wanted to find all of the instances running on a server you could use this one-liner:

Get-Service mssql*

Restart-Service

This does exactly what you think it will, so you have to be careful. You can call this cmdlet by itself and restart a service by referencing the “Name”, just as you did with Get-Service. I want to show you how the pipeline can work for you in this situation. You will find that some cmdlets in PowerShell have a few “special” features, and the service cmdlets are in that category: they accept an array as input, either passed directly to the -Name parameter or sent down the pipeline.
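For example, here is a minimal sketch (using the default instance's service names - adjust them for your environment) that passes an array straight to the -Name parameter:

Restart-Service -Name 'MSSQLSERVER','SQLSERVERAGENT' -Force -WhatIf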

So, let’s use the example that I have a server with multiple instances of SQL Server, and all the additional components like SSRS and SSIS. I only want to work with the named instance “SQL12″. I can get the status of all component services with this command:

Get-Service -Name 'MSSQL$SQL12','ReportServer$SQL12','SQLAgent$SQL12','MsDtsServer110'

Now if I need to do a controlled restart of all of those services I can just do this command:

Get-Service -Name 'MSSQL$SQL12','ReportServer$SQL12','SQLAgent$SQL12','MsDtsServer110' |
Restart-Service -Force -WhatIf

The added “-WhatIf” will not actually perform the operation, but will tell you what it would end up doing. Once I remove it, the restart will actually occur. All of this would look something like this in the console:

[Screenshot: Get-Service piped to Restart-Service with -WhatIf in the console]

Win32_Service

Some of you may recognize this one as a WMI class, and it is. Using WMI offers you a bit more information than the Get-Service cmdlet. You can see that by just running this code:

Get-Service mssqlserver
Get-WmiObject win32_service | where {$_.name -eq 'mssqlserver'}

The two commands above equate to the same referenced service but return slightly different bits of information by default:

[Screenshot: default output of Get-Service versus Get-WmiObject win32_service]

However, if you run the command below, you will see how gathering service info with WMI offers much more potential:

Get-WmiObject win32_service | where {$_.name -eq 'mssqlserver'} | select *

Get-Service will not actually give you the service account. So here is one function I use often (saved in my profile):

function Get-SQLServiceStatus ([string[]]$server)
{
 # Accepts a single server name or an array; queries each one for SQL-related services via WMI
 foreach ($s in $server) {
 Get-WmiObject win32_service -ComputerName $s |
	where {$_.DisplayName -match "SQL "} |
	select @{Label="ServerName";Expression={$s}},
	DisplayName, Name, State, Status, StartMode, StartName
 }
}

One specific thing I did in this function is declare the type of the parameter you pass into it. Using “[string[]]” means the parameter accepts an array, i.e. one or more values. Declaring the type is not enough on its own, though; you also have to ensure the function is written in a manner that can process the array, which I did simply by wrapping the commands in a “foreach” loop. So an example use of this against a single server would be:
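A sketch of that call (the server name is just a placeholder):

Get-SQLServiceStatus -server 'MyServer'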
[Screenshot: Get-SQLServiceStatus output for a single server]
If you wanted to run this against multiple servers it would go something like this:

Get-SQLServiceStatus -server 'MyServer','MyServer2','MyServer3' | Out-GridView
#another option
$serverList = 'MyServer','MyServer2','MyServer3'
Get-SQLServiceStatus -server $serverList | Out-GridView

Disk Manager

Every DBA should be very familiar with this management console and can probably get to it blindfolded. You might use this or “My Computer” when you need to see how much free space there is on a drive. If you happen to be working in an environment that only has Windows Server 2012 and Windows 8 or higher, I wish I was there with you. PowerShell 4.0 and higher offers storage cmdlets that let you get information about your disks and volumes much more easily and cleanly. They actually use CIM (Common Information Model), which is what WMI is built upon. I read somewhere that basically “WMI is just Microsoft's way of implementing CIM”. Microsoft is obviously going back to the standard, as it has done in other areas. CIM is worth learning more about, and it even allows you to connect to a PowerShell 2.0 machine and get the same information.
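As a rough sketch of that older-machine scenario (the computer name is hypothetical, and the local machine needs PowerShell 3.0 or later for the CIM cmdlets), you can build a CIM session over DCOM and query it much like WMI:

$opt = New-CimSessionOption -Protocol Dcom
$cim = New-CimSession -ComputerName 'Server1' -SessionOption $opt
# Same volume information as the WMI query, retrieved through the CIM session
Get-CimInstance -CimSession $cim -ClassName Win32_Volume |
select DriveLetter, Label, Capacity, FreeSpace
Remove-CimSession $cim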

Anyway, back to the task at hand. If you are working on PowerShell 3.0 or lower you can use Get-WmiObject and win32_volume to get information similar to what the storage cmdlet Get-Volume returns in 4.0:

Get-Volume
Get-WmiObject win32_volume | select DriveLetter, Label, FileSystem,
@{Label="SizeRemaining";Expression={"{0:N2}" -f($_.FreeSpace/1GB)}},
@{Label="Size";Expression={"{0:N2}" -f($_.Capacity/1GB)}} | Format-Table
[Screenshot: win32_volume output formatted as a table]

Windows Event Viewer

Almost everyone is familiar with, and knows their way around, the Windows Event Viewer. I left this for last for a reason: I want to walk you through an example that I think will help “put it all together” on what PowerShell can do for you. The scenario is a server that had a restart which was unexpected, at least to me. There are times that I will get paged by our Avail Monitoring product for a customer's site and need to find out who restarted the server, or why it restarted. The most common place to go for this is the Event Log.

Show-EventLog

If you just want to go through Event Viewer and manually find events, and it is a remote server, I find this to be the quickest method:

Show-EventLog -ComputerName Server1

This command will open Event Viewer and go through the process of connecting you to “Server1”. No more right-clicking and selecting “connect to another computer”!

Get-EventLog

I prefer to just dig into searching for events, and this is where Get-EventLog comes in handy. You can call this cmdlet and provide any combination of the following (a quick sketch combining them follows the list):

  1. Specific Log to look in (system, application, or security most commonly)
  2. Specify a time range
  3. Look just for specific entry type (error, information, warning, etc.)
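A rough sketch of such a call (the server name and the 24-hour window are just placeholders):

Get-EventLog -LogName System -EntryType Error -ComputerName Server1 `
 -After (Get-Date).AddDays(-1) -Before (Get-Date) -Newest 20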

In Windows Server 2003 Microsoft added a group policy, “Shutdown Event Tracker”, which when enabled writes particular events to the System Log when a server restarts, whether planned or unplanned. After an unplanned restart, the first user that logs into the server gets a prompt about the unexpected shutdown. For a planned restart, the person initiating it gets a similar prompt that has to be filled in before the restart will occur. What you can do with this cmdlet is search for those messages in the System Log.

To find the planned you would use:

Get-EventLog -LogName System -Message "*restart*" -ComputerName Server1 |
select * -First 1

Then to find the unplanned simply change “*restart*” to “*shutdown*”:
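Which, keeping the same remote server, would be something like:

Get-EventLog -LogName System -Message "*shutdown*" -ComputerName Server1 |
select * -First 1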


[Screenshot: Get-EventLog output for the unexpected shutdown event]

In this instance I find that SSIS and SSRS failed to start back up after the reboot. I found this by checking the status of the SQL Server services with my custom function, Get-SQLServiceStatus:

[Screenshot: Get-SQLServiceStatus showing the SSIS and SSRS services stopped]

To search for events around the shutdown I need to find the first event that is written to the Event Log when a server starts up - the one from the EventLog source. I can then use its time stamp as a starting point to search for messages about the SQL Server services that did not start up correctly: I grab the time stamp of that event and pass it into the Get-EventLog cmdlet to pull up error events. I am going to do that with this bit of code:

$t = Get-EventLog -LogName System -Source EventLog -Message "*shutdown*" | select * -First 1
Get-EventLog -LogName System -Before $t.TimeGenerated -Newest 5 -EntryType Error |
select TimeGenerated, Source, Message | Format-Table -Wrap
[Screenshot: error events returned around the unexpected restart]

Summary

I hope you found this post useful and that it gets you excited about digging deeper into PowerShell. In the next post I will close out the series by digging into SQL Server and a few areas where PowerShell can help.

 

Learn more about our expertise in SQL Server.

Categories: DBA Blogs

Hadoop generalities

DBMS2 - Wed, 2015-06-10 06:33

Occasionally I talk with an astute reporter — there are still a few left :) — and get led toward angles I hadn’t considered before, or at least hadn’t written up. A blog post may then ensue. This is one such post.

There is a group of questions going around that includes:

  • Is Hadoop overhyped?
  • Has Hadoop adoption stalled?
  • Is Hadoop adoption being delayed by skills shortages?
  • What is Hadoop really good for anyway?
  • Which adoption curves for previous technologies are the best analogies for Hadoop?

To a first approximation, my responses are: 

  • The Hadoop hype is generally justified, but …
  • … what exactly constitutes “Hadoop” is trickier than one might think, in at least two ways:
    • Hadoop is much more than just a few core projects.
    • Even the core of Hadoop is repeatedly re-imagined.
  • RDBMS are a good analogy for Hadoop.
  • As a general rule, Hadoop adoption is happening earlier for new applications, rather than in replacement or rehosting of old ones. That kind of thing is standard for any comparable technology, both because enabling new applications can be valuable and because migration is a pain.
  • Data transformation, as pre-processing for analytic RDBMS use, is an exception to that general rule. That said …
  • … it’s been adopted quickly because it saves costs. But of course a business that’s only about cost savings may not generate a lot of revenue.
  • Dumping data into a Hadoop-centric “data lake” is a smart decision, even if you haven’t figured out yet what to do with it. But of course, …
  • … even if zero-application adoption makes sense, it isn’t exactly a high-value proposition.
  • I’m generally a skeptic about market numbers. Specific to Hadoop, I note that:
    • The most reliable numbers about Hadoop adoption come from Hortonworks, since it is the only pure-play public company in the market. (Compare, for example, the negligible amounts of information put out by MapR.) But Hortonworks’ experiences are not necessarily identical to those of other vendors, who may compete more on the basis of value-added service and technology rather than on open source purity or price.
    • Hadoop (and the same is true of NoSQL) is most widely adopted at digital companies rather than at traditional enterprises.
    • That said, while all traditional enterprises have some kind of digital presence, not all have ones of the scope that would mandate a heavy investment in internet technologies. Large consumer-oriented companies probably do, but companies with more limited customer bases might not be there yet.
  • Concerns about skill shortages are exaggerated.
    • The point of distributed processing frameworks such as Spark or MapReduce is to make distributed analytic or application programming not much harder than any other kind.
    • If a new programming language or framework needs to be adopted — well, programmers nowadays love learning that kind of stuff.
    • The industry is moving quickly to make distributed systems easier to administer. Any skill shortages in operations should prove quite temporary.
Categories: Other

Flame Graph for quick identification of Oracle bug

Yann Neuhaus - Wed, 2015-06-10 04:28

Most of my performance stories start with a screenshot of Orachrome Lighty, my preferred tool for getting a graphical view of database performance, in Standard and Enterprise Edition without any options:

[Screenshot: Orachrome Lighty activity view showing high parse activity]