Feed aggregator

Remote DBA Benefits Comparison Series- Part 2- Service Quality and Flexibility

Chris Foot - Wed, 2016-06-29 10:21

In part 1, we talked about the cost reduction benefits that remote DBA service providers offer customers. In part 2 of the Remote DBA Benefits Comparison Series, we’ll discuss the continuity, quality of service, and support flexibility benefits customers experience when using a remote DBA service provider. We’ll also compare these benefits to in-house and consulting alternatives.

Documentum Multiple ADTS: Switching rendition queue

Yann Neuhaus - Wed, 2016-06-29 08:29

As part of my previous posts about having 2 rendition servers for one docbase (see below), I’ll show you how to simply switch a rendition queue item to the other server:
http://blog.dbi-services.com/documentum-multiple-adts-ratio-of-rendition-creations-between-instances-part-1/
http://blog.dbi-services.com/documentum-multiple-adts-ratio-of-rendition-creations-between-instances-part-2/

I had an issue at a customer where one of the two rendition servers had been stuck for two days. As I explained in my previous posts, each server reserves a group of items from the queue for it to process. Let’s say the threshold is set to 5 items. Each server will then reserve 5 items in dmi_queue_item and set the attribute sign_off_user to itself, e.g. RNDSERVER_DOCBASE1.

Then it will process the items one by one; once one is done it will reserve a new one from the queue, and so on.

The problem is: if the rendition server is stuck for whatever reason, all reserved items will NOT go back to the available pool. It means that they will stay reserved by THIS rendition server until you fix the server and it starts processing them again.

You can imagine what I found at the customer: some documents had not been rendered for two days!
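Before fixing anything, you can check how many queue items each rendition server currently has reserved. This is only a sketch: the queue name dm_autorender_win31 is the usual default for ADTS rendition requests, adjust it if your setup differs.

select sign_off_user, count(*)
from dmi_queue_item
where name = 'dm_autorender_win31'
group by sign_off_user;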

So here is the simplest solution to put the items back in the pool:

update dmi_queue_item objects set sign_off_user ='' where sign_off_user ='RNDSERVER_DOCBASE1';

Hence all items will be set as available. The other rendition server should now reserve them, as the stuck server cannot reserve more items.

In case a big file is being processed by the first server and you want the remaining documents to be handled by the other one, you can also reserve items yourself manually with:

update dmi_queue_item objects set sign_off_user='RNDSERVER2_DOCBASE1' where item_id in ('09xxxx','09xxxx');
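You can then verify the reassignment with a quick check (again just a sketch, reusing the placeholder item ids from above):

select r_object_id, item_id, sign_off_user
from dmi_queue_item
where item_id in ('09xxxx','09xxxx');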

If you have any questions please use the comment section.

 

The article Documentum Multiple ADTS: Switching rendition queue appeared first on the dbi services blog.

VirtualBox 5.0.24 Released

Tim Hall - Wed, 2016-06-29 07:54

Less than two weeks since the last release, yesterday saw the appearance of VirtualBox 5.0.24.

The downloads and changelog are in the usual places.

I’ve only done the installation on Windows at this point, but it seemed to go fine. I’ll probably do the Mac and Linux installs when I get home tonight.

Cheers

Tim…

VirtualBox 5.0.24 Released was first posted on June 29, 2016 at 1:54 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Oracle Brings SPARC to the Cloud

Oracle Press Releases - Wed, 2016-06-29 07:00
Press Release
Oracle Brings SPARC to the Cloud
New SPARC S7 Products with Software in Silicon now available in Oracle Cloud, Engineered Systems and Servers

Redwood Shores, Calif.—Jun 29, 2016

Oracle today announced major new additions to the SPARC platform that for the first time bring the advanced security, efficiency and simplicity of SPARC to the cloud. Built on the new SPARC S7 microprocessor, the latest additions to the SPARC platform include new cloud services, engineered systems and servers.  

While the business benefits of the public cloud are increasingly clear, many organizations have yet to move enterprise workloads to the cloud due to performance, security and management concerns. To help eliminate those concerns and enable organizations to confidently move enterprise workloads to the cloud, the new SPARC platform is designed from the ground up to economically improve on cloud computing delivery of the most critical business applications and scale-out application environments.

The latest additions to the SPARC platform are built on the revolutionary new 4.27 GHz, 8-core/64-thread SPARC S7 microprocessor with Software in Silicon features such as Silicon Secured Memory and Data Analytics Accelerators, which delivers the industry’s highest per-core efficiency and enables organizations to run applications of all sizes on the SPARC platform at commodity price points. All existing commercial and custom applications will run on the new SPARC enterprise cloud services and solutions unchanged with significant improvements in security, efficiency and simplicity.

“We are still in the early phases of cloud computing adoption and as the market matures, organizations will increasingly move critical enterprise workloads to the cloud,” said John Fowler, executive vice president, Systems, Oracle. “To enable our customers to take advantage of this next stage of cloud computing to speed innovation, reduce costs and drive business growth, we are focused on delivering proven enterprise-grade services such as the Oracle SPARC platform in the Oracle Cloud.”

Some of the key features of the new SPARC platform include:

  • Effortless Security: The latest additions to the SPARC platform are designed for security and compliance and utilize Silicon Secured Memory capabilities to address malware attacks and programming errors. Wide-key encryption ciphers and hashes enable a fully encrypted cloud with less than two percent performance overhead. In addition, security is further enhanced through verified boot, immutable content that prevents unauthorized changes, enforced secured updates and a trusted and secure hardware and software supply chain that does not rely on intermediaries.
  • Breakthrough Efficiency: By taking advantage of the open APIs in the processor and integrated Data Analytics Accelerators, which deliver up to 10x greater analytics performance spanning enterprise, big data and cloud applications, the latest additions to the SPARC platform reduce latency and cost. When compared to x86 servers, the fully integrated S7-2 and S7-2L servers deliver up to 100 percent better per-core efficiency, 1.7x better per-core Java performance efficiency, 1.6x better per-core database OLTP performance efficiency, and 2-3x more bandwidth for high-traffic analysis and cloud apps.
  • Straightforward Simplicity: Taking integration a step beyond the server, the Oracle MiniCluster S7-2 Engineered System dramatically simplifies the top four most challenging aspects of enterprise computing: security and compliance; high availability; patching and administration; and performance tuning. By eliminating the need for a standard platform or OS and reducing security and database administration time and effort, the new Engineered System enables organizations to:

    • Secure systems by default by eliminating enterprise security expertise requirements
    • Automate compliance monitoring and auditing in order to maintain the secure state of the system over time
    • Make service resiliency effortless by taking advantage of a high availability operation that is engineered in to the hardware and software
    • Ensure the platform will always be up to date with the latest software and security enhancements through simple full stack patching
    • Enhance database and application performance through automatic performance tuning.

The new SPARC S7 processor-based cloud services and systems deliver commodity x86 economics and significant enterprise-class functionalities for security and analytics with Software in Silicon. They include new Oracle Cloud Compute platform services, the Oracle MiniCluster S7-2 Engineered System and Oracle SPARC S7 servers. These new products are designed to seamlessly integrate with existing infrastructure and include fully integrated virtualization and management for cloud.

The new Oracle SPARC Cloud service that is now part of the SPARC platform is a dedicated compute service to provide organizations with a simple, secure and efficient compute platform in the Cloud. The new service extends the complete suite of cloud services that Oracle provides to help organizations rapidly build and deploy rich applications—or extend Oracle Cloud Applications—on an enterprise-grade cloud platform.

To extend the security and performance benefits of the Oracle SuperCluster engineered systems to mid-size computing, Oracle has also introduced Oracle MiniCluster S7-2. Through full application and database compatibility with SuperCluster M7, Oracle MiniCluster enables organizations to reduce hardware and software costs at a fraction of the cost of commodity solutions. The new Oracle Engineered System is designed to support multi-tenant application and/or database consolidation, remote office/branch office computing demands and test/development environments.

Oracle has also introduced new additions to the SPARC server product line that extend the M7/T7 portfolio to address scale-out and cloud workloads at attractive new low price points. The new two-socket SPARC S7 servers are available in different configuration options that are optimized for either compute or storage and IO density and include Software in Silicon offload features for malware attack prevention, no compromise encryption and data analytics acceleration.

Contact Info
Sarah Fraser
Oracle PR
+1.650.743.0660
sarah.fraser@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Sarah Fraser

  • +1.650.743.0660

shell script to send two outputs to single file

Tom Kyte - Wed, 2016-06-29 00:06
Hi Tom, I need to prepare one shell script, which contains start and end time of the script(which is another shell script) and one database query. In this script, i am good to execute the db query and send output to file.In the same way,i want t...
Categories: DBA Blogs

Convert rows to columns

Tom Kyte - Wed, 2016-06-29 00:06
Hi Tom, I have a table: CREATE TABLE TEACHER (NAME VARCHAR(20), PRO VARCHAR(20)); with this data in it: INSERT INTO TEACHER VALUES ('A','PROFESSOR'); INSERT INTO TEACHER VALUES ('B','DOCTOR'); INSERT INTO TEACHER VALUES ('C','ENGINEER');...
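A typical way to tackle this (just a sketch, not the thread’s answer; the hard-coded name list is an assumption and often has to be generated dynamically) is the 11g PIVOT clause or plain conditional aggregation:

-- Sketch: one column per NAME, holding the corresponding PRO value
select *
from   teacher
pivot  (max(pro) for name in ('A' as a, 'B' as b, 'C' as c));

-- Pre-11g alternative with conditional aggregation
select max(case when name = 'A' then pro end) as a,
       max(case when name = 'B' then pro end) as b,
       max(case when name = 'C' then pro end) as c
from   teacher;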
Categories: DBA Blogs

Moving Constraints from one table to another

Tom Kyte - Wed, 2016-06-29 00:06
Hi asktom team, I want to convert a NON - PARTITIONED table to LIST - partitioned table. So to do that I am doing the following: FOR EXAMPLE take ORDERS table: -Renaming the TABLE ORDERS to ORDERS_OLD. -Creating a table ORDERS same as ORDER...
Categories: DBA Blogs

What are possible meanings of error OCCI ORA-12560: TNS:protocol adapter error?

Tom Kyte - Wed, 2016-06-29 00:06
Hi, The Oracle error messages are not always very descriptive. Searching the internet I found couple reasons why we can get ORA-12560 error: 1) Oracle database instance is not started 2) ORACLE_HOME is not set (for version 10) What are other ...
Categories: DBA Blogs

Performance issue with user_tab_partitions query

Tom Kyte - Wed, 2016-06-29 00:06
Hi, We have the below query which is taking lot of time....while reading the partition name for the given high_value of table table_1. Here P_SEQ is the input value that we are passing through a procedure to this query. SELECT to_char(x.parti...
Categories: DBA Blogs

Inserting a BLOB file into oracle using a pl/sql script

Tom Kyte - Wed, 2016-06-29 00:06
I cannot for the life of me figure out how to successfully insert a file into a blob column in a DB, using a PL/SQL script. Should be able to reuse this script indefinitely, preferably, with several files at the same time, but as long as I can do on...
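One common pattern for this (a sketch only; MY_DIR, the file name and the target table my_docs are hypothetical, and the directory object must point to a folder readable by the database server) combines BFILENAME with DBMS_LOB.LOADFROMFILE:

-- Sketch: load one OS file into a BLOB column
declare
  l_bfile bfile := bfilename('MY_DIR', 'document.pdf'); -- hypothetical directory object and file
  l_blob  blob;
begin
  insert into my_docs (id, doc) values (1, empty_blob())  -- hypothetical target table
  returning doc into l_blob;
  dbms_lob.open(l_bfile, dbms_lob.lob_readonly);
  dbms_lob.loadfromfile(l_blob, l_bfile, dbms_lob.getlength(l_bfile));
  dbms_lob.close(l_bfile);
  commit;
end;
/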
Categories: DBA Blogs

getting error for NOLOGGING NOCACHE

Tom Kyte - Wed, 2016-06-29 00:06
Hi there, We have recently migrated our database from Oracle 9i to Oracle 11g. Now,One of our procedure is having query to create external table of oracle loader type. It also include NOLOGGING NOCACHE. This procedure was working properly in Oracl...
Categories: DBA Blogs

Can Oracle 8i client connect to Oracle 12c server?

Tom Kyte - Wed, 2016-06-29 00:06
Can Oracle 8i client connect to Oracle 12c server? If yes, please anybody can help me that what are the changes does in Oracle 8i client's file and Oracle 12c Server's file...
Categories: DBA Blogs

Generating Dates between two date ranges

Tom Kyte - Wed, 2016-06-29 00:06
Hi Tom , I need to write a query which will give me a list of dates between two date ranges (start date and end date) . For Example if my start date is 12/01/2003 and end Date is 12/31/2003. i need to generate my output as Date ---...
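The usual trick for this request (a sketch; the two dates are the ones from the question) is a row generator based on CONNECT BY LEVEL:

-- Sketch: one row per day between the two dates, inclusive
select date '2003-12-01' + level - 1 as day
from   dual
connect by level <= date '2003-12-31' - date '2003-12-01' + 1;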
Categories: DBA Blogs

Refresh Materialized view when there is records in the source table

Tom Kyte - Wed, 2016-06-29 00:06
Hi , Need help to refresh one materialize view only when there is records in the table. say example . create materialize view MABCD as select field1,field2,field3 from abcdef ; my requirement is if abcdef have records greater than zer...
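One way to approach this (a sketch; the object names MABCD and abcdef come from the question, and DBMS_MVIEW.REFRESH is the standard refresh API) is a small PL/SQL block that only refreshes when the source table has at least one row:

-- Sketch: refresh MABCD only if abcdef contains data
declare
  l_exists number;
begin
  select count(*) into l_exists from abcdef where rownum = 1;
  if l_exists > 0 then
    dbms_mview.refresh('MABCD');
  end if;
end;
/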
Categories: DBA Blogs

Expecting Sum-Of-Parts Performance From Shared Solid State Storage? I Didn’t Think So. Neither Should Exadata Customers. Here’s Why.

Kevin Closson - Tue, 2016-06-28 19:55

 

[Image: sum-of-parts]

Last month I had the privilege of delivering the key note session to the quarterly gathering of Northern California Oracle User Group. My session was a set of vignettes in a theme regarding modern storage advancements. I was mistaken on how much time I had for the session so I skipped over a section about how we sometimes still expect systems performance to add up to a sum of its parts. This blog post aims to dive in to this topic.

To the best of my knowledge there is no marketing literature about XtremIO Storage Array that suggests the array performance is due to the number of solid state disk (SSD) drives found in the device. Generally speaking, enterprise all-flash storage arrays are built to offer features and performance–otherwise they’d be more aptly named Just a Bunch of Flash (JBOF).  The scope of this blog post is strictly targeting enterprise storage.

Wild, And Crazy, Claims

Lately I’ve seen a particular slide–bearing Oracle’s logo and copyright notice–popping up to suggest that Exadata is vastly superior to EMC and Pure Storage arrays because of Exadata’s supposed unique ability to leverage all flash bandwidth in the Exadata X6 family. You might be able to guess by now that I aim to expose how invalid this claim is. To start things off I’ll show a screenshot of the slide as I’ve seen it. Throughout the post there will be references to materials I’m citing.

DISCLAIMER: The slide I am about to show was not a fair use sample of content from oracle.com and it therefore may not, in fact, represent the official position of Oracle on the matter. That said, these slides do bear logo and copyright! So, then, the slide:

[Slide: X6-sum-of-parts]

I’ll start by listing a few objections. My objections are always based on science and fact so objecting to content–in particular–is certainly appropriate.

  1. The slide suggests an EMC XtremIO 4 X-Brick array is limited to 60 megabytes per second per “flash drive.”
    1. An XtremIO 4 X-Brick array has 100 Solid State Disks (SSD)–25 per X-Brick. I don’t know where the author got the data but it is grossly mistaken. No, a 4 X-Brick array is not limited to 60 * 100 megabytes per second (6,000MB/s). An XtremIO 4 X-Brick array is a 12GB/s array: click here. In fact, even way back in 2014 I used Oracle Database 11g Real Application Clusters to scan at 10.5GB/s with Parallel Query (click here). Remember, Parallel Query spends a non-trivial amount of IPC and work-brokering setup time at the beginning of a scan involving multiple Real Application cluster nodes. That query startup time impacts total scan elapsed time thus 10.5 GB/s reflects the average scan rate that includes this “dead air” query startup time. Everyone who uses Parallel Query Option is familiar with this overhead.
  2. The slide suggests that 60 MB/s is “spinning disk level throughput.”
    1. Any 15K RPM SAS (12Gb) or FC hard disk drive easily delivers sequential scans throughput of more than 200 MB/s.
  3. The slide suggests XtremIO cannot scale out.
    1. XtremIO architecture is 100% scale out. One can start with a single X-Brick and add up to 7 more. In the current generation scaling out in this fashion with XtremIO adds 25 more SSDs, storage controllers (CPU) and 4 more Fibre Channel ports per X-Brick.
  4. The slide suggests “bottlenecks at server inputs” further retard throughput when using Fibre Channel.
    1. This is just silly. There are 4 x 8GFC host-side FC ports per XtremIO X-Brick. I routinely test Haswell-EP 2-socket hosts with 6 active 8GFC ports (3 cards) per host. Can a measly 2-socket host really drive 12 GB/s of Oracle scan bandwidth? Yes! No question. In fact, challenge me on that and I’ll show AWR proof of a single 2-socket host sustaining Oracle table scan bandwidth at 18 GB/s. No, actually, I won’t make anyone go to that much trouble. Click the following link for AWR proof that a single host with 2 6-core Haswell-EP (2s12c24t) processors can sustain Oracle Database 12c scan bandwidth of 18 GB/s: click here. I don’t say it frequently enough, but it’s true; you most likely do not know how powerful modern servers are!
  5. The slide says Exadata achieves “full flash throughput.”
    1. I’m laughing, but that claim is, in fact, the perfect segue to the next section.
Full Flash Throughput Scan Bandwidth

The slide accurately states that the NVMe flash cards in the Exadata X6 models are rated at 5.5GB/s. This can be seen in the F320 datasheet. Click the following link for a screenshot of the datasheet: click here. So the question becomes, can Exadata really achieve full utilization of all of the NVMe flash cards configured in the Exadata X6? The answer is no, but sort of. Please allow me to explain.

The following graph depicts the reality of how close a full-rack Exadata X6 comes to realizing full flash potential. As we know a full-rack Exadata has 14 storage servers. The High Capacity (HC) model has 4 NVMe cards per storage server purposed as a flash cache. The HC model also comes with 12 7,200 RPM hard drives per storage server as per the datasheet.  The following graph shows that yes, indeed Exadata X6 does realize full flash potential when performing a fully-offloaded scan (Smart Scan). After all, 4 * 14 * 5.5 is 308 and the datasheet cites 301 GB/s scan performance for the HC model. This is fine and dandy but it means you have to put up with 168 (12 * 14) howling 7,200 RPM hard disks if you are really intent on harnessing the magic power of full-flash potential! Why the sarcasm? It’s simple really–just take a look at the graph and notice that the all-flash EF model realizes just a slight bit more than 50% of the full flash performance potential. Indeed, the EF model has 14 * 8 * 5.5 == 616 GB/s of full potential available–but not realizable.

No, Exadata X6 does not–as the above slide suggests–harness the full potential of flash. Well, not unless you’re willing to put up with 168 round, brown, spinning thingies in the configuration. Ironically, it’s the HDD-Flash hybrid HC model that enjoys the “full flash potential.” I doubt the presenter points this bit out when slinging the above slide.

[Graph: BW – full flash scan bandwidth potential vs. realized, HC and EF models]

IOPS

The above slide doesn’t actually suggest that Exadata X6 achieves full flash potential for IOPS, but since these people made me crack open the datasheets and use my brain for a moment or two I took it upon myself to do the calculations. The following graph shows the delta between full flash IOPS potential and realized IOPS for the full-rack HC and EF Exadata X6 models.

No…it doesn’t realize full flash potential in terms of IOPS either.

[Graph: IOPS – full flash IOPS potential vs. realized, HC and EF models]

References

Here is a link to the full slide deck containing the slide I focused on in this post: http://konferenciak.advalorem.hu/uploads/files/INFR_Sarecz_Lajos.pdf.

Just in case that copy of the deck disappears, I pushed a copy up to the WayBack Machine: click here.

Summary

XtremIO Storage Array literature does not suggest that the performance characteristics of the array are a simple product of how many component SSDs the array is configured with. To the best of my knowledge neither does Pure Storage suggest such a thing.

Oracle shouldn’t either. I made that point clear.


Filed under: oracle

New Version Of XPLAN_ASH Utility

Randolf Geist - Tue, 2016-06-28 18:23
A new version 4.23 of the XPLAN_ASH utility is available for download.

As usual the latest version can be downloaded here.

This version comes only with minor changes, see the change log below.

Here are the notes from the change log:

- Finally corrected the very old and wrong description of "wait times" in the script comments, where it was talking about "in-flight" wait events but that is not correct. ASH performs a "fix-up" of the last 255 samples or so and updates them with the time waited, so these wait events are not "in-flight"

- Removed some of the clean up code added in 4.22 to the beginning of the script, because it doesn't really help much but spooled script output always contained these error messages about non-existent column definitions being cleared

- The "Concurrent I/O" sections will now also be skipped in LIMITED_ASH mode

- Some more fixes to the I/O figures in the "Activity Timeline based on ASH" - the spreading introduced in 4.22 needed some further refinement (see 4.22 change log for more details)

The Importance of Routine

Complete IT Professional - Tue, 2016-06-28 17:08
In the last two weeks, I’ve learnt about the importance of a routine. How does it relate to your career, and why should you care? The Last Two Weeks So, you might have noticed there hasn’t been a post on Complete IT Professional in the last week or so. I mentioned there was a delay […]
Categories: Development

TLS 1.2 Certified with E-Business Suite 12.2

Steven Chan - Tue, 2016-06-28 16:57

I'm pleased to announce that Oracle E-Business Suite 12.2 inbound, outbound, and loopback connections are now certified with TLS 1.2, 1.1, and 1.0. If you have not already migrated from SSL to TLS, you should begin planning the migration for your environment. 

For more information on patching and configuration requirements when migrating to TLS 1.2 from TLS 1.0 or SSL, refer to My Oracle Support Knowledge Document 1367293.1.

The steps and configuration outlined in MOS Note 1367293.1 will address recent security vulnerabilities (e.g. Weak Cipher Suites/FREAK, POODLE, DROWN).

Configuration Options

  • Certification with TLS 1.2, 1.1 and 1.0

    The default Oracle E-Business Suite 12.2 configuration allows for the handshake between the client and server to negotiate and use the highest version of TLS (either 1.2, 1.1, or 1.0) supported end-to-end by all parties.

For example, if the outbound connection used by iProcurement is by default configured for TLS 1.2, 1.1 and 1.0 and if a call is made from Oracle E-Business Suite iProcurement to an external site that supports TLS 1.2 and a common cipher suite is found, then TLS 1.2 will be used end-to-end. If a call is made from Oracle E-Business Suite iProcurement to an external site that supports TLS 1.1 and a common cipher suite is found, then the handshake negotiation will resolve to use TLS 1.1 for the connection encryption.

  • Restricting the TLS Protocol (Optional Configuration)

You may optionally configure Oracle E-Business Suite to use the highest level of TLS certified with Oracle E-Business Suite Release 12.2. This option currently allows you to configure TLS 1.2 for all inbound, outbound and loopback connections. 

Warning: If you restrict Oracle E-Business Suite 12.2 to use only TLS 1.2, this configuration could result in the inability to connect to other sites or browsers that do not support TLS 1.2.

  • Restricting Connections to HTTPS Only (Optional Configuration)

You also may optionally configure Oracle E-Business Suite to disable the HTTP port and use the HTTPS port only. 

Where can I learn more?
There are several guides and documents that cover Oracle E-Business Suite 12.2 secure configuration and encryption. You can learn more by reading the following:

SSL or TLS 1.0 Reference Note

If you are using SSL or TLS 1.0 and need to review your current configuration or renew your certificate, you may refer to the following:

Related Articles

Categories: APPS Blogs

PostgreSQL as a central reporting hub? – Yes, of course

Yann Neuhaus - Tue, 2016-06-28 15:31

For every client we do projects for, there usually are database systems from various vendors. In most places you at least find some Oracle and MS SQL Server databases. Probably somewhere there are even some MySQL/MariaDB or PostgreSQL instances running. The Oracle and MS SQL Server stuff is almost always there because vendors require it. For MySQL/MariaDB the common use cases are web applications, ticketing systems and home-grown developments. At some point in time there might be a requirement to centralize important data of all these databases into a central reporting hub. The more data you have, the more there is a requirement to analyze it and to create reports that drive some sorts of decisions. PostgreSQL is very well prepared to assist in this. If you have read the last posts about connecting your PostgreSQL instance to either Oracle, MS SQL Server, MariaDB/MySQL or even other PostgreSQL instances, you might already know in which direction this post will go.

This is a sample use case: There is a requirement to centralize all the sales figures of a company into one big reporting database. The (simplified) database landscape in this company looks exactly as described above:

[Diagram: pg_reporting_1 – the current database landscape]

There is one Oracle database, one MS SQL Server instance, one MariaDB/MySQL instance and one PostgreSQL instance. Each of them holds some sales figures which shall be integrated into the reporting database. For the demo setup I use the Dell DVD Store Database sample schema. You can read how to set this up for:

As the goal is to have a PostgreSQL instance as a central reporting hub the desired database landscape will approximately look like this:

[Diagram: pg_reporting_2 – PostgreSQL as the central reporting hub]

Lets assume all the foreign data wrappers are already setup in the central PostgreSQL instance:

(postgres@[local]:4445) [postgres] > \dx
                                                 List of installed extensions
     Name     |    Version    |   Schema   |                                    Description                                    
--------------+---------------+------------+-----------------------------------------------------------------------------------
 mysql_fdw    | 1.0           | public     | Foreign data wrapper for querying a MySQL server
 oracle_fdw   | 1.1           | public     | foreign data wrapper for Oracle access
 plpgsql      | 1.0           | pg_catalog | PL/pgSQL procedural language
 postgres_fdw | 1.0           | ds2_oracle | foreign-data wrapper for remote PostgreSQL servers
 tds_fdw      | 2.0.0-alpha.1 | public     | Foreign data wrapper for querying a TDS database (Sybase or Microsoft SQL Server)

In addition lets assume all the foreign servers and all the user mappings are there and are working:

(postgres@[local]:4445) [postgres] > select srvname,srvoptions from pg_foreign_server;
   srvname    |                                      srvoptions                                       
--------------+---------------------------------------------------------------------------------------
 mysql_server | {host=192.168.22.41,port=3306}
 mssql_svr    | {servername=192.168.22.102,port=1433,database=ds2,tds_version=7.3,msg_handler=notice}
 oracle_srv   | {dbserver=//192.168.22.42/DELLDVD}
 postgres_srv | {host=192.168.22.40,port=5432,dbname=ds2}
(4 rows)

(postgres@[local]:4445) [postgres] > select * from pg_user_mappings;
 umid  | srvid |   srvname    | umuser | usename  |               umoptions                
-------+-------+--------------+--------+----------+----------------------------------------
 65547 | 65546 | mysql_server |     10 | postgres | {username=web,password=web}
 65587 | 65586 | mssql_svr    |     10 | postgres | {username=ds2user,password=ds2}
 65615 | 65614 | oracle_srv   |     10 | postgres | {user=DS2,password=ds2}
 65676 | 65675 | postgres_srv |     10 | postgres | {user=ds2,password=ds2}

As the central reporting database is already connected to all the other database systems how could you organize the fetching of the data? One approach is to create a separate schema for each of the foreign databases and one additional schema that combines the data:

(postgres@[local]:4445) [postgres] > create schema ds2_mssql;
(postgres@[local]:4445) [postgres] > create schema ds2_oracle;
(postgres@[local]:4445) [postgres] > create schema ds2_mysql;
(postgres@[local]:4445) [postgres] > create schema ds2_postgresql;
(postgres@[local]:4445) [postgres] > create schema ds2_combined;
(postgres@[local]:4445) [postgres] > \dn
      List of schemas
      Name      |  Owner   
----------------+----------
 ds2_combined   | postgres
 ds2_mssql      | postgres
 ds2_mysql      | postgres
 ds2_oracle     | postgres
 ds2_postgresql | postgres
 public         | postgres
(6 rows)

The next step is to create the foreign tables we want to work with.

For Oracle, MySQL/MariaDB and PostgreSQL we can use the import foreign schema command:

postgres=# import foreign schema "DS2" from server oracle_srv into ds2_oracle;
IMPORT FOREIGN SCHEMA
postgres=# import foreign schema "DS2" from server mysql_server into ds2_mysql;
IMPORT FOREIGN SCHEMA
postgres=# import foreign schema "public" from server postgres_srv into ds2_postgresql;
IMPORT FOREIGN SCHEMA

For MS SQL Server we need to specify the foreign tables on our own (there is a feature request to implement import foreign schema):

create foreign table ds2_mssql.orders 
  ( orderid numeric not null
  , orderdate timestamp(0) without time zone not null
  , customerid numeric                                
  , netamount numeric(12,2) not null 
  , tax numeric(12,2) not null 
  , totalamount numeric(12,2) not null 
  )
SERVER mssql_svr
OPTIONS (table 'dbo.orders', row_estimate_method 'showplan_all');

create foreign table ds2_mssql.orderlines
  ( orderlineid numeric not null 
  , orderid numeric not null 
  , prod_id numeric not null 
  , quantity numeric not null 
  , orderdate timestamp(0) without time zone not null 
  )
SERVER mssql_svr
OPTIONS (table 'dbo.orderlines', row_estimate_method 'showplan_all');

Having all the foreign tables available we can start to work with the data either by creating materialized views:

create materialized view ds2_combined.mv_orders as
  select * from ds2_mssql.orders
  union all
  select * from ds2_mysql."ORDERS"
  union all
  select * from ds2_oracle.orders
  union all
  select * from ds2_postgresql.orders
  with no data;
refresh materialized view ds2_combined.mv_orders with data;

… or by importing the data into PostgreSQL and then building reports on top of that:

BEGIN;
  insert into ds2_combined.orders select * from ds2_mssql.orders;
  insert into ds2_combined.orders select * from ds2_mysql."ORDERS";
  insert into ds2_combined.orders select * from ds2_oracle.orders;
  insert into ds2_combined.orders select * from ds2_postgresql.orders;
END;
BEGIN;
  insert into ds2_combined.orderlines 
         select * from ds2_mssql.orderlines;
  insert into ds2_combined.orderlines 
         select * from ds2_mysql."ORDERLINES";
  insert into ds2_combined.orderlines 
         select * from ds2_oracle.orderlines;
  insert into ds2_combined.orderlines 
         select * from ds2_postgresql.orderlines;
END;

Having all the data locally available you can do whatever you want with it. Have fun with reporting on your data …
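As a simple illustration (just a sketch; the orderdate and totalamount columns follow the Dell DVD Store orders definition shown above for the MS SQL foreign table), a consolidated report could then look like this:

-- Sketch: total sales per month across all four source databases
select date_trunc('month', orderdate) as month
     , count(*)                       as number_of_orders
     , sum(totalamount)               as total_sales
  from ds2_combined.orders
 group by date_trunc('month', orderdate)
 order by 1;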

Btw: The source for this was a session at the Swiss PGDAY 2016. You can download the slides there.

 

The article PostgreSQL as a central reporting hub? – Yes, of course appeared first on the dbi services blog.

Statspack Idle Events

Yann Neuhaus - Tue, 2016-06-28 11:26

When you don’t have Enterprise Edition + Diagnostic Pack, you cannot use AWR, but you can, and should, install Statspack. Statspack is still there, but unfortunately it does not evolve a lot. The most important section, the ‘Top 5 Timed Events’, shows only foreground events, or is supposed to do so. When a user process waits on a background process, this section must count only the foreground wait and not the activity of the background process, or we would have double accounting. The background activity is included in ‘Idle’ events in order to be excluded from this section.
But unfortunately, new versions come with new wait events, and the list of Statspack idle events is not up to date anymore.

Here is the ‘Top 5 Timed Events’ I got from a database between 22:00 and 23:00 where there is no application activity:

Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
LGWR worker group idle                               6      22,049 ######   65.2
AQPC idle                                          120       3,602  30014   10.7
heartbeat redo informer                          3,568       3,601   1009   10.7
lreg timer                                       1,195       3,598   3011   10.6
direct path read                                31,221         466     15    1.4
          -------------------------------------------------------------

Humm. What can you do with that? Idle events and timers are at the top. Direct path read seems to be minimal. And no CPU usage?
Obviously, something is wrong here.

Statspack uses a fixed list of wait events that are considered as ‘idle’ events, and this list is stored into STATS$IDLE_EVENT at Statspack installation time.
This comes from an age where wait classes were not there. In current versions, a more realistic list of idle wait events is in V$EVENT_NAME where wait_class=’Idle’.

Let’s compare them (that’s in 12.1.0.1)

First, are there some non-idle events that are considered as ‘idle’ by Statspack?


SQL> select s.event statspack_idle_event,v.name,v.wait_class from STATS$IDLE_EVENT s left outer join V$EVENT_NAME v on s.event=v.name where v.wait_class<>'Idle';
 
STATSPACK_IDLE_EVENT NAME WAIT_CLASS
--------------------------------------- --------------------------------------- ----------
null event null event Other
SQL*Net message to client SQL*Net message to client Network
SQL*Net more data from client SQL*Net more data from client Network
KSV master wait KSV master wait Other
parallel recovery slave wait for change parallel recovery slave wait for change Other

The goal of this blog post is not to detail the meaning of each of those events (search for them on tanelpoder.com as a good start for that), but if they are now considered as non-idle, Statspack should obey the same rule.

Then we can check which idle events are not in the Statspack list:


SQL> select s.event statspack_idle_event,v.name,v.wait_class from STATS$IDLE_EVENT s right outer join V$EVENT_NAME v on s.event=v.name where v.wait_class='Idle' and s.event is null;
 
STATSPACK_IDLE_EVENT NAME WAIT_CLASS
-------------------- ------------------------------------------ ----------
OFS idle Idle
heartbeat redo informer Idle
LGWR worker group idle Idle
Recovery Server waiting for work Idle
Recovery Server waiting restore start Idle
Recovery Server Surrogate wait Idle
Recovery Server Servlet wait Idle
Recovery Server Comm SGA setup wait Idle
parallel recovery coordinator idle wait Idle
recovery sender idle wait Idle
recovery receiver idle wait Idle
recovery merger idle wait Idle
virtual circuit next request Idle
lreg timer Idle
REPL Apply: txns Idle
REPL Capture/Apply: messages Idle
REPL Capture: archive log Idle
PL/SQL lock timer Idle
Emon coordinator main loop Idle
Emon slave main loop Idle
AQ: 12c message cache init wait Idle
AQ Cross Master idle Idle
AQPC idle Idle
Streams AQ: load balancer idle Idle
Sharded Queues : Part Maintenance idle Idle
REPL Capture/Apply: RAC AQ qmn coordinator Idle
iowp msg Idle
iowp file id Idle
netp network Idle
gopp msg Idle

There are a lot of them. We can see lot of idle events that have been introduced in recent versions.

The Statspack list is an old list. Here is how to refresh it:


delete from STATS$IDLE_EVENT;
insert into STATS$IDLE_EVENT select name from V$EVENT_NAME where wait_class='Idle';
commit;

Once I did that and ran a new Statspack report on the same snapshots as above, I got a more realistic ‘Top 5 Timed Events’:


Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
direct path read                                31,221         466     15   48.7
CPU time                                                       310          32.4
db file sequential read                         49,137          77      2    8.0
SQL*Net vector data to client                   15,189          31      2    3.3
enq: TM - contention                                 1          24  23937    2.5

Actually, this 22:00 to 23:00 time period is where the maintenance jobs are running. dbms_space.auto_space_advisor_job_proc likes to read your tables in bulk in order to see if there is some free space. And I don’t like this 24-second TM lock wait at a time when I thought the database was not in use. This was hidden in the original report.

Patch?

Statspack is still supported and there’s a patch to add the following events as idle:
"virtual circuit next request" "AQ Cross Master idle" "AQ: 12c message cache init wait" "AQPC idle" "Emon coordinator main loop" "Emon slave main loop" "LGWR worker group idle" "OFS idle" "REPL Apply: txns" "REPL Capture/Apply: RAC AQ qmn coordinator" "REPL Capture/Apply: messages" "REPL Capture: archive log" "Recovery Server Comm SGA setup wait" "Recovery Server Servlet wait" "Recovery Server Surrogate wait" "Recovery Server waiting for work" "Recovery Server waiting restore start" "Sharded Queues : Part Maintenance idle" "Streams AQ: load balancer idle" "gopp msggopp msg" "heartbeat redo informer" "iowp file id" "iowp msg" "lreg timer" "netp network" "parallel recovery coordinator idle wait" "recovery merger idle wait" "recovery receiver idle wait" "recovery sender idle wait" "imco timer" "process in prespawned state"
(Nice way to be referenced by google for all those improbable wait events, isn’t it?)

However, I think that filling STATS$IDLE_EVENT from V$EVENT_NAME, or maybe even replacing it with a view, can be a better long-term solution. Each version comes with new wait events and it seems that Statspack evolves only through patches.
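A minimal sketch of that idea (assuming nothing in your Statspack installation requires STATS$IDLE_EVENT to be a real table, and that PERFSTAT has the select privilege on V_$EVENT_NAME):

-- Sketch: replace the static table with a view so new idle events
-- are picked up automatically with every release
drop table STATS$IDLE_EVENT;
create view STATS$IDLE_EVENT as
  select name event from v$event_name where wait_class = 'Idle';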

 

The article Statspack Idle Events appeared first on the dbi services blog.
