
Feed aggregator

Full Disclosure

Michael Feldstein - Sat, 2014-08-02 12:41

As you probably know, we run a consulting business (MindWires Consulting) and sometimes work with the companies and schools that we write about here. Consequently, we periodically remind you and update you on our conflict of interest policies. We do our best to avoid or minimize conflicts of interest where we can, but since our system isn’t perfect, we want you to understand how we handle them when they arise so that you can consider our analysis with the full context in mind. We value your trust and don’t take it for granted.

We talk with each other a lot about how to deal with conflicts of interest because we run into them frequently. On the one hand, we find that working with the vendors and schools that we write about provides us with insight that is helpful to a wide range of clients and readers. There just aren’t many people who have the benefit of being able to see how all sides of these ed tech relationships work. But along with that perspective comes an inevitable and perpetual tension with objectivity. When we started our business together 18 months ago, we didn’t have a clear idea where these tensions would show up or how big an issue they might turn out to be. We originally thought that our blogging was going to remain an addiction that was subsidized but somewhat disconnected from our consulting. But it turns out that more than 90% of our business comes from readers of the blog, and a significant portion of it comes out of conversations stimulated by a specific post. Now that we understand that relationship better, we’re getting a better handle on the kinds of conflict of interest that can arise and how best to mitigate them. Our particular approach in any given situation depends a lot on whether the client wants analysis or advice.

Disclosure

In many cases, clients want us to provide deeper, more heavily researched, and more tailored versions of the analysis that we’ve provided publicly on this blog. In this situation, there isn’t a strong, direct conflict of interest between providing them with what they are asking for and writing public analysis about various aspects of their business. That said, no matter how hard we try to write objectively about an organization that is, was, or could be a client, human nature being what it is, we can’t guarantee that we will never be even subconsciously influenced in our thinking. That is why we have a policy of always disclosing when we are blogging about a client. We have done this in various ways in the past. Going forward, we are standardizing on an approach in which we will insert a disclosure footnote at the end of the first sentence in the post in which the client is named. It will look like this.[1] (We are not fully satisfied that the footnote is prominent enough, so we will be investigating ways to make it a little more prominent.) We will insert these notices in all future posts on the blog, whether or not we are the authors of those posts. In cases where the company in question is not currently a client but was recently and could be again in the near future, we will note that the company “was recently a client of MindWires Consulting”.

Recusal

Sometimes the client wants not only analysis but also strategic advice. Those situations can be trickier. We want to avoid cases in which we blog in praise (or condemnation) of a company for taking an action that they paid us to tell them to take. Our policy is that we don’t blog about any decisions that a company might make based on our advice. There are some theoretical situations in which we might consider making an exception to that rule, but if they ever do come up in reality, then the disclosure principle will apply. We will let you know if, when, and why we would make the exception. Aside from that currently theoretical exception, we recuse ourselves from blogging about the results of our own consulting advice. Furthermore, when potential clients ask us for advice that we think will put us into a long-term conflict of interest regarding one of our core areas of analysis, we turn down that work. Analysis takes precedence over advice.

Getting Better at This

We’re going to continue thinking about this and refining our approach as we learn more. We also have some ideas about business models that could further minimize potential conflicts in the future. We’ll share the details with you if and when we get to the point where we’re ready to move forward on them. In the meantime, we will continue to remind you of our current policy periodically so that you are in a better position to judge our analysis. And as always, we welcome your feedback.

 

  1. Full disclosure: Acme Ed Tech Company is a client of MindWires Consulting, the sponsor of e-Literate.

The post Full Disclosure appeared first on e-Literate.

RMAN Pet Peeves

Michael Dinh - Sat, 2014-08-02 12:38

Do you validate your backup and what command do you use?

Lately, I have been using restore database validate preview summary to kill 2 birds with 1 stone.

The issue is that RMAN will skip validation of an archived log backupset when the archived log itself still exists.

Does this seem wrong to you?

Please take a look at a test case here.

What do you think?


Are You Using BULK COLLECT and FORALL for Bulk Processing Yet?

Eddie Awad - Sat, 2014-08-02 12:01

Steven Feuerstein was dismayed when he found, in a PL/SQL procedure, a cursor FOR loop that contained an INSERT and an UPDATE statement.

That is a classic anti-pattern, a general pattern of coding that should be avoided. It should be avoided because the inserts and updates change the tables on a row-by-row basis, which maximizes the number of context switches between SQL and PL/SQL and consequently greatly slows the performance of the code. Fortunately, this classic anti-pattern has a classic, well-defined solution: use BULK COLLECT and FORALL to switch from row-by-row processing to bulk processing.
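To make the switch concrete, here is a minimal sketch of the bulk-processing pattern (not Steven's original code); the table and column names (emp_staging, employees, salary_history, emp_id, new_salary) are hypothetical placeholders. A single BULK COLLECT fetch pulls a batch of rows from the cursor in one context switch, and each FORALL statement then sends the whole batch of DML to the SQL engine in one more context switch, instead of one switch per row.

DECLARE
    -- Collections of scalars; names are hypothetical, for illustration only
    TYPE t_id_tab  IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
    TYPE t_sal_tab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;

    l_ids  t_id_tab;
    l_sals t_sal_tab;

    CURSOR c_src IS
        SELECT emp_id, new_salary
          FROM emp_staging;
BEGIN
    OPEN c_src;
    LOOP
        -- One context switch fetches up to 100 rows into the collections
        FETCH c_src BULK COLLECT INTO l_ids, l_sals LIMIT 100;
        EXIT WHEN l_ids.COUNT = 0;

        -- One context switch applies the whole batch of inserts
        FORALL i IN 1 .. l_ids.COUNT
            INSERT INTO salary_history (emp_id, salary)
            VALUES (l_ids(i), l_sals(i));

        -- One context switch applies the whole batch of updates
        FORALL i IN 1 .. l_ids.COUNT
            UPDATE employees
               SET salary = l_sals(i)
             WHERE emp_id = l_ids(i);
    END LOOP;
    CLOSE c_src;
END;
/

The LIMIT clause keeps memory use bounded; 100 is an arbitrary batch size, and larger batches trade memory for fewer round trips between the two engines.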

© Eddie Awad's Blog, 2014.


Linking of Bugs, Notes and SRs now available in SRs

Joshua Solomin - Fri, 2014-08-01 18:01

We have extended the linking capability within the body of an SR. Because of security concerns and issues with embedded HTML, we don't let SRs contain HTML directly.

But we now allow a variety of formats for linking to Bugs, Documents, and other SRs within the body of an SR.

[Screenshot: link formats that work in SR updates]

So now you can either link directly to these items when a support engineer gives you a bug or doc to follow, or update the SR yourself using one of these formats. Hopefully they are not too tough to follow.

Knowledge Documents Formats
note 1351022.2
doc id 1351022.2
document id 1351022.2

Bug Formats
bug 1351022.2

Service Request Formats
SR 3-8777412995
SR Number 3-8777412995
Service Request 3-8777412995

Hope this helps!


REST enable your Database for CRUD with TopLink/EclipseLink and JDeveloper

Shay Shmeltzer - Fri, 2014-08-01 17:10

It seems that REST interfaces are all the rage now for accessing your backend data; this is especially true in the world of mobile development. In this blog I'm going to show you how easy it is to provide a complete REST interface for your database by leveraging TopLink/EclipseLink and JDeveloper.

This relies on a capability that is available in TopLink 12c where every JPA entity that you have created can be RESTified with a simple servlet that TopLink provides.

All you need to do is locate the file toplink-dataservices-web.jar on your machine (this is included in the JDeveloper install so you can just search that directory) and then package your project as a WAR.

At that point you'll be able to get a complete set of CRUD operations for each entity.

In the video below I'm retrieving departments by their id using a URL like this:

http://127.0.0.1:7101/TLServices-Project1-context-root/persistence/v1.0/out/entity/Departments/30

(out - name of my persistence unit. Departments - name of my entity) 

A complete list of the REST URL syntax is available here, as part of the TopLink documentation on this feature:

http://docs.oracle.com/middleware/1213/toplink/solutions/restful_jpa.htm#CHDEGJIG

Check out how easy the process is in this video (using MySQL database):

Here are some additional URL samples for getting other types of queries:

Get all the Employees -  http://127.0.0.1:7101/TLServices/persistence/v1.0/out/query/Employees.findAll

Get all the Employees in department 50 - http://127.0.0.1:7101/TLServices/persistence/v1.0/out/entity/Departments/50/employeesList

Executing a specific named query (@NamedQuery(name = "Employees.findByName", query = "select o from Employees o where o.first_name like :name order by o.last_name")) - http://127.0.0.1:7101/TLServices/persistence/v1.0/out/query/Employees.findByName;name=John

Categories: Development

Best of OTN - Week of July 27th

OTN TechBlog - Fri, 2014-08-01 13:13
Systems Community - Rick Ramsey, OTN Systems Community Manager -

Tech Article -  Playing with ZFS Snapshots, by ACE Alexandre Borges -
Alexandre creates a ZFS pool, loads it with files, takes a snapshot, verifies that the snapshot worked, removes files from the pool, and finally reverts back to the snapshot. Then he shows you how to work with snapshot streams. Great way to do backups.

From OTN Garage FB - Recently a DBA at an IOUG event complained to Tales from the Data Center that they were unable to install from the Solaris 11.2 ISO. They had seen OpenStack a few weeks ago and wanted to know how to install Solaris 11.2 in a VM. So guys… here is a step-by-step for you - Tales from the Data Center.

Java Community - Tori Wieldt, OTN Java Community Manager

Tech Article: Learning Java Programming with BlueJ IDE https://blogs.oracle.com/java/entry/tech_article_learning_java_programming

The Java Source Blog - The Java Hub at JavaOne! Come see the Oracle Technology Network team and check out cool demos, interviews, etc.

Friday Funny : "An int and an int sometimes love each other very much and decide to make a long." @asz #jvmls Thanks @stuartmarks !

Database Community - Laura Ramsey, OTN Database Community Manager

OTN DBA/DEV Watercooler Blog - Oracle Database 12c Release 12.1.0.2 is Here! ...with the long-awaited In-Memory option, plus 21 new features. Oracle Database 12c Release 12.1.0.2 supports Linux and Oracle Solaris (SPARC and x86 64 bit). Read More!

Architect Community - Bob Rhubart, OTN Architect Community Manager
Top 3 Playlists on the OTN ArchBeat YouTube Channel

Common Roles get copied upon plug-in with #Oracle Multitenant

The Oracle Instructor - Fri, 2014-08-01 08:51

What happens when you unplug a pluggable database that has local users who have been granted common roles? They get copied upon plug-in of the PDB to the target container database!

[Figure: Before Unplug of the PDB]

The picture above shows the situation before the unplug command. It has been implemented with these commands:

 

SQL> connect / as sysdba
Connected.
SQL> create role c##role container=all;

Role created.

SQL> grant select any table to c##role container=all;

Grant succeeded.

SQL> connect sys/oracle_4U@pdb1 as sysdba
Connected.
SQL> grant c##role to app;

Grant succeeded.



SQL> grant create session to app;

Grant succeeded.

The local user app has now been granted the common role c##role. Let’s assume that the application depends on the privileges inside the common role. Now pdb1 is unplugged and plugged into cdb2:

SQL> shutdown immediate
Pluggable Database closed.
SQL> connect / as sysdba
Connected.
SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1.xml';

Pluggable database altered.

SQL> drop pluggable database pdb1;

Pluggable database dropped.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@EDE5R2P0 ~]$ . oraenv
ORACLE_SID = [cdb1] ? cdb2
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 is /u01/app/oracle
[oracle@EDE5R2P0 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jul 29 12:52:19 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create pluggable database pdb1 using '/home/oracle/pdb1.xml' nocopy;

Pluggable database created.

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL> connect app/app@pdb1
Connected.
SQL> select * from scott.dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

SQL> select * from session_privs;

PRIVILEGE
----------------------------------------
CREATE SESSION
SELECT ANY TABLE

SQL> connect / as sysdba
Connected.

SQL> select role,common from cdb_roles where role='C##ROLE';

ROLE
--------------------------------------------------------------------------------
COM
---
C##ROLE
YES

As seen above, the common role has been copied upon the plug-in, as the picture below illustrates:

[Figure: After plug-in of the PDB]

Not surprisingly, the local user app together with the local privilege CREATE SESSION was moved to the target container database. But it is not so obvious that the common role is then also copied to the target CDB. This is something I found out during the delivery of a recent Oracle University LVC about 12c New Features, thanks to a question from one attendee. My guess was that it would lead to an error upon unplug, but this test case proves it doesn’t. I thought that behavior might be of interest to the Oracle Community. As always: Don’t believe it, test it! :-)


Tagged: 12c New Features, Multitenant
Categories: DBA Blogs

Log Buffer #382, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-01 07:41

Leading the way are the blogs which act as beacons of information, guiding the way towards new vistas of innovation. This Log Buffer edition appreciates that role and presents you with a few of those blogs.

Oracle:

Is there any recommended duration after which Exalytics Server should be rebooted for optimal performance of Server?

GlassFish On the Cloud Consulting Services by C2B2

This introduction to SOA Governance series contains two videos. The first one explains SOA Governance and why we need it by using a case study. The second video introduces Oracle Enterprise Repository (OER) and how it can help with SOA Governance.

Oracle BI APPs provide two data warehouse generated fiscal calendars OOTB.

If you’re a community manager who’s publishing, monitoring, engaging, and analyzing communities on multiple social networks manually and individually, you need a hug.

SQL Server:

Spackle: Making sure you can connect to the DAC

Test-Driven Development (TDD) has a misleading name, because the objective is to design and specify that the system you are developing behaves in the ways that the customer expects, and to prove that it does so for the lifetime of the system.

Set a security standard across environments that developers can see and run, but not change.

Resilient T-SQL code is code that is designed to last, and to be safely reused by others. The goal of defensive database programming, the goal of this book, is to help you to produce resilient T-SQL code that robustly and gracefully handles cases of unintended use, and is resilient to common changes to the database environment.

One option to get notified when TempDB grows is to create a SQL Alert to fire a SQL Agent Job that will automatically send an email alerting the DBA when the Tempdb reaches a specific file size.

MySQL:

By default when using MySQL’s standard replication, all events are logged in the binary log and those binary log events are replicated to all slaves (it’s possible to filter out some schema).

Testing MySQL repository packages: how we make sure they work for you

If your project does not have something that you can adapt that quote to, odds are your testing is inadequate.

Compare and Synchronize with Updated Comparison Tools!

Beyond the FRM: ideas for a native MySQL Data Dictionary.

Categories: DBA Blogs

How to Load Informix DB2 Using SSIS

Chris Foot - Fri, 2014-08-01 04:30

Can Microsoft SQL Server and Informix DB2 environments integrate together? The answer is YES!! I have received an increasing number of questions about cross-platform ETL development work between the two. Driven by these questions, I want to dig deeper into manipulating data between Microsoft SQL Server and Informix DB2.

Recently, I was asked to load data into Informix DB2 using SSIS, which is the focus of this post. When I was tasked with this request, I did some research and started to develop a solution. However, I ran into some common issues with writing via Informix ODBC from SSIS that had no answers out on the internet. Unfortunately, to this day, I have not seen an actual step-by-step blog post about this topic based on my own personal searches. With that being said, I decided to blog about it myself.

Let’s start with the basic information first. What do you need to successfully use Informix with your SQL Server environment?

You should know, at minimum, the following:

  • What versions of the driver you have
  • What version of SQL Server is installed on your server
  • What the version of your operating system is

The version of the driver can cause unforeseen issues when trying to load into Informix via SSIS. Check how your ODBC driver is registered. You can do this by simply checking both the 32-bit and 64-bit ODBC Data Source Administrators. Here are the commands for 32 bit and 64 bit, respectively:

32 Bit: C:\Windows\SysWOW64\odbcad32.exe

64 Bit: C:\Windows\system32\odbcad32.exe

As you can see, I do have both registered in my current environment:

32 Bit

[Screenshot: 32-bit ODBC Data Source Administrator]

64 Bit

[Screenshot: 64-bit ODBC Data Source Administrator]

This is a common issue I have seen between the two. Whether you have SQL Server 32-bit or 64-bit, BIDS is a 32-bit application, and the runtime of BIDS needs to be set to reflect this. This is done in the solution properties.

[Screenshot]

In the properties, you click the debugging option and set Run64BitRunTime from True to False.

[Screenshot]

Now, you are ready to set up your connections and build your package. In your connection manager, select where your source data is coming from. For my example, it’s going to be SQL Server, so I need an OLE DB connection. The destination I will use is an ADO.NET connection manager.

[Screenshot]

Here is the little piece that took a while to figure out: your connection string within your ADO.NET connection manager needs to include “delimident=y” as an argument.

[Screenshot]

Now, my connection string reads as follows:

Dsn=INFORMIX ODBC;Driver={ifxoledbc};delimident=y

Notice that I do not have my UID or password passed in through the connection string, because they are already stored on my server from when I set them up in the ODBC Data Source Administrator.

From here, I am going to simply set up my Dataflow with a source and destination using the connection managers that I have created and map all of my columns.

[Screenshot]

That’s it! Now, all you have to do is run it and test it.

[Screenshot]

I have just written 27 records to Informix DB2 via SSIS using the Informix ODBC driver provided by IBM! Extracting, Transforming, and Loading data (ETL) sometimes requires outside drivers and connection managers, which requires us to learn new things, and we are learning new things every day in the development world. I hope that you found my blog informative and that it helps reduce the searching for others writing to Informix via SSIS. Stay tuned for my next blog post in the next few weeks.

The post How to Load Informix DB2 Using SSIS appeared first on Remote DBA Experts.

JSON Parsing is Cake with WebLogic Server 12.1.3

Steve Button - Thu, 2014-07-31 18:46
Another feature of WebLogic Server 12.1.3 that developers will find really useful is the inclusion of an implementation of JSR-353 Java API for JSON Processing.

See Chapter 10 Java API for JSON Processing in the Developing Applications for Oracle WebLogic Server book @ http://docs.oracle.com/middleware/1213/wls/WLPRG/java-api-for-json-proc.htm#WLPRG1055

The original JSR submission for this API provides a good description of what it sets out to do.

JSR 353: Java™ API for JSON Processing
 
This new API, working from the foundations provided by earlier implementations such as Jackson, Jettison and Google JSon, provides a standard API for working with JSON from Java. The goals and objectives of the API are described in the specification request as:
JSON (JavaScript Object Notation) is a lightweight data-interchange format.

Many popular web services use JSON format for invoking and returning the data.

Currently Java applications use different implementation libraries to produce/consume JSON from the web services. Hence, there is a need to standardize a Java API for JSON so that applications that use JSON need not bundle the implementation libraries but use the API. Applications will be smaller in size and portable.

The goal of this specification is to develop such APIs to:
  • Produce and consume JSON text in a streaming fashion (similar to StAX API for XML)
  • Build a Java object model for JSON text using API classes (similar to DOM API for XML)

WebLogic Server 12.1.3 includes a module which contains the API/implementation of this relatively lightweight but important API, enabling developers and applications to more easily work with JSON in a portable, standard manner.

 Unlike JAX-RS 2.0 and JPA 2, both of which have pre-existing specification versions that need to be supported by default, there are no additional steps required for applications to use this API with WebLogic Server 12.1.3.  It's simply included as a default module of the server and available for any application to make use of.
The API and implementation are located in this jar file in a WebLogic Server 12.1.3 installation:

$ORACLE_HOME/wlserver/modules/javax.json_1.0.0.0_1-0.jar

In my previous post, Using the JAX-RS 2.0 Client API with WebLogic Server 12.1.3, I have a short example of using the API to parse a JAX-RS-supplied InputStream to marshal a JSON payload into a Java object.

        
...
GeoIp g = new GeoIp();
JsonParser parser = Json.createParser(entityStream);
while (parser.hasNext()) {
    switch (parser.next()) {
        case KEY_NAME:
            String key = parser.getString();
            parser.next();
            switch (key) {
                case "ip":
                    g.setIpAddress(parser.getString());
                    break;
                case "country_name":
                    g.setCountryName(parser.getString());
                    break;
                case "latitude":
                    g.setLatitude(parser.getString());
                    break;
                case "longitude":
                    g.setLongitude(parser.getString());
                    break;
                case "region_name":
                    g.setRegionName(parser.getString());
                    break;
                case "city":
                    g.setCity(parser.getString());
                    break;
                case "zipcode":
                    g.setZipCode(parser.getString());
                    break;
                default:
                    break;
            }
            break;
        default:
            break;
    }
}
...
 
The Java EE 7 tutorial has a section showing how to use the new javax.json API, which is well worth a look if working with JSON is your thing.

http://docs.oracle.com/javaee/7/tutorial/doc/jsonp.htm

Arun Gupta also has a good hands-on lab under development for Java EE 7 that uses the JSON API to read and write JSON into Java objects that represent a movie database. His examples collaborate with JAX-RS to issue both GET and POST calls to read and update data using JSON payloads.

https://github.com/javaee-samples/javaee7-samples



Parallel Execution Skew - Summary

Randolf Geist - Thu, 2014-07-31 16:37
I've published the final part of my video tutorial and the final part of my mini series "Parallel Execution Skew" at AllThingsOracle.com concluding what I planned to publish on the topic of Parallel Execution Skew.

Anyone regularly using Parallel Execution and/or relying on Parallel Execution for important, time-critical processing should know this stuff. In my experience, however, almost no one does, and therefore misses possibly huge opportunities for optimizing Parallel Execution performance.

Since all this was published over a longer period of time, this post is a summary with pointers to the material.

If you want to get an idea what the material is about, the following video summarizes the content:

Parallel Execution Skew in less than four minutes

Video Tutorial "Analysing Parallel Execution Skew":

Part 1: Introduction
Part 2: DFOs and DFO Trees
Part 3: Without Diagnostics / Tuning Pack license
Part 4: Using Diagnostics / Tuning Pack license

"Parallel Execution Skew" series at AllThingsOracle.com:

Part 1: Introduction
Part 2: Demonstrating Skew
Part 3: 12c Hybrid Hash Distribution With Skew Detection
Part 4: Addressing Skew Using Manual Rewrites
Part 5: Skew Caused By Outer Joins

Oracle Database 12c Release 1 (12.1.0.2) Generally Available

Oracle Database Server 12.1.0.2.0 is now generally available on the Oracle Software Delivery Website, edelivery.oracle.com, and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

July Security Alert

Paul Wright - Thu, 2014-07-31 15:25
Hi Oracle Security Folks, The July Oracle Security Alert is out. My part is smaller than last quarter, being just an In-Depth Credit, but Mr David Litchfield makes a triumphal return with some excellent new research. http://www.oracle.com/technetwork/topics/security/cpujul2014-1972956.html There is a CVSS 9 and a remote unauthenticated issue in this patch, so it is worth installing this one. [...]

New Oracle Technology Network PHP Forum URL

Christopher Jones - Thu, 2014-07-31 12:29
The Oracle Technology Network (which promotes the development community) is upgrading its software platform and reorganizing some content. The PHP Oracle forum is now at https://community.oracle.com/community/development_tools/php. The top level "PHP Developer Center" is at http://www.oracle.com/technetwork/database/database-technologies/php/whatsnew/index.html. I notice my old bookmarks for the Developer Center redirect to its current location, but this doesn't seem true of some very old URLs for the forum.

Correction: PeopleSoft Interaction Hub Support Plans

PeopleSoft Technology Blog - Thu, 2014-07-31 12:25
In a recent post, we said that Extended Support for the PeopleSoft Interaction Hub was ending in October 2014.  To be clearer, Extended Support for release 9.0 is ending on that date.  Extended Support for release 9.1 and its revisions will be available until at least October 2018.  Sustaining support will be available for those releases beyond the Extended Support dates.  Look for the release of Revision 3 of the Interaction Hub soon.

Oracle Priority Service Infogram for 31-JUL-2014

Oracle Infogram - Thu, 2014-07-31 10:34

Oracle and NFS
From Martin’s Blog: Setting up Direct NFS on Oracle 12c.
Testing
From flashdba: New section: Oracle SLOB Testing. And no, it’s not about finding people with soup stains on their shirts and giving them multiple-choice exams with essays.
GoldenGate
GoldenGate Director Security, from Oracle DBA - Tips and Techniques.
VM
From AMIS Technology Blog: Fastest way to a Virtual Machine with JDeveloper 12.1.3 and Oracle Database XE 11gR2 – on Ubuntu Linux 64 bit.
Fusion
Discovering Fusion Applications in Oracle Enterprise Manager 12c, from the Oracle A-Team Chronicles.
SOA
Purging Data From the BPEL Store, from DZone.
Visualization
From the Oracle Data Visualizations Blog: A Guide to Diagram – Part 8 – Diagram Container Groups.
Java
org.openide.util.ContextGlobalProvider, from Geertjan’s Blog.
A closer look at Oracle IDM Auditing, from Java Code Geeks.
Big Data
From CIO: Oracle hopes to make SQL a lingua franca for big data.
Good Housekeeping
OraManageability brings us this article: Keeping a Tidy Software Library – Saved Patches
SPARC
From ZDNet: Oracle prepares to unveil next-gen SPARC 7 processor.
EBS
From the Oracle E-Business Suite Support Blog:
Oracle Service Contracts – How to Drive Contract Coverage by Severity
Let's Talk About Reclassifications in Fixed Assets
Just Released! July 2014 Procurement Rollup Patch 18911810
How to Customize the Field Service Debrief Report
From Oracle E-Business Suite Technology
Latest Updates to AD and TXK Tools for EBS 12.2
E-Business Suite Plug-in 12.1.0.3 for Enterprise Manager 12c Now Available
Shameless Boasting

Is your company’s HQ so cool people come film movies there? Ours is! ‘Terminator: Genesis’ Filming at Oracle Headquarters, from CBS.

MySQL 5.6.20-4 and Oracle Linux DTrace

Wim Coekaerts - Thu, 2014-07-31 09:57
The MySQL team just released MySQL 5.6.20. One of the cool new things for Oracle Linux users is the addition of MySQL DTrace probes. When you use Oracle Linux 6 or 7 with UEKr3 (3.8.x) and the latest DTrace utils/tools, you can make use of this. MySQL 5.6 is available for install through ULN or from public-yum. You can just install it using yum.

# yum install mysql-community-server

Then install dtrace utils from ULN.

# yum install dtrace-utils

As root, enable DTrace and allow normal users to record trace information:

# modprobe fasttrap
# chmod 666 /dev/dtrace/helper

Start MySQL server.

# /etc/init.d/mysqld start

Now you can try out various dtrace scripts. You can find the reference manual for MySQL DTrace support here.

Example 1

Save the script below as query.d.

#!/usr/sbin/dtrace -qws
#pragma D option strsize=1024


mysql*:::query-start /* using the mysql provider */
{

  self->query = copyinstr(arg0); /* Get the query */
  self->connid = arg1; /*  Get the connection ID */
  self->db = copyinstr(arg2); /* Get the DB name */
  self->who   = strjoin(copyinstr(arg3),strjoin("@",
     copyinstr(arg4))); /* Get the username */

  printf("%Y\t %20s\t  Connection ID: %d \t Database: %s \t Query: %s\n", 
     walltimestamp, self->who ,self->connid, self->db, self->query);

}

Run it; then, in another terminal, connect to the MySQL server and run a few queries.

# dtrace -s query.d 
dtrace: script 'query.d' matched 22 probes
CPU     ID                    FUNCTION:NAME
  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:21 root@localhost	  Connection ID: 5 	 Database:  	 
    Query: select @@version_comment limit 1

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:28 root@localhost	  Connection ID: 5 	 Database:  	 
    Query: SELECT DATABASE()

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:28 root@localhost	  Connection ID: 5 	 Database: database 	 
    Query: show databases

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:28 root@localhost	  Connection ID: 5 	 Database: database 	 
    Query: show tables

  0   4133 _Z16dispatch_command19enum_server_commandP3THDPcj:query-start 2014 
    Jul 29 12:32:31 root@localhost	  Connection ID: 5 	 Database: database 	 
    Query: select * from foo

Example 2

Save the script below as statement.d.

#!/usr/sbin/dtrace -s

#pragma D option quiet

dtrace:::BEGIN
{
   printf("%-60s %-8s %-8s %-8s\n", "Query", "RowsU", "RowsM", "Dur (ms)");
}

mysql*:::update-start, mysql*:::insert-start,
mysql*:::delete-start, mysql*:::multi-delete-start,
mysql*:::multi-delete-done, mysql*:::select-start,
mysql*:::insert-select-start, mysql*:::multi-update-start
{
    self->query = copyinstr(arg0);
    self->querystart = timestamp;
}

mysql*:::insert-done, mysql*:::select-done,
mysql*:::delete-done, mysql*:::multi-delete-done, mysql*:::insert-select-done
/ self->querystart /
{
    this->elapsed = ((timestamp - self->querystart)/1000000);
    printf("%-60s %-8d %-8d %d\n",
           self->query,
           0,
           arg1,
           this->elapsed);
    self->querystart = 0;
}

mysql*:::update-done, mysql*:::multi-update-done
/ self->querystart /
{
    this->elapsed = ((timestamp - self->querystart)/1000000);
    printf("%-60s %-8d %-8d %d\n",
           self->query,
           arg1,
           arg2,
           this->elapsed);
    self->querystart = 0;
}

Run it and do a few queries.

# dtrace -s statement.d 
Query                                                        RowsU    RowsM    Dur (ms)
select @@version_comment limit 1                             0        1        0
SELECT DATABASE()                                            0        1        0
show databases                                               0        6        0
show tables                                                  0        2        0
select * from foo                                            0        1        0

A look at how RDX’s Additional Services can meet your needs: Series Kick-off [VIDEO]

Chris Foot - Thu, 2014-07-31 09:08

Transcript

Today we’re kicking off a series about our additional offerings, because we think it’s important for your organization to leverage RDX’s full suite of data infrastructure services to improve its ability to turn raw information into actionable business knowledge.

From our Business Intelligence services – designed to get you the right information about your company to make savvy strategic decisions – to our application hosting, database security and non-database server monitoring, GoldenGate replication services, and support for Windows, MySQL and Oracle EBS, we’ve got every administration need you can think of covered.

We’ll take an in-depth look at each of these services in videos to come, so you can learn how they can benefit your business and choose the services that may be the most important to you.

For more information on our additional services, follow the link below for our Additional Services Whitepaper.

Tune in next time as we discuss the importance of Business Intelligence for your business!
 

The post A look at how RDX’s Additional Services can meet your needs: Series Kick-off [VIDEO] appeared first on Remote DBA Experts.

SQL Server and OS Error 1117, Error 9001, Error 823

Pythian Group - Thu, 2014-07-31 08:32

The life of us DBAs is no different from that of other administrators: it is full of adventure. At times, we encounter an issue which is very new for us, one that we have not faced in the past. Today, I will be writing about one such case. Not so long back, at the beginning of June, while I was having my morning tea, I got a page from a customer we normally do not receive pages from. While analyzing the error logs, I noticed several lines of error like the ones below:

2014-06-07 21:03:40.57 spid6s Error: 17053, Severity: 16, State: 1.
LogWriter: Operating system error 21(The device is not ready.) encountered.
2014-06-07 21:03:40.57 spid6s Write error during log flush.
2014-06-07 21:03:40.57 spid67 Error: 9001, Severity: 21, State: 4.
The log for database 'SSCDB' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
2014-06-07 21:03:40.58 spid67 Database SSCDB was shutdown due to error 9001 in routine 'XdesRMFull::Commit'. Restart for non-snapshot databases will be attempted after all connections to the database are aborted.
2014-06-07 21:03:40.65 spid25s Error: 17053, Severity: 16, State: 1.
fcb::close-flush: Operating system error (null) encountered.
2014-06-07 21:03:40.65 spid25s Error: 17053, Severity: 16, State: 1.
fcb::close-flush: Operating system error (null) encountered.

I had never seen this kind of error in the past, so my next step was to check Google, which returned too many results. There were two sites that were worthwhile: the first, a Microsoft KB article, covers OS Error 1117, whereas the second, by Erin Stellato (B | T), talks about other errors like Error 823 and Error 9001. Further, I checked the server details and found that this is exactly the issue here: the server is using a PVSCSI (Para-Virtualized SCSI) controller rather than LSI on the VMware host.

Resolving the issue

I had a call with the client and got his consent to restart the service. This was quick, and after it came back, I ran CHECKDB. “We are good!” I thought.

But wait. This was the temporary fix. Yes, you read that correctly. This was the temporary fix; the issue actually lies with VMware, and it’s a known issue according to a VMware KB article. To fix this issue, we’ll have to upgrade to vSphere 5.1, according to the VMware KB article.

Please be advised that the first thing I did here was to apply the temporary fix; the root cause analysis I did last, after the server was up and running fine.

photo credit: Andreas.  via photopin CC

Categories: DBA Blogs

Help Please! The UKOUG Partner of the Year awards

Duncan Davies - Thu, 2014-07-31 07:48

We’d really appreciate your help. But first, a bit of background:

The Partner of the Year awards are an annual awards ceremony held by the UK Oracle User Group. They allow customers to show appreciation for partners that have provided a service to them over the previous 12 months. As you would imagine, being voted a winner (for the categories that you operate in) is a wonderful accolade, as it’s the end-users that have spoken.

Cedar Consulting has a long history of success in the competition, reflecting our long-standing relationships with our clients. I wasn’t going to ask for votes this year; however, I notice that many of our competitors are filling Twitter and LinkedIn with pleas, so I feel that I should also ask for your vote.

If you’re an existing Cedar client site, we’d love your vote. Also, if you are a recipient of any other Cedar service – and I guess here I’m talking about the free PeopleSoft and Fusion Weekly newsletters that we send out – we’d be very grateful if you gave 3 minutes of your time to vote for us.

What we’d like you to do:

1) Go to: http://pya.ukoug.org/index.php

2) Fill in your company name, first name and surname. Then click Next.

3) Enter your email address in both fields, then click Next.

4) Select any checkboxes if you want ‘follow-up communications’ from the UKOUG, or leave all blank, and click Next.

5) Select Cedar Consulting from the drop-down, and click Next.

6) On the PeopleSoft page, select the Gold radio button on the Cedar Consulting row (note, it’s the 3rd column!), then click Next.

7) Repeat by selecting the Gold radio button on the Cedar Consulting row of the Fusion page, then click Next.

8) Click Submit.

And you’re done. Thank you very much. If you want some gratitude for your 3 minutes of effort drop me an email and I’ll thank you personally!