Feed aggregator

SQL Injections

Tom Kyte - Tue, 2016-07-19 13:46
Hi Tom, 1)What is the sql injection,And how to solve sql injection problems with an example program tom?
Categories: DBA Blogs

Tom Kyte's books

Tom Kyte - Tue, 2016-07-19 13:46
Hi Tom, I have some of your fine Oracle books. Effective Oracle by design and Expert Oracle Database Architecture (latest version). I prefer reading on an ipad rather than carrying them around with me. Is there a PDF version of these books? Do ...
Categories: DBA Blogs

Converting Row to Column and Vice-versa

Tom Kyte - Tue, 2016-07-19 13:46
Hi Tom, I have a table Assigned_Task. ENAME JANUARY FEBRUARY MARCH APRIL ------ -------- -------- ------ ------ RASHMI 20 25 30 05 SOUMYA 11 21 09 15 BISWA 17 33 19 00 I want the output as below format: M...
Categories: DBA Blogs

Unique value in Oracle Collection variable

Tom Kyte - Tue, 2016-07-19 13:46
Hi team, I have one doubt on oracle collection while implementing on Scenario in my assignment. We need to have collection variable column (vARRAYS) inside a Table , Sample table structure looks like below id name list 1 ABC varra...
Categories: DBA Blogs

Alternatives for to_char(dt, 'DAY')

Tom Kyte - Tue, 2016-07-19 13:46
--->A simple SQL issue that I faced few days back. We normally perform some executions on weekends(SATURDAY or SUNDAY) like some sort of cleaning activity,stats gathering,partition creation etc. Please take caution If your are planning to achieve ...
Categories: DBA Blogs

VirtualBox 5.0.26

Tim Hall - Tue, 2016-07-19 07:28

If you’ve not taken the plunge and upgraded to VirtualBox 5.1, you might be interested to know that VirtualBox 5.0.26 has been released.

Downloads and changelog are now listed under the old builds.

In case you are wondering, I’ve been running VirtualBox 5.1 on Mac, Windows 7 and Oracle Linux 6 hosts since its release. As yet, no problems.




System Metrics Collectors

Rittman Mead Consulting - Tue, 2016-07-19 02:18
System Metrics Collectors

The need to monitor and control system performance is not new. What is new is the trend in the market of clever, lightweight, easy-to-set-up, open source metric collectors, along with timeseries databases to store these metrics, and user-friendly front ends through which to display and analyse the data.

In this post I will compare Telegraf, Collectl and Topbeat as lightweight metric collectors. All of them do a great job of collecting a variety of useful system and application statistics with minimal overhead on the servers. Each has the strengths of easy configuration and accessible documentation, but there are still some differences in the range of inputs and outputs: how they extract the data, what metrics they collect, and where they store them.

  • Telegraf is part of the Influx TICK stack, and works with a vast variety of useful input plugins such as Elasticsearch, nginx, AWS and so on. It also supports a variety of outputs, obviously InfluxDB being the primary one. (Find out more...)
  • Topbeat is a new tool from Elastic, the company behind Elasticsearch, Logstash, and Kibana. The Beats platform is evolving rapidly, and includes topbeat, winlogbeat, and packetbeat. In terms of metric collection its support for detailed metrics such as disk IO is relatively limited currently. (Find out more...)
  • Collectl is a long-standing favourite of systems performance analysts, providing a rich source of data. This depth of complexity comes at a bit of a cost when it comes to the documentation’s accessibility, it being aimed firmly at a systems programmer! (Find out more...)

In this post I have used InfluxDB as the backend for storing the data, and Grafana as the front end visualisation tool. I will explain more about both tools later in this post.

In the screenshot below I have used Grafana dashboards to show "Used CPU", "Used Memory" and "Network Traffic" stats from the mentioned collectors. As you can see, the output of all three is almost the same. What makes them different is:

    • What can your infrastructure support? For example, you cannot install Telegraf on an old version of OS X Server.
    • What input plugins do you need? The current version of Topbeat doesn't support more detailed metrics such as disk IO and network stats.
    • What storage do you want/need to use for the outputs? InfluxDB works as the best match for Telegraf data, whilst Beats pairs naturally with Elasticsearch.
    • What is your visualisation tool, and what does it work best with? In all cases the front end should natively support time series visualisations.

System Metrics Collectors

Next I am going to provide more details on how to download and install each of the mentioned metrics collectors; the example commands are written for a Linux system.

Telegraf "An open source agent written in Go for collecting metrics and data on the system it's running on or from other services. Telegraf writes data it collects to InfluxDB in the correct format."
  1. Download and install InfluxDB: sudo yum install -y https://s3.amazonaws.com/influxdb/influxdb-0.10.0-1.x86_64.rpm
  2. Start the InfluxDB service: sudo service influxdb start
  3. Download Telegraf: wget http://get.influxdb.org/telegraf/telegraf-0.12.0-1.x86_64.rpm
  4. Install Telegraf: sudo yum localinstall telegraf-0.12.0-1.x86_64.rpm
  5. Start the Telegraf service: sudo service telegraf start
  6. Done!

The default configuration file for Telegraf sits in /etc/telegraf/telegraf.conf, or a new config file can be generated in the location of your choice using the -sample-config flag: telegraf -sample-config > telegraf.conf. Update the config file to enable/disable/set up different input or output plugins, e.g. I enabled network inputs: [[inputs.net]]. Finally, to test the config file and verify the output metrics, run: telegraf -config telegraf.conf -test
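For reference, a stripped-down telegraf.conf following the steps above might look like the sketch below. The section names follow Telegraf's TOML plugin conventions; treat this as an assumption-laden sketch rather than a drop-in file, since exact plugin names vary between Telegraf versions.

```toml
# Minimal telegraf.conf sketch (plugin section names may differ per version)
[agent]
  interval = "10s"

# Output: the local InfluxDB instance installed earlier
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"

# Inputs: basic system stats, plus the network input enabled in the text
[[inputs.cpu]]
[[inputs.mem]]
[[inputs.net]]
```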

Once everything is ready and started, a new database called 'telegraf' will be added to the InfluxDB storage, which you can connect to and query. You will read more about InfluxDB later in this post.


Collectl Unlike most monitoring tools that either focus on a small set of statistics, format their output in only one way, run either interactively or as a daemon but not both, collectl tries to do it all. You can choose to monitor any of a broad set of subsystems which currently include buddyinfo, cpu, disk, inodes, infiniband, lustre, memory, network, nfs, processes, quadrics, slabs, sockets and tcp.
  • Install collectl: sudo yum install collectl
  • Update the Collectl config file at /etc/collectl.conf to turn on/off different switches and also to write the Collectl's output logs to a database, i.e. InfluxDB
  • Restart Collectl service  sudo service collectl restart
  • Collectl will write its log in a new InfluxDB database called “graphite”.
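To illustrate the InfluxDB hand-off, Collectl can send its metrics to InfluxDB's graphite line-protocol listener. The flags below are an assumption based on collectl's export mechanism and the default graphite port, not taken from the post:

```text
# /etc/collectl.conf sketch (assumes InfluxDB's graphite plugin listens on port 2003)
# -scdmn = collect the cpu, disk, memory and network subsystems
DaemonCommands = -f /var/log/collectl -P -m -scdmn --export graphite,localhost:2003
```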


Topbeat Topbeat is a lightweight way to gather CPU, memory, and other per-process and system wide data, then ship it to (by default) Elasticsearch to analyze the results.
  • Download Topbeat: wget https://download.elastic.co/beats/topbeat/topbeat-1.2.1-x86_64.rpm
  • Install: sudo yum localinstall topbeat-1.2.1-x86_64.rpm
  • Edit the topbeat.yml configuration file at /etc/topbeat and set the output to elasticsearch or logstash.
  • If choosing elasticsearch as the output, you need to load the index template, which lets Elasticsearch know which fields should be analyzed in which way. The recommended template file is installed by the Topbeat packages. You can either configure Topbeat to load the template automatically, or run a shell command to load it: curl -XPUT 'http://localhost:9200/_template/topbeat' -d@/etc/topbeat/topbeat.template.json
  • Run topbeat: sudo /etc/init.d/topbeat start
  • To test your Topbeat Installation try: curl -XGET 'http://localhost:9200/topbeat-*/_search?pretty'
  • Topbeat logs are written to /var/log
  • Reference to output fields 


Why write another metrics collector?

From everything that I have covered above, it is obvious that there is no shortage of open source agents for collecting metrics. Still, you may come across a situation where none of the options can be used, e.g. a specific operating system (in this case, Mac OS X on an Xserve) that none of the collectors above support. The code below is my version of a lightweight metric collector that keeps track of disk IO, network, CPU and memory stats on the host where the simple bash script is run.

The code runs through an indefinite loop until it is force-quit. Within the loop, first I use a curl request (InfluxDB API Reference) to create a database called OSStat; if the database already exists, nothing happens. Then I use a variety of built-in OS tools to extract the data I need. In my example, sar -u for CPU, sar -n for network, vm_stat for memory and iotop for disk IO return the values I need; with a quick search you will find many more options. I also use a combination of awk, sed and grep to transform the values from these tools into a structure that is easier to use on the front end. Finally, I push the results to InfluxDB using curl requests.

export INFLUX_SERVER=$1
while [ 1 -eq 1 ]; do

#######CREATE DATABASE ########
curl -G http://$INFLUX_SERVER:8086/query -s --data-urlencode "q=CREATE DATABASE OSStat" > /dev/null

####### CPU  #########
sar 1 1 -u | tail -n 1 | awk -v MYHOST=$(hostname) '{ print "cpu,host="MYHOST" %usr="$2",%nice="$3",%sys="$4",%idle="$5}' | curl -i -XPOST "http://${INFLUX_SERVER}:8086/write?db=OSStat" -s --data-binary @- > /dev/null

####### Memory ##########
FREE_BLOCKS=$(vm_stat | grep free | awk '{ print $3 }' | sed 's/\.//')
INACTIVE_BLOCKS=$(vm_stat | grep inactive | awk '{ print $3 }' | sed 's/\.//')
SPECULATIVE_BLOCKS=$(vm_stat | grep speculative | awk '{ print $3 }' | sed 's/\.//')
WIRED_BLOCKS=$(vm_stat | grep wired | awk '{ print $4 }' | sed 's/\.//')
ACTIVE_BLOCKS=$(vm_stat | grep 'Pages active' | awk '{ print $3 }' | sed 's/\.//')

# vm_stat reports 4096-byte pages; convert the counts to bytes.
# (These derivations were missing from the original listing and are reconstructed.)
FREE=$((($FREE_BLOCKS+$SPECULATIVE_BLOCKS)*4096))
INACTIVE=$(($INACTIVE_BLOCKS*4096))
WIRED=$(($WIRED_BLOCKS*4096))
ACTIVE=$(($ACTIVE_BLOCKS*4096))
TOTALFREE=$(($FREE+$INACTIVE))
TOTAL=$(($WIRED+$ACTIVE))

curl -i -XPOST "http://${INFLUX_SERVER}:8086/write?db=OSStat" -s --data-binary "memory,host=$(hostname) Free=$FREE,Inactive=$INACTIVE,Total-free=$TOTALFREE,Wired=$WIRED,Active=$ACTIVE,total-used=$TOTAL" > /dev/null

####### Disk IO ##########
iotop -t 1 1 -P | head -n 2 | grep 201 | awk -v MYHOST=$(hostname) '{ print "diskio,host="MYHOST" io_time="$6",read_bytes="$8*1024",write_bytes="$11*1024}' | curl -i -XPOST "http://${INFLUX_SERVER}:8086/write?db=OSStat" -s --data-binary @- > /dev/null

###### NETWORK ##########
sar -n DEV 1 | grep -v IFACE | grep -v Average | grep -v -E ^$ | awk -v MYHOST="$(hostname)" '{print "net,host="MYHOST",iface="$2" pktin_s="$3",bytesin_s="$4",pktout_s="$5",bytesout_s="$6}' | curl -i -XPOST "http://${INFLUX_SERVER}:8086/write?db=OSStat" -s --data-binary @- > /dev/null

sleep 10;
done
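To sanity-check the awk transformation step in isolation, you can feed a canned line in the shape of sar -u output through the same awk program and inspect the InfluxDB line-protocol string it produces. The sample values and hostname below are made up for illustration:

```shell
# Canned "Average" line in the shape of `sar 1 1 -u` output
# (hypothetical sample values; on a real host this comes from sar itself)
sample="Average:     8    0    3    88"

# Same awk program as the CPU section of the script
line=$(echo "$sample" | awk -v MYHOST=myhost \
  '{ print "cpu,host="MYHOST" %usr="$2",%nice="$3",%sys="$4",%idle="$5 }')

# InfluxDB line protocol: measurement,tag=value field=value,field=value,...
echo "$line"   # cpu,host=myhost %usr=8,%nice=0,%sys=3,%idle=88
```

This is exactly the payload that curl then POSTs to /write?db=OSStat.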



InfluxDB Storage "InfluxDB is a time series database built from the ground up to handle high write and query loads. It is the second piece of the TICK stack. InfluxDB is meant to be used as a backing store for any use case involving large amounts of timestamped data, including DevOps monitoring, application metrics, IoT sensor data, and real-time analytics."

InfluxDB's SQL-like query language is called InfluxQL. You can connect to and query InfluxDB via curl requests (mentioned above), the command line, or a browser. The following sample InfluxQL statements cover useful basic command line operations to get you started:

influx -- Connect to the database

SHOW DATABASES -- Show existing databases; _internal is the embedded database used for internal metrics

USE telegraf -- Make 'telegraf' the current database

SHOW MEASUREMENTS -- Show all measurements ("tables") within the current database

SHOW FIELD KEYS -- Show the field definitions of each measurement within the current database
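Beyond these metadata commands, actual metric queries use a SQL-ish SELECT syntax. As a hypothetical example against Telegraf data (the measurement and field names are assumptions based on Telegraf's cpu input plugin, not taken from this post):

```sql
-- Average idle CPU per 10-minute bucket over the last hour
SELECT mean(usage_idle) FROM cpu WHERE time > now() - 1h GROUP BY time(10m)
```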

InfluxDB also has a browser-based admin console, accessible by default on port 8083 (the HTTP API used by the curl examples listens on 8086). (Official Reference. Read more on the Rittman Mead blog.)

System Metrics Collectors


Grafana Visualisation "Grafana provides rich visualisation options best for working with time series data for Internet infrastructure and application performance analytics."

It's best to use InfluxDB as the datasource for Grafana, as the Elasticsearch datasource doesn't support all of Grafana's features, e.g. functions behind the panels. Here is a good introduction video to visualisation with Grafana.

System Metrics Collectors

Categories: BI & Warehousing

Links for 2016-07-18 [del.icio.us]

Categories: DBA Blogs

The New Ask Tom

Tom Kyte - Mon, 2016-07-18 19:26
Why do parts of the new Ask Tom appear in another language? e.g. on the list of questions asked previously: Kedd Februar 18, 2003 (H. Feb 17, 2003, 0 days old) Thanks.
Categories: DBA Blogs

pivot data into named columns

Tom Kyte - Mon, 2016-07-18 19:26
Hi, I have a table that looks like the following PHOTOID PHOTO PRODUCT_ID 1 BLOB 123 2 BLOB 123 3 BLOB 123 4 BLOB 123 5 BLOB 123 6 BLOB 124 7 BLOB 124 8 BLOB 125 9 BLOB 126 10 BLOB 126 11 BLOB 126 12 BLOB 126 13 BLOB 126 each PRODUC...
Categories: DBA Blogs

Code enhancement

Tom Kyte - Mon, 2016-07-18 19:26
Hello Chris Saxon, I have a below table structure. CREATE TABLE "PEOPLE" ( "PERSON_ID" NUMBER, "GIVEN_NAME" VARCHAR2(100), "CATEGORY" number(50) ) INSERT INTO people VALUES(1,Connor,1001); / INSERT INTO people VALU...
Categories: DBA Blogs

difference between soft parse and hard parse

Tom Kyte - Mon, 2016-07-18 19:26
Hi Tom Can you explain briefly the difference between soft and hard parse? cheers
Categories: DBA Blogs

Time conversion

Tom Kyte - Mon, 2016-07-18 19:26
Hi Team, Please let me know how we can convert <b>14-JUL-2016 17:11:05 PM</b> into 1<b>7:11:05 or 05:11:05 PM only</b> in Oracle. Regards, Atin
Categories: DBA Blogs

Why it is difficult to scale oracle

Tom Kyte - Mon, 2016-07-18 19:26
Why it is difficult to scale oracle database. Thanks!!
Categories: DBA Blogs

CX Sales—It’s all about Modern Selling at Oracle OpenWorld 2016

Linda Fishman Hoyle - Mon, 2016-07-18 17:53

A Guest Post by Michael Richter, Director of Product Management, Oracle (pictured left) 

Oracle Cloud apps shine on the big stage at Oracle OpenWorld 2016.

And none so brightly as Oracle Sales Cloud.

The CX Sales Track at OpenWorld 2016 is where sales professionals like you meet to learn, network, and experience everything there is to know about modern selling.

You can listen to success stories from customers, meet with partners that have implemented or extended sales applications, and engage with product experts in the exhibition center to see live demonstrations. What you will find in common is that modern selling is transforming the way sales is conducted across industries.

Industry Sessions for CX Leaders—Learn to stay ahead of the competition

How do you differentiate yourself beyond the products and services you sell? Pre-configured industry solutions accelerate the path to faster ROI and leverage Oracle’s extensive industry expertise. That combination lets you focus on your unique business components.

At Oracle OpenWorld, there will be a series of CX industry overview sessions, including High Tech and Manufacturing, Consumer Goods, Financial Services, Communications, Higher Education, Automotive, and Hospitality. There will be scheduled live demonstrations at the CX Sales Theater in the CX Central exhibition area on the 2nd floor at Moscone West.

Some featured sessions for industries include:

  • GEN6944  Learn how to Differentiate Yourself with Industry-Tailored Customer Experiences
  • CON6941  Redefining Automotive Industry Customer Experience for a Sharing, Digital World
  • CON6934  Become a Disruptor—Lead with your Digital Experience (Communications)
  • THT6954  Recruiting the Modern Student with Oracle CX (Higher Ed)

Learn from Customer Success Stories, Product Experts, and Live Demonstrations

There are many ways to interact with partners and Oracle product experts to gain deeper insights on modern selling. You'll find a number of conference sessions and live demonstrations focusing on Sales Cloud, Configure, Price, and Quote (CPQ) Cloud, Engagement Cloud topics, and more.

Oracle product managers will lead roadmap sessions to reveal the latest innovations to Sales Cloud. They’ll also share useful tools to speed migration and ways to streamline and get optimal value from integrations. Learn about the new Engagement Cloud through session presentations and live demonstrations.

We’re announcing Oracle Sales Cloud Release 12 at Oracle OpenWorld

  • GEN6317  CX Sales General Session: A Modern Way to Sell

Sales Cloud session highlights:

  • CON6319  Oracle Sales Cloud Roadmap
  • CON6928  Oracle Cloud Marketplace:  Drive Growth with Innovative Apps
  • CON6929  Oracle Sales Cloud:  Fast Deployments and Fast Results for Midsize Companies
  • CON6933  Getting Value with an Oracle Sales Cloud & Marketing Cloud Integration
  • HOL7492 & HOL7493—Hands on Labs (Introductory session and mobility session)
  • THT6321  Oracle Sales Cloud Sales Force Automation Demonstration
  • THT6940  Oracle Sales Cloud Partner Relationship Management Demonstration
  • THT6939  Oracle Sales Cloud Sales Performance Management Demonstration

CPQ session highlights:

  • CON6320  Oracle Configure, Price and Quote Roadmap
  • CON6937  How Oracle Sales Cloud and CPQ Cloud Work Together for Maximum Results
  • CON7076  What’s your CPQ Maturity?  A CPQ Business Panel Discussion
  • CON7078  Oracle CPQ Cloud and EBS Integration
  • THT7082  Oracle Configure, Price and Quote Demonstration
  • THT7083  Oracle Configure, Price and Quote and Oracle Commerce Cloud Demonstration

Engagement Cloud session highlights:

  • CON6938  Introducing Engagement Cloud—A New Way to Engage your High Value Customers
  • CON7120  Channel Surfing—Empowering Multi-channel Service Agents with Engagement Cloud
  • CON7150  Delivering Knowledge-Driven Self-Service with Oracle’s Engagement Cloud
  • CON6936  Optimizing Engagement to Develop Customer Loyalty (Financial Services)
  • THT7121  Oracle Engagement Cloud Sneak Peak

CX Central Exhibition Center

  •  Visit the vast array of partner kiosks to learn from the experts, whether it's implementation best practices or new software developments to extend your brand and functionality.

The NEW CX Sales Theater

  • Be sure to attend the sessions at the NEW CX Sales Theater in the exhibition area on 2/F at Moscone West. The prefix for these Theater sessions is “THT”, e.g. THT7121. Sales topics of interest include sales force automation, sales performance management, partner relationship management, customer data management, your preferred industry solution, sales analytics or the new Engagement Cloud. It’s all here for the taking.

A Unified and Seamless CX Approach—Oracle CX Cloud Suite

We encourage you to learn first hand about integrations, to become familiar with the tools and best practices to migrate from existing CRM systems to Sales Cloud, and to attend the multitude of valuable cross-experience sessions. These sessions will reveal how every experience with your brand matters and why it’s critical to strive for a unified and seamless CX approach.

Sessions will cover topics such as Industry Solutions, CX Platform, Oracle CX Cloud Suite and integrations, the Oracle CX Cloud Marketplace, social CRM, CX Cloud for Midsize, and Cloud user experience.

Attend a Fireside Chat with Oracle Sales Cloud and CPQ Cloud Product Management Team

  • MTE6318  Hear from Oracle product experts from Sales and CPQ in an informal Fireside Chat format; this is a great way to wrap up the conference on Thursday afternoon.

The San Francisco Experience—Networking and Entertainment

San Francisco is the host city for Oracle OpenWorld. Top attractions include the Golden Gate Bridge, Alcatraz, the famous Cable Cars, California Academy of Sciences, and Fisherman’s Wharf.

Be sure to set aside some time with colleagues or new friends to experience this beautiful city. 

Next Steps

Registration is now open. Visit Oracle OpenWorld 2016 for information on CX Sales sessions and other details!

See what attendees are already saying about Oracle OpenWorld Sales Track and more by joining the conversation on social media: Twitter, #oow16, Facebook, LinkedIn, blog.

See you in San Francisco!

The AppsLab’s Latest Inter-State Adventure: A Site Visit to Micros

Oracle AppsLab - Mon, 2016-07-18 16:17

Probably the best way to get to know your users is to watch them work, in their typical environment. That, and getting to talk to them right after observing them. It’s from that perspective that you can really see what works, what doesn’t, and what people don’t like. And this is exactly what we want to learn about in our quest to improve our users’ experience using Oracle software.

That said, we’ve been eager to get out and do some site visits, particularly for learning more about supply chain management (SCM). For one, SCM is an area most of us on the team haven’t spent too much time working on. But two, at least for me–working mostly in the abstract, or at least the virtual—there’s something fascinating and satisfying about how physical products and materials move throughout the world, starting as one thing and being manufactured or assembled into something else.

We had a contact at Micros, so we started there. Also, they’re an Oracle company, so that made it much easier. You’ve probably encountered Micros products, even if you haven’t noticed them—Micros does point of sales (POS) systems for retail and hospitality, meaning lots of restaurants, stadiums, and hotels.

Micros point-of-sales terminals throughout the years. This is in Micros's corporate office in Columbia, Maryland.


For this particular adventure, we teamed up with the SCM team within OAUX, and went to Hanover, Maryland, where Micros has its warehouse operations, and where all of its orders are put together and shipped out across the world.

We observed and talked to a variety of people there: the pickers, who grab all the pieces for an order; the shippers, who get the orders ready to ship out and load them on the trucks; receiving, who takes in all the new inventory; QA, who have to make sure incoming parts are OK, as well as items that are returned; and cycle counters, who count inventory on a nightly basis. We also spoke to various managers and people involved in the business end of things.

A view inside the Micros warehouse.


In addition to following along and interviewing different employees, the SCM team ran a focus group, and the AppsLab team ran something like a focus group, but which is called a User Journey Map. With this research method, you have users map out their tasks (using sticky notes, a UX researcher’s best friend), while also including associated thoughts and feelings corresponding to each step of each task. We don’t just want to know what users are doing or have to do, but how they feel about it, and the kinds of questions they may have.

In an age where we’re accustomed to pressing a button and having something we want delivered in two days (or less), it’s helpful on a personal level to see how this sort of thing actually happens, and all the people involved in the background. On a professional level, you see how software plays a role in all of it—keeping it all together, but also imposing limits on what can be done and what can be tracked.

This was my first site visit, though I hope there are plenty more in the future. There’s no substitute for this kind of direct observation, where you can also ask questions. You come back tired, but with lots of notes, and lots of new insights.

SQL Server AlwaysOn: new service packs and new diagnostic capabilities

Yann Neuhaus - Mon, 2016-07-18 13:24

As you certainly know, SQL Server 2014 SP2 has been released by Microsoft with some interesting improvements that concern the SQL Server AlwaysOn and availability groups feature. In fact, all of these improvements are also included in SQL Server 2012 SP3 and SQL Server 2016. Among all the fixes and improvements that concern AlwaysOn, I would like to focus on those described in the Microsoft KB3173156 and KB3112363. In this first blog post, I will just cover the improvement to the lease timeout, which is part of the AlwaysOn health model.

Have you already faced a lease timeout issue? If so, you have certainly noticed that it is a good indicator of a system-wide problem, and figuring out the root cause could be a burdensome task because we lacked diagnostic information and had to correlate different performance metrics. Fortunately, the release of the new service packs provides enhancements in this area.

Let’s take an example with a 100% CPU utilization scenario that makes the primary replica unresponsive and unable to respond to the cluster isAlive() routine. This is typically a situation where we may face a lease timeout issue. After simulating this scenario in my lab environment, here is what I found in the SQL Server error log on my primary replica (I have voluntarily filtered it to include only the sample we want to focus on).

blog 101 - AG 2014 SP2 - lease timeout issue

Firstly, we may see different new messages related to lease timeout issues in the interval 12:39:54 – 12:43:22. For example: the WSFC did not receive a process event signal from SQL Server within the lease timeout period, or the lease between the AG and the WSFC has expired. Diagnostic messages have been enhanced to give us a better understanding of the lease issue. But at this point we only know that we are facing a lease timeout; we don’t know the root cause yet. Improvements have also been extended to the cluster log in order to provide more insight into the system's behavior at the moment of the lease timeout issue, as we may see below:

00000644.00000768::2016/07/15-12:40:06.575 ERR   [RCM] rcm::RcmResource::HandleFailure: (TestGrp)

00000644.00000c84::2016/07/15-12:40:06.768 INFO [GEM] Node 2: Sending 1 messages as a batched GEM message

00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] resource TestGrp: failure count: 0, restartAction: 0 persistentState: 1.

00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] numDependents is zero, auto-returning true

00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] Will queue immediate restart (500 milliseconds) of TestGrp after terminate is complete.

00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] Res TestGrp: ProcessingFailure -> WaitingToTerminate( DelayRestartingResource )

00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] TransitionToState(TestGrp) ProcessingFailure–>[WaitingToTerminate to DelayRestartingResource].

00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] Res TestGrp: [WaitingToTerminate to DelayRestartingResource] -> Terminating( DelayRestartingResource )

00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] TransitionToState(TestGrp) [WaitingToTerminate to DelayRestartingResource]–>[Terminating to DelayRestartingResource].

00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] Lease timeout detected, logging perf counter data collected so far

00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] Date/Time, Processor time(%), Available memory(bytes), Avg disk read(secs), Avg disk write(secs)

00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:39:24.0, 8.866394, 912523264.000000, 0.000450, 0.000904

00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:39:34.0, 25.287347, 919531520.000000, 0.001000, 0.000594

00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:39:44.0, 25.360508, 921534464.000000, 0.000000, 0.001408

00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:39:55.0, 81.225454, 921903104.000000, 0.000513, 0.000640

00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:40:5.0, 100.000000, 922415104.000000, 0.002800, 0.002619

00000cc0.00001350::2016/07/15-12:40:12.452 INFO [RES] SQL Server Availability Group: [hadrag] Stopping Health Worker Thread


According to the SQL Server error log time range, we may notice similar messages concerning the detection of the lease timeout, with some additional information that comes from the perfmon counters (the concerned lines are underlined in the sample above). If we reformat the concerned portion into the table below, we get a better picture of our issue:

Date/Time    Processor time (%)   Available memory (bytes)   Avg disk read (secs)   Avg disk write (secs)
10:39:24.0   8.866394             912523264                  0.000450               0.000904
10:39:34.0   25.287347            919531520                  0.001000               0.000594
10:39:44.0   25.360508            921534464                  0.000000               0.001408
10:39:55.0   81.225454            921903104                  0.000513               0.000640
10:40:5.0    100.000000           922415104                  0.002800               0.002619


CPU utilization is what we must focus on here. Getting this valuable information directly in the cluster.log when we troubleshoot a lease timeout issue will help us a lot. To clarify, this doesn’t mean that it was not possible with older versions, but we had to retrieve this information in a more complicated way (by using the AlwaysOn_health extended event, for example).

Next, other improvements concern existing extended events like availability_group_lease_expired and hadr_ag_lease_renewal. The next picture points out new available fields like current_time, new_timeout and state as well.

blog 101 - AG 2014 SP2 - lease time out xe new fields

Let me show you their value with another example. This time, I voluntarily hung the sqlserver.exe process of the primary replica in order to trigger an unresponsive lease scenario. I got interesting output from the extended event trace on both sides.

blog 101 - AG 2014 SP2 - lease time out xe test 2

On the former primary, there are no related records during the period in which the SQL Server process was unresponsive, but we may see a record at 17:19:11. The lease renewal process fails, and we get a better picture of the problem by looking at the corresponding state (LeaseNotValid), followed by the availability_group_lease_expired event. Note that the current_time value (the time at which the lease expired) is greater than the new_timeout value (the timeout time, at which availability_group_lease_expired is raised) here – 3215765 > 3064484 – which confirms that we experienced a timeout issue in this case.

On the new primary, we may notice the start of the lease worker thread, but until the concerned replica stabilizes in the PRIMARY ONLINE state, it voluntarily postpones the lease check process (materialized by the StartedExcessLeaseSleep / ExcessSleepSucceeded state values).

In the next blog I will talk about improvements in the detection of the availability group replication latency.

Stay tuned!



The article SQL Server AlwaysOn: new service packs and new diagnostic capabilities appeared first on the dbi services blog.

What is the instance name?

Laurent Schneider - Mon, 2016-07-18 09:48

If your Oracle SID doesn’t match your instance name in init.ora, this is quite confusing.

Check my previous post, what is sid in oracle

In the instance_name column of the view v$instance, as well as in USERENV context, it matches the ORACLE_SID of the underlying operating system.

SQL> var ORACLE_SID varchar2(9)
SQL> set autoprint on
SQL> exec dbms_system.get_env('ORACLE_SID',:ORACLE_SID)
PL/SQL procedure successfully completed.
SQL> select sys_context('USERENV','INSTANCE_NAME') from dual;
SQL> select instance_name from v$instance;

This is not the same as the init.ora parameter

SQL> select name, value, description from v$parameter where name='instance_name';

NAME          VALUE     DESCRIPTION
------------- --------- ----------------------------------------
instance_name INS001    instance name supported by the instance

The instance_name doesn’t have to match anything. It’s of relevance if you use ADR. And you probably do; background_dump_dest and family are deprecated now. In your ADR documentation you’ll read


But this SID is actually your init.ora instance name. And not your ORACLE_SID.

Importing Windows scheduled tasks into a PowerShell object before 5.0

Matt Penny - Mon, 2016-07-18 07:43

You don’t need this on PowerShell 5.0 and upwards, because there’s a built-in cmdlet, but for previous versions:

convertfrom-csv $(schtasks /Query /S server1 /TN "run somesstuff" /V /FO CSV)

HostName : server1
TaskName : \run somesstuff
Next Run Time : N/A
Status : Ready
Logon Mode : Interactive only
Last Run Time : 13/07/2016 10:05:43
Last Result : 0
Author : matt
Task To Run : C:\powershell\Modules\somesstuff-PCs\run-somesstuff.bat
Start In : N/A
Comment : Scheduled job which does some stuff
Scheduled Task State :
Idle Time :
Power Management :
Run As User :
Delete Task If Not Rescheduled :
Stop Task If Runs X Hours and X Mins :
Schedule :
Schedule Type :
Start Time :
Start Date :
End Date :
Days :
Months :
Repeat: Every :
Repeat: Until: Time :
Repeat: Until: Duration :
Repeat: Stop If Still Running :

HostName : More detail at http://ourwebsite
TaskName : Enabled
Next Run Time : Disabled
Status : Stop On Battery Mode, No Start On Batteries
Logon Mode : matt
Last Run Time : Enabled
Last Result : 72:00:00
Author : Scheduling data is not available in this format.
Task To Run : One Time Only
Start In : 10:20:21
Comment : 25/05/2016
Scheduled Task State : N/A
Idle Time : N/A
Power Management : N/A
Run As User : Disabled
Delete Task If Not Rescheduled : Disabled
Stop Task If Runs X Hours and X Mins : Disabled
Schedule : Disabled
Schedule Type :
Start Time :
Start Date :
End Date :
Days :
Months :
Repeat: Every :
Repeat: Until: Time :
Repeat: Until: Duration :
Repeat: Stop If Still Running :

This outputs from schtasks in CSV format, then imports that into a PowerShell object.

Categories: DBA Blogs

How Can You Create A Column With AUTO_INCREMENT in Oracle SQL?

Complete IT Professional - Mon, 2016-07-18 06:00
If you’ve used MySQL, you’re probably familiar with the AUTO_INCREMENT feature. But it’s not available in Oracle. Read this article to find out how you can auto increment a column in Oracle SQL. What Is Auto Increment? An auto increment column, or an identity column in other databases, is a column that has its value […]
Categories: Development


Subscribe to Oracle FAQ aggregator