DBA Blogs

Partner Webcast – Platform as a Service with Oracle WebLogic and OpenStack

Platform as a Service is defined as a platform that facilitates the deployment of applications without the complexity of buying and managing the underlying hardware and software...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Keep jobs active with screen command

DBA Scripts and Articles - Thu, 2014-06-12 12:00

What is the screen command? When you have long running jobs, you need to keep them active when you disconnect from the server. The screen command allows you to detach your terminal from your session. By doing this, there is no link between your session and your terminal, and you can disconnect from the server while [...]
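For illustration, a minimal session might look like this (the session name and script are placeholders):

$ screen -S longjob        # start a named screen session
$ ./my_long_job.sh         # launch the long running job inside it
                           # press Ctrl-a d to detach; the job keeps running
$ screen -ls               # after reconnecting, list the sessions
$ screen -r longjob        # reattach to the detached session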

The post Keep jobs active with screen command appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Internet Scale Design: Part One

Pythian Group - Thu, 2014-06-12 07:52

Whether your company is a small, bootstrapped startup or a rapidly growing, well-funded tier 2 or 3 business, everyone dreams of having ‘Internet scale’ systems which are highly automated, anti-fragile, self-healing, and inexpensive to maintain.

The problem, however, is that those types of complex systems are only within the reach of well-funded companies who can afford to hire elite technical resources and expensive, leading edge technology systems, right?

Wrong!

Recently, I was working with one of my smaller retail clients. Their business is highly seasonal, doing the vast majority of their business during two separate two-month periods each year. During these periods, the demand on their system is extremely high, and downtime would be catastrophic because of the amount of their annual revenue generated during those times.

They run on a LAMP stack (Linux, Apache, MySQL, PHP) and they're hosted on a mix of shared VMs and colocation equipment. Their hosting costs are fixed year round, even though for most of the year they don't need two-thirds of it.

They were beginning their annual budget review when we brought up what the next steps were for them. About a year ago, I began introducing the idea of Amazon Web Services (AWS) cloud, taking more of a site reliability engineering (SRE) approach to their business, so we scheduled a conference call with their Chief Financial Officer, a few VPs, and one of my SREs.

“We would like to move you 100% into the Amazon Cloud,” I said. “In doing so, we will automate all of your systems using image and configuration management, and set up detailed monitoring, graphing, and trending systems. We will work with your developers to redesign your applications to take advantage of the platform and its flexibility. The end result will be a system that is considerably more anti-fragile, runs in multiple regions, offers significantly faster recovery in the event of major failures, and can scale down to very little and up to a much larger scale at a moment’s notice.” We assured them that the systems could be self-healing and would require very little management.

The Sr. VP of Product Management’s response surprised me.

“Bill, this sounds awesome. But we’re a pretty small company, and we couldn’t possibly afford that sort of system.” I laughed a bit and responded, “Actually, not only can you afford it, but it will save you tens of thousands of dollars.”

I explained the benefits of auto-scaling and the cost savings it would bring to their business model. We discussed how those cost savings would fund the expenses of having Pythian build and manage the new service.

His response? “This is amazing – why isn’t everyone doing this?”

The answer is twofold. I’ll be sharing my insights in part two, so stay tuned…

Categories: DBA Blogs

Adding an outline hint to a PS/Query

Bobby Durrett's DBA Blog - Wed, 2014-06-11 17:52

I just finished working with a coworker on speeding up a PS/Query.  It was running well in a development environment but not in production.  I tried to find a simple hint or two to make production run like development but didn’t have any luck.  Then I remembered that my script to get a query’s plan prints out a full set of outline hints like this:

  /*+
      BEGIN_OUTLINE_DATA
      USE_HASH_AGGREGATION(@"SEL$E903463E")
      USE_NL(@"SEL$E903463E" "CLS"@"SEL$4")
      USE_NL(@"SEL$E903463E" "SEC"@"SEL$3")
      USE_NL(@"SEL$E903463E" "SOC"@"SEL$4")

... lines removed to make this post shorter ...

      OUTLINE_LEAF(@"SEL$F5BB74E1")
      PUSH_PRED(@"SEL$F5BB74E1" "A1"@"SEL$1" 1)
      OUTLINE_LEAF(@"SEL$E903463E")
      ALL_ROWS
      OPT_PARAM('_unnest_subquery' 'false')
      DB_VERSION('11.2.0.3')
      OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

A coworker edited the PS/Query and tried to add this hint but got errors because of the quotation marks.  So, I simplified the hint by removing all the double quotes and taking out the lines with single quotes because I knew they weren’t needed.  They only related to parameters that I knew were already set in production.

Here is the new quote-less hint:

  /*+
      BEGIN_OUTLINE_DATA
      USE_HASH_AGGREGATION(@SEL$E903463E)
      USE_NL(@SEL$E903463E CLS@SEL$4)
      USE_NL(@SEL$E903463E SEC@SEL$3)
      USE_NL(@SEL$E903463E SOC@SEL$4)

... lines removed to make this post shorter ...

      OUTLINE_LEAF(@SEL$F5BB74E1)
      PUSH_PRED(@SEL$F5BB74E1 A1@SEL$1 1)
      OUTLINE_LEAF(@SEL$E903463E)
      ALL_ROWS
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

We ran the modified PS/Query in production and I verified that the query was running the correct plan.  The only weird thing was that, because the query has a GROUP BY, PS/Query stuck the hint in both the GROUP BY clause and the SELECT clause.

SELECT /*+ BEGIN_OUTLINE_DATA USE_HASH_AGGREGATION(@SEL$E90346...

... middle of select statement ...

GROUP BY /*+ BEGIN_OUTLINE_DATA USE_HASH_AGGREGATION(@SEL$E90346...

... end of select statement ...

At first this put us off, but really the hint in the GROUP BY clause is just a comment and is syntactically insignificant, even though it is ugly.

By the way, here are the options I used to output the outline hint:

select * from 
table(dbms_xplan.display('PLAN_TABLE',NULL,'ADVANCED'));

I get all my plans this way now.  It generates a lot of output that I don’t usually use.  But, seeing it reminds you that it is there if you need it.
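For completeness, the usual sequence is to run EXPLAIN PLAN on the statement first and then display it – the query and table below are just placeholders:

SQL> explain plan for
  2  select * from emp where deptno = 10;

Explained.

SQL> select * from
  2  table(dbms_xplan.display('PLAN_TABLE',NULL,'ADVANCED'));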

- Bobby

Categories: DBA Blogs

so who is using oracle result caching? ( makes me grumpy )

Grumpy old DBA - Wed, 2014-06-11 17:05
Not giving out too many details ... but some people trying to use this feature ( cough cough ) have had issues.

Getting an ORA-600 and having to restart a database instance is not necessarily good, right?

Some things sound so good "in theory" and maybe in some distant release the ( oracle code issues ) will be all worked out.

Now staying away from using it ... ( cough cough ) ...
Categories: DBA Blogs

Data Guard 12c New Features: Far Sync & Real-Time Cascade

The Oracle Instructor - Wed, 2014-06-11 08:10

UKOUG Oracle Scene has published my article about two exciting Data Guard 12c New Features:

http://viewer.zmags.com/publication/62b883ad#/62b883ad/44

Far Sync Instance enables Zero-Data-Loss across large distances

Hope you find it useful :-)


Tagged: 12c New Features, Data Guard
Categories: DBA Blogs

Availability Group – Delay with Listener Connectivity After Failover

Pythian Group - Wed, 2014-06-11 07:45

The beauty of working for multiple clients from different industries is that you get exposed to a myriad of environment setups and configurations. Every company has its own standards for network and server configuration, as well as different hardware vendors. This introduces its own kinks and excitement to your everyday work – half of which you'd likely never encounter if you were working in-house and using the same hardware.

The past week we encountered a rare and interesting issue with an Availability Group. The issue was two-fold: first, it was not failing over automatically on one node, though that was not as exciting as the second part – when it was able to fail over correctly, our client was experiencing delays with the availability of the listener name outside its own subnet after failover, automatic or otherwise. It was reachable within its own subnet but took more than thirty minutes to be reachable outside of it, even though the failover happened smoothly and without error.

The first part was fairly straightforward. Checking the cluster logs and event logs, the automatic failover was throwing the error below when trying to fail over on one of the nodes.

Cluster network name resource 'Listener_DNS_NAME' failed registration of one or more associated DNS name(s) for the following reason:
DNS operation refused.

Ensure that the network adapters associated with dependent IP address resources are configured with at least one accessible DNS server.

The error is as it says: the computer object does not have the appropriate permissions on the domain to register the DNS name resource for the listener. For the cluster to perform this operation smoothly, "Authenticated Users" should have Read/Write All permissions on the computer objects for the cluster, its nodes, and the listener DNS name. To do this, log in to the Active Directory server and follow the steps below (a scripted alternative is sketched after the list):

  1. Open Active Directory Users and Computers.
  2. On the View menu, select Advanced Features.
  3. Right-click the object  and then click Properties.
  4. On the Security tab, click Advanced to view all of the permission entries that exist for the object.
  5. Verify that the Authenticated Users is in the list and has the permission to Read and Write All. Add the required permissions then Save the changes.
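If you would rather script this than click through the GUI, something along these lines should work from a domain controller – note that the distinguished name below is hypothetical and must match where your cluster name object actually lives, and GR/GW are the generic read/write permission bits:

dsacls "CN=Listener_DNS_NAME,CN=Computers,DC=Domain,DC=com" /G "Authenticated Users:GRGW"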

Now, after doing that and testing the failover, we encountered a different error, a Kerberos-related one shown below.

The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server ComputerName$. The target name used was HTTP/ComputerName.Domain.com. This indicates that the target server failed to decrypt the ticket provided by the client. This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using. Ensure that the target SPN is only registered on the account used by the server. This error can also happen if the target service account password is different than what is configured on the Kerberos Key Distribution Center for that target service. Ensure that the service on the server and the KDC are both configured to use the same password. If the server name is not fully qualified, and the target domain (Domain.COM) is different from the client domain (Doamin.COM), check if there are identically named server accounts in these two domains, or use the fully-qualified name to identify the server.

Ah, the often overlooked SPN. This should be part of your installation process – setting the SPN. To keep the story short so we can get to the exciting part, you can refer here for the detailed instructions on how to configure the SPN for SQL Server. Aside from registering the SPN for each of the nodes as specified in the MSDN link, you'll also need to register the SPN for the listener, where 1433 is the port being used by your SQL Server:

setspn -A MSSQLSvc/Listener_DNS_NAME.Domain.com:1433 DOMAIN\SQLServiceAccount
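As a quick sanity check, you can list the SPNs registered against the service account and confirm the listener entry is present:

setspn -L DOMAIN\SQLServiceAccount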

Registering these SPNs enables Kerberos for client connections to the Availability Group listener and addresses the errors we received above. After configuring the SPNs for the servers, automatic failover was running smoothly – or so we thought.

The client came back to us saying that it was taking some time for the application to connect to the listener name. Checking the cluster logs and SQL Server's end, everything appeared to be in order: no errors were being thrown and the server was accessible. Now it gets interesting. A ping test within the database subnet was successful, but a ping test outside of it was timing out, although it was able to connect after a substantial amount of time. After a few more tests we confirmed the behavior was the same on both nodes: it takes more than thirty minutes for the name to be reachable outside of the database subnet.

After involving the network admin, we found out that a MAC address conflict was happening. That was our "Aha!" moment. Windows 2003 servers and later issue a Gratuitous ARP (GARP) request during failover, and some switches/devices do not forward Gratuitous ARP by default. This causes the devices on the other end of the switch to not have the correct MAC address associated with the name, which causes the failure. It often corrects itself when the router detects the failures, does a broadcast, and gets the correct value – that's why it becomes accessible after some time. This KB details the issue. To address this, changes must be made to the configuration of the switches; you'll need to check with your hardware vendor for this.

Now, that could have been the end of our problems, but after enabling the switches to forward GARP, we found out that the server itself was not sending a GARP request. This is a server configuration issue and requires some registry changes. Open the registry on the server and locate the key below:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

From there, check whether there is a key for ArpRetryCount; if there is, make sure that its value is not set to 0 (valid values are between 0 and 3). After changing this and restarting the servers, everything worked perfectly.
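If you prefer the command line over regedit, a sketch of the same change from an elevated prompt (3 being the usual default) would be:

reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v ArpRetryCount /t REG_DWORD /d 3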

The last two issues are a bit rare, and something I wouldn't have experienced if the client weren't using that particular hardware and that particular standard configuration.

Categories: DBA Blogs

SCN to Timestamp and back

DBASolved - Tue, 2014-06-10 09:37

When working with Oracle GoldenGate, understanding the System Change Number (SCN) is important.  The SCN is an internal number maintained by the database that keeps track of the changes made to the database for recovery purposes.  In many environments, instantiating an Oracle GoldenGate configuration requires knowing where to start the replicat from.

To find the current SCN for the database, Oracle has made this pretty easy.  There is a column in the V$DATABASE view called CURRENT_SCN.


select current_scn from v$database;

Now that the current SCN has been found, it can be used in a Data Pump export parameter file via FLASHBACK_SCN to ensure a consistent copy of the database as of that point in time.  Once the import is completed on the target side, the GoldenGate replicat can be started at that SCN using the ATCSN or AFTERCSN option.
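To sketch that end to end – the schema, directory, and replicat names below are hypothetical, and the SCN is the one used later in this post:

$ expdp system SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr_scn.dmp FLASHBACK_SCN=8697520

... import the dump on the target, then in GGSCI ...

GGSCI> START REPLICAT rhr, AFTERCSN 8697520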

What does the SCN really mean to an Oracle GoldenGate admin though?  As outlined above, it really is just a point-in-time placeholder that keys admins in on a place to start the replicat.  At times there may be a need to start the replicat from an adjusted point in time.  In order to do this, it is handy to know how to convert the SCN to a timestamp and back to an SCN.  For this, Oracle provides two functions called SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN.

Using these functions is pretty simple.  The following code blocks demonstrate how to convert from SCN to timestamp and back.

Convert SCN to Timestamp:


select scn_to_timestamp(8697520) from dual;

Convert Timestamp to SCN:


select timestamp_to_scn('10-JUN-14 10.50.55.000000000 AM') from dual;

Knowing how to convert the SCN to Timestamp and back to SCN can be very handy in many different situations and useful when working with Oracle GoldenGate.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: General, Golden Gate, Replication
Categories: DBA Blogs

EMEA OTN Virtual Technology Summit - Hands-On Learning

The Oracle Technology Network (OTN) is excited to invite you to our first Virtual Technology Summit. EMEA – Thursday July 10th / 9am to 1pm BST / 10am – 2pm CET / 12pm to 4pm MSK / GST - Register...

We share our skills to maximize your revenue!
Categories: DBA Blogs

2014 Hadoop Summit Summary

Pythian Group - Mon, 2014-06-09 07:41

Last week I was at the 2014 Hadoop Summit in San Jose, trying to keep abreast of the ever-changing Apache landscape: what projects are up-and-coming, what projects are ready for production, and most importantly which projects can solve problems for our clients. It was also a great chance to hear about real, production deployments – both in the halls and in some of the presentations. And Doug Cutting and Arun Murthy had an excellent panel about the future of the platform, both from a business and technology perspective.

Security

Hadoop Security was an incredibly popular topic this year, reflecting the fact that Hadoop deployments are growing up and fulfilling their stated purpose: to consolidate organizations’ data and make it visible to everyone. “Visible to everyone” is a noble goal, but in reality PII and other sensitive information needs to be guarded, and access needs to be audited and limited. Apache Knox makes it possible to audit all user interactions with the cluster, tying user access to an existing identity management system. Cloudera Sentry provides fine-grained user permissions for Hive, HBase and Search, similar to existing RDBMSes. During the conference Cloudera also announced their acquisition of Gazzang, who make  a platform for key management and data encryption at rest in Hadoop (similar to Intel’s Project Rhino).

Booz Allen Hamilton also put on an excellent presentation about a real client system storing sensitive data on EMR using ephemeral storage – I strongly recommend looking at this as an example of what’s possible now, and also how difficult it is to implement cell or row-level security policies in Hadoop.

YARN

YARN is the new “data operating system” responsible for all computations running on your cluster. It handles container placement and resource allocation to allow multiple frameworks like MapReduce, Tez and Spark to coexist on the same nodes without competing for resources. Applications can also be written to run directly on YARN, opening up the Hadoop cluster to support more general purpose tasks (Yahoo is apparently encoding video on YARN with low latency, although details were scarce), and making it easier for developers to provide distributed, fault-tolerant applications. Early adopters have been using YARN in production for a while, but now every major vendor is including it in their distribution, and features like the High-Availability ResourceManager (or “HARMful YARN”) are available.

Many talks from different sources (Twitter, Yahoo, HortonWorks) focused on different aspects of YARN: new features, production deployment hints, and the general architecture.

Storm

I thought with Spark becoming so popular and widely supported – in every major distribution – Spark Streaming would supplant Storm as the leading complex event processing engine. Visiting Hadoop Summit, however, it seems like Storm has plenty of momentum. It’s been ported to YARN to work seamlessly within your cluster, and multiple presentations demonstrated real-world systems running on Storm right now, as well as integrations with other technologies like R and Pig. Spark overall had nearly as many presentations, but these were more technical and theoretical: it might be another year before we see many presentations about Spark and Spark Streaming applications being deployed at scale.

Falcon

Apache Falcon had two talks this summit, and it’s been incubating since last year. It caught my attention as an open-source project which is aiming to supplant existing proprietary tools. Falcon allows you to declaratively define ETL flows in terms of sources, sinks and transformations, and schedule them on a regular basis. Flows are monitored and idempotent, and late data can be handled according to user-defined rules. Right now the emphasis is on power: an XML config coordinates Hive, Pig, Oozie and distcp, but more user-friendly features like libraries of transformations and a web UI for visualizing flows will bring Falcon closer to the feature set of commercial ETL tools.

SQL on Hadoop

This space has settled down a lot since last year, when Stinger and Impala seemed to invade every track and time slot. Yahoo still put on a number of Hive-on-Tez architecture and performance reviews, and less established projects like Apache Tajo (incubating), BlinkDB, Actian’s Vortex and Facebook’s Presto made appearances. Even though performance has been increasing year over year, SQL-on-Hadoop engines are still wildly variable in their features and performance, and there aren’t any clear winners right now – new entrants still have a chance to make their mark. If you’re curious about choosing a SQL-on-Hadoop engine, check out my presentation this year surveying the landscape.

More to Watch

There were so many great presentations, it was hard to choose for every time slot. Once the videos are released I also recommend watching:

  • Ted Dunning’s stunningly simple anomaly detection
  • Jagane Sundar (of WanDisco) explaining Paxos
  • Koji Noguchi (of Yahoo) with tips about stabilising your Hadoop clusters

Were you at Hadoop Summit? What were your favourite presentations and what trends did you notice?

Categories: DBA Blogs

how to hint ... a reminder

Grumpy old DBA - Sun, 2014-06-08 17:41
From Jonathan Lewis: how to hint part 1

A good primer there ...

Categories: DBA Blogs

The LGWR Three Second Rule. Really?

Does the Oracle Database 12c Log Writer Really Sleep For Three Seconds?

I have learned that part of Oracle Database performance tuning is checking whether what you have been taught is actually true. What I'm writing about today has been done before in Oracle Database 10g and 11g, but I wanted to document this using 12c.

When I was first learning about the Oracle Database process architecture, the instructor said there are a number of reasons the log writer (LGWR) background process will wake up and do "something." In addition to this, if the LGWR was not signaled to wake up, every three seconds it would wake up by itself and check if there was something it needed to do.

Is this "three second rule" really true? That's what this posting is all about.

If you don't want to read the details below, I created a short video for you to watch.


Here is what you just saw in static text. First let's get the LGWR process PID. Here's one way to do this.
$ ps -eaf | grep lgwr
oracle 41977 1 0 May16 ? 00:00:50 ora_lgwr_prod35
oracle 46294 46110 0 07:39 pts/0 00:00:00 grep lgwr
oracle 60264 1 0 Jan13 ? 00:14:14 ora_lgwr_prod30
My Oracle Database 12c instance is prod35, so the process I'm interested in is 41977.

Note: Since this is a 12c instance, there are also two additional log writers, lg00 and lg01. But that's a topic for another posting!

At this point, for simplicity's sake, it's important there be little instance activity. It will make it much simpler to find what we are looking for... the three second sleep.
To watch the LGWR process, I'm going to use the Linux strace command. To easily see the system call timing, I'm going to use the -r option. I'm also going to redirect the output, using the -o option to the out.txt file. I'll let the below strace run for about 15 seconds. OK... here we go:
$ strace -rp 41977 -o out.txt
Process 41977 attached - interrupt to quit
^CProcess 41977 detached
To break out of the strace, I simply did a control-c and the result is what you see above. Let's look at the first three lines of the out.txt file so I can tell you what we will be looking for.
$ head -3 out.txt
0.000000 clock_gettime(CLOCK_MONOTONIC, {19940373, 720653971}) = 0
0.000186 gettimeofday({1401979552, 853214}, NULL) = 0
0.000207 gettimeofday({1401979552, 853444}, NULL) = 0
With the -r option, the first column (far left) shows how long the system call directly above took. For example, in the snippet above, the first clock_gettime call took 0.000186 seconds and the first gettimeofday call took 0.000207 seconds. Got it? Read on!
You may wonder why the system call time is on the next line down. It actually makes sense, because strace displays information as it becomes available. When the call starts it obviously does not know the call duration, so it just displays the call itself. When the call completes and the time is therefore known, strace shows the return details (e.g., "= 0"), line feeds, prints the call time, displays the next call, and repeats. OK, back to our objective.

What we are looking for is a three second sleep. In other words, we are looking for a call that takes three seconds. And since a call's duration is shown in the first column of the next line down, we want to look for values in the first column that start with 3.0.

Below is an example of what I found in the out.txt file.
...
0.000033 clock_gettime(CLOCK_MONOTONIC, {19940376, 746821505}) = 0
0.000033 semtimedop(7503875, {{16, -1, 0}}, 1, {3, 0}) = -1 EAGAIN
                   (Resource temporarily unavailable)
3.000436 clock_gettime(CLOCK_MONOTONIC, {19940379, 747351774}) = 0
0.000188 gettimeofday({1401979558, 879922}, NULL) = 0
0.000171 clock_gettime(CLOCK_MONOTONIC, {19940379, 747708976}) = 0
...
Looking at the above output snippet, notice the time of 3.000436. Then look at the call directly above it: the semtimedop call. This call can be used to put a process to sleep, but the process can be woken up. In other words, the LGWR can set an alarm for three seconds, go to sleep, and if it is not disturbed, it will wake up in three seconds. Shown above is an example of the LGWR setting the alarm for three seconds and not being woken up before the alarm fired. This is an example of the three second rule!
If you do what I did and look at your output file, you'll see lots of these "three seconds." If you do this on a pretty-much idle Oracle Database 12c instance's LG01 process (one of the two default LGWR children), it may contain mostly semtimedop calls, each with a three second sleep.

As you might have guessed, the sleep time is part of the system call. Look closely at the semtimedop call below:
semtimedop(7503875, {{16, -1, 0}}, 1, {3, 0})
Notice the "{3, 0}" near the far right? That defines the maximum delay, that is, the sleep time. The 3 is the seconds and the 0 is the number of nanoseconds. If you're interested in other sleep options and the actual code to make this work, check out this posting.

Oracle could have used the select system call (not "select * from...") but that would have been a bad idea, because then the process could not be woken before the three second limit. This topic touches very closely on Oracle Time Based Analysis and more specifically on Oracle's wait interface. I really get into this, including sample C code, in my online video seminar entitled Introduction To Oracle Time Based Analysis.

So there you go! The Oracle kernel code developers set the default sleep time to three seconds along with the ability for the process to be awoken, should the need arise.

Give It A Try
Check this out for yourself on a test or QA system. Once you see it for yourself, you'll never forget about it!

But it gets even better! If you really want to dig into this, do the same but on your database writer (DBWR). Does the DBWR have a three second rule? Does Oracle use the same system call? I'm not going to tell you the answer. You'll have to figure that out for yourself!

Always A Three Second Delay?
You may have noticed that sometimes the process does not sleep the entire three seconds and at other times Oracle sets the sleep time less than three seconds. What up with that? That will be my next posting!

Thanks for reading,

Craig.
https://resources.orapub.com/OraPub_Online_Seminars_About_Oracle_Database_Performance_s/100.htm

You can watch seminar introductions (like above) for free on YouTube!
If you enjoy my blog, I suspect you'll get a lot out of my online or in-class training. For details go to www.orapub.com. I also offer on-site training and consulting services.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com. Another option is to send an email to OraPub's general email address, which is currently info@ orapub. com.
Categories: DBA Blogs

Log Buffer #374, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-06-06 07:51

This high-spirited Log Buffer Edition covers some of the best blog posts of the previous week from Oracle, SQL Server and MySQL bloggers from across the planet.

Oracle:

GoldenGate 12c Trail Encryption and Credentials with Oracle Wallet.

Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution.

New Rapid Install StartCD 12.2.0.48 for EBS 12.2 Now Available.

Oracle XE 11.2 for Windows 64 bit is here.

Oracle Accelerate is Oracle’s approach for providing simple to deploy, packaged, enterprise-class software solutions to growing midsize organizations through its network of expert partners.

SQL Server:

The ubiquity of databases and the potentially valuable information stored in them makes them attractive targets for people who want to steal data or harm its owner by tampering with it.

Who Truncated That Table & What Do I Do Now?

SQL Server Hardware Configuration Best Practices.

The DBA Team tackles data corruption.

T-SQL User-Defined Functions: Ten Questions You Were Too Shy To Ask.

MySQL:

Percona Replication Manager – Renaming Cluster Hostnames.

Performance impact of MySQL query cache on modern hardware.

How To Install Nginx With PHP5 (And PHP-FPM) And MySQL Support On CentOS 6.5.

Build a MySQL Fabric Farm in one step using AWS CloudFormation.

MariaDB 10 is a Sandbox killjoy?

Categories: DBA Blogs

Oracle Database 12c Spatial: Vector Performance Acceleration

Most business information has a location component, such as customer addresses, sales territories and physical assets. Businesses can take advantage of their geographic information by...

We share our skills to maximize your revenue!
Categories: DBA Blogs

A Look Back on SREcon14

Pythian Group - Thu, 2014-06-05 07:57

The landscape of the systems administration industry is shifting. As the trend continues towards Infrastructure as a Service (IaaS), Software as a Service (SaaS), and other *aaS offerings, the traditional role of systems administration is being challenged. While this traditional model will remain (well beyond its expiration date) in large enterprises that have invested heavily in their hardware, processes, and infrastructure, more and more companies will be looking to maximize their investment and get the most business value out of their resources.

The industry is being called on with increasing frequency to reinvent systems administration to meet the needs and the demands of a modern business and technology stack.

The traditional way that we have been doing systems administration is on the decline, as the desire to break down the old silos between operations and development – to maximize business value, inter-group communication, and collaboration – forces both sides to evolve new skills and, at the core, adopt new philosophies.

One such philosophy is Site Reliability Engineering, or SRE for short.

Generally accepted to have started at Google, the SRE movement has now spread well beyond to other companies such as Dropbox, Netflix, Twitter, Facebook, LinkedIn, and others.

As my colleague Bill Lincoln will explain in an upcoming post, although this has started with internet scale organizations, SRE is a philosophy put into action that companies of all sizes can benefit from.

At its core, the prime directive of SRE is the reliability of a service as a whole, and this subtle yet important paradigm shift is what is driving change within the systems administration and software development industries towards a place where both groups have a unified objective of reliability, and the differences between SRE and SWE become subtle and fluid.

I have been a strong advocate for the SRE philosophy as a major emerging trend in the systems administration space with the Pythian leadership, and was thrilled to be able to attend the USENIX Site Reliability Engineering Conference (SRECon14), held on Friday, May 30, 2014 in Santa Clara, California, USA, along with two of my colleagues from the Pythian Enterprise Infrastructure Services Group.

It was a single day, but from the first keynote delivered by Ben Treynor, Vice President, Engineering and Google Site Reliability Tsar, to the final Talk by Michael “Mikey” Dickerson on how Silicon Valley’s SREs saved Healthcare.gov, the information delivered was packed full of value, and a good amount of inspiration.

With a prime directive of “reliability”, the talks delivered spanned the entire lifecycle of an IT service: Designing for Reliability, Deploying at Scale, Metrics and Monitoring for Reliability, Cascading Failure of a Service, and Disaster Preparedness.

The call to action was also clear: you are absolutely within your rights to not like that it is happening, but there is no denying that change is coming. We (sysadmins and software engineers) can choose to evolve, we can choose to challenge ourselves and “up our game”, or we can run the very real risk of being left behind.

SRECon14 was a great success and I look forward to attending the event again. I would enthusiastically recommend it to my friends and colleagues who are in systems administration / software engineering roles.

Finally, I end with a quote from Ben Treynor: “Let the packets flow, and the pager remain silent.”

Categories: DBA Blogs

v$sql_cs_statistics bug in 12c?

Bobby Durrett's DBA Blog - Wed, 2014-06-04 17:27

I think I’ve run across a bug where v$sql_cs_statistics is not populated in 12c as it was in 11.2.0.3.

I’ve messed with adaptive cursor sharing using an example on Tim Hall’s web site: web example

I’ve tried to break it and change it in various ways to understand adaptive cursor sharing.  I was using some 11g databases, and on those systems the example results in output from v$sql_cs_statistics.  But in 12c the view returns nothing.  Maybe this is normal, but it seems bug-like to me.

Here is a zip of my version of Tim’s example and its output on an 11.2.0.3 and a 12.1 database: zip

In the working 11.2.0.3 example you have this output from v$sql_cs_statistics:

ORCL::SYSTEM>SELECT * FROM v$sql_cs_statistics WHERE sql_id = '9bmm6cmwa8saf';

ADDRESS          HASH_VALUE SQL_ID        CHILD_NUMBER BIND_SET_HASH_VALUE P EXECUTIONS ROWS_PROCESSED BUFFER_GETS   CPU_TIME
---------------- ---------- ------------- ------------ ------------------- - ---------- -------------- ----------- ----------
0000000089035390 4171522382 9bmm6cmwa8saf            1          2064090006 Y          1          50001         501          0
0000000089035390 4171522382 9bmm6cmwa8saf            0          2342552567 Y          1              3           3          0

On 12c you get nothing:

ORCL:CDB$ROOT:SYSTEM>SELECT * FROM v$sql_cs_statistics WHERE sql_id = '9bmm6cmwa8saf';

no rows selected

The other queries in the example all return results so I guess it is a bug.

- Bobby

Categories: DBA Blogs

Common Users & SYSDBA with #Oracle 12c Multitenancy

The Oracle Instructor - Wed, 2014-06-04 08:36

A 12c multitenant database introduces the new concept of local users and common users. This article shows simple use cases where DBAs may want to create common users – in contrast to the common users that are created automatically, like SYS, SYSTEM, MDSYS, etc.

A typical requirement is to have a superuser other than SYS, but with the same power. Like the common user C##_SYS in the picture below.

Or suppose we have many pluggable databases (PDBs) and different superusers responsible for different PDBs like C##_ADMIN1 and C##_ADMIN2:

[Figure: Common Users in Oracle 12c]

Let’s implement it as above. Initially, my demo environment looks like this:

 

SQL> select name,open_mode,con_id from v$pdbs;

NAME                           OPEN_MODE      CON_ID
------------------------------ ---------- ----------
PDB$SEED                       READ ONLY           2
PDB1                           READ WRITE          3
PDB2                           READ WRITE          4
PDB3                           READ WRITE          5

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          1
SYSBACKUP                      FALSE FALSE FALSE TRUE  FALSE FALSE          1
SYSKM                          FALSE FALSE FALSE FALSE FALSE TRUE           1

At first, I create C##_SYS and grant SYSDBA as a common privilege to that new user:

SQL> create user c##_sys identified by oracle container=all;

User created.

SQL> grant sysdba to c##_sys container=all;

Grant succeeded.

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          1
SYSBACKUP                      FALSE FALSE FALSE TRUE  FALSE FALSE          1
SYSKM                          FALSE FALSE FALSE FALSE FALSE TRUE           1
C##_SYS                        TRUE  FALSE FALSE FALSE FALSE FALSE          0

C##_SYS can now do anything to any PDB:

SQL> connect c##_sys/oracle@edd2r6p0/pdb1 as sysdba
Connected.
SQL> shutdown immediate
Pluggable Database closed.
SQL> connect / as sysdba
Connected.
SQL> select name,open_mode,con_id from v$pdbs;

NAME                           OPEN_MODE      CON_ID
------------------------------ ---------- ----------
PDB$SEED                       READ ONLY           2
PDB1                           MOUNTED             3
PDB2                           READ WRITE          4
PDB3                           READ WRITE          5

SQL> connect c##_sys/oracle@edd2r6p0/pdb1 as sysdba
Connected.
SQL> startup
Pluggable Database opened.
SQL> connect / as sysdba
Connected.
SQL> select name,open_mode,con_id from v$pdbs;

NAME                           OPEN_MODE      CON_ID
------------------------------ ---------- ----------
PDB$SEED                       READ ONLY           2
PDB1                           READ WRITE          3
PDB2                           READ WRITE          4
PDB3                           READ WRITE          5

Notice that there is a subtle difference between granting SYSDBA of the container database (CDB) as a local or a common privilege:

SQL> revoke sysdba from c##_sys container=all;

Revoke succeeded.

SQL> grant sysdba to c##_sys;

Grant succeeded.

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          1
SYSBACKUP                      FALSE FALSE FALSE TRUE  FALSE FALSE          1
SYSKM                          FALSE FALSE FALSE FALSE FALSE TRUE           1
C##_SYS                        TRUE  FALSE FALSE FALSE FALSE FALSE          1

SQL> connect c##_sys/oracle@edd2r6p0/pdb1 as sysdba
ERROR:
ORA-01031: insufficient privileges


Warning: You are no longer connected to ORACLE.

C##_SYS has SYSDBA of the CDB “only”, therefore the error. Although:

SQL> connect c##_sys/oracle as sysdba
Connected.
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> alter session set container=pdb1;

Session altered.

SQL> shutdown immediate
Pluggable Database closed.
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> startup
Pluggable Database opened.
SQL> connect c##_sys/oracle as sysdba
Connected.
SQL> select name,open_mode,con_id from v$pdbs;

NAME                           OPEN_MODE      CON_ID
------------------------------ ---------- ----------
PDB$SEED                       READ ONLY           2
PDB1                           READ WRITE          3
PDB2                           READ WRITE          4
PDB3                           READ WRITE          5

However, the proper way is probably granting it as a common privilege:

SQL> revoke sysdba from c##_sys;

Revoke succeeded.

SQL> grant sysdba to c##_sys container=all;

Grant succeeded.

Now to the implementation of C##_ADMIN1 and C##_ADMIN2. The point here is that SYSDBA can be granted as a local privilege for certain PDBs only, to the effect that different superusers can be responsible for different groups of PDBs:

SQL> create user c##_admin1 identified by oracle container=all;

User created.

SQL> alter session set container=pdb1;

Session altered.

SQL> grant sysdba to c##_admin1 container=current;

Grant succeeded.

SQL> connect / as sysdba
Connected.
SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          1
SYSBACKUP                      FALSE FALSE FALSE TRUE  FALSE FALSE          1
SYSKM                          FALSE FALSE FALSE FALSE FALSE TRUE           1
C##_SYS                        TRUE  FALSE FALSE FALSE FALSE FALSE          0
C##_ADMIN1                     TRUE  FALSE FALSE FALSE FALSE FALSE          3

6 rows selected.

For now, C##_ADMIN1 can only connect to PDB1:

SQL> connect c##_admin1/oracle@edd2r6p0/pdb1 as sysdba
Connected.
SQL> select count(*) from session_privs;

  COUNT(*)
----------
       233

SQL> connect c##_admin1/oracle@edd2r6p0/pdb2 as sysdba
ERROR:
ORA-01017: invalid username/password; logon denied


Warning: You are no longer connected to ORACLE.

The completed implementation of the picture above:

SQL> connect / as sysdba
Connected.
SQL> alter session set container=pdb2;

Session altered.

SQL> grant sysdba to c##_admin1 container=current;

Grant succeeded.

SQL> connect / as sysdba
Connected.
SQL> create user c##_admin2 identified by oracle;

User created.

SQL> alter session set container=pdb3;

Session altered.

SQL> grant sysdba to c##_admin2 container=current;

Grant succeeded.

SQL> connect / as sysdba
Connected.
SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          1
SYSBACKUP                      FALSE FALSE FALSE TRUE  FALSE FALSE          1
SYSKM                          FALSE FALSE FALSE FALSE FALSE TRUE           1
C##_SYS                        TRUE  FALSE FALSE FALSE FALSE FALSE          0
C##_ADMIN1                     TRUE  FALSE FALSE FALSE FALSE FALSE          3
C##_ADMIN1                     TRUE  FALSE FALSE FALSE FALSE FALSE          4
C##_ADMIN2                     TRUE  FALSE FALSE FALSE FALSE FALSE          5

8 rows selected.

The whole example is from my current 12c New Features course in Düsseldorf, by the way. Hope you find it useful :-)
As always: Don’t believe it, test it!


Tagged: 12c New Features
Categories: DBA Blogs

Expanding the Couchbase Collector for Diamond

Pythian Group - Wed, 2014-06-04 08:24
The code

For the impatient ones, the Couchbase collector can be found on GitHub:

Couchbase Collector

Follow the instructions in the README file to get it installed under your diamond!

Intro

If you have been involved with metric collections at any point you may have heard of BrightCove’s Diamond.

Diamond is a little gem when it comes to metrics collection. With its modular architecture, it is possible to collect metrics from a large collection of operating system and software components.

In addition to that, it is able to ship those metrics to a diverse range of trending software including Graphite, RRD or anything that supports StatsD.

While recently working with Couchbase, I needed to collect and ship metrics using Diamond; a GitHub project doing exactly that was brought to my attention.

Unfortunately the author, zooldk, has only one entry in the commit history, listed as “Initial skeleton of collector”, and the only statistic currently collected is itemCount from basicStats.

Luckily the Python code is quite simple and straightforward, so I went ahead and extended it.

First let’s have an overview of the metrics you can expect to see in Graphite after installing the collector.

What did we achieve?

The target is to graph, as many of the useful Couchbase metrics as possible.

After installing the extended Couchbase Collector this is what we can expect to see in Graphite:

[Screenshot: the Couchbase metrics tree as it appears in Graphite]

Here is a plot of memory used by Couchbase on my (memory starved) vm:
[Screenshot: Graphite plot of basicStats memUsed]

A bit of theory: (Data) Buckets and Couchbase cluster metrics

Couchbase is a NoSQL database using JSON for Documents. It is highly scalable and very easy to create a cluster.

For the sake of working on extending the above mentioned metrics collector, I installed the Couchbase server, community edition on two VMs.

My VMs have IP addresses: 192.168.60.100 and 192.168.60.101.

I mostly used the default parameters in the setup and installed both demo databases, “beer-sample” and “gamesim-sample”.

My Couchbase user interface now looks like:

[Screenshot: the Couchbase user interface after the basic installation]

Metrics in Couchbase

Collecting metrics from Couchbase Buckets is as simple as executing a GET request:

For example:

http://192.168.60.100:8091/pools/default/buckets/beer-sample

$ curl -s http://192.168.60.100:8091/pools/default/buckets/beer-sample
{"name":"beer-sample","bucketType":"membase","authType":"sasl","saslPassword":"","proxyPort":0,"replicaIndex":false,"uri":"/pools/default/buckets/beer-sample?bucket_uuid=3a088dd60672ce16aea01c738ec96928","streamingUri":"/pools/default/bucketsStreaming/beer-sample?bucket_uuid=3a088dd60672ce16aea01c738ec96928","localRandomKeyUri":"/pools/default/buckets/beer-sample/localRandomKey","controllers":{"compactAll":"/pools/default/buckets/beer-sample/controller/compactBucket","compactDB":"/pools/default/buckets/default/controller/compactDatabases","purgeDeletes":"/pools/default/buckets/beer-sample/controller/unsafePurgeBucket","startRecovery":"/pools/default/buckets/beer-sample/controller/startRecovery"},"nodes":[{"couchApiBase":"http://192.168.60.100:8092/beer-sample","systemStats":{"cpu_utilization_rate":16.831683168316832,"swap_total":855629824,"swap_used":112218112,"mem_total":1968685056,"mem_free":934641664},"interestingStats":{"cmd_get":0.0,"couch_docs_actual_disk_size":138325417,"couch_docs_data_size":137479323,"couch_views_actual_disk_size":637700,"couch_views_data_size":616830,"curr_items":7888,"curr_items_tot":7889,"ep_bg_fetched":0.0,"get_hits":0.0,"mem_used":99496472,"ops":0.0,"vb_replica_curr_items":1},"uptime":"352954","memoryTotal":1968685056,"memoryFree":934641664,"mcdMemoryReserved":1501,"mcdMemoryAllocated":1501,"replication":0.0,"clusterMembership":"active","status":"healthy","otpNode":"ns_1@192.168.60.100","thisNode":true,"hostname":"192.168.60.100:8091","clusterCompatibility":131072,"version":"2.2.0-837-rel-community","os":"x86_64-unknown-linux-gnu","ports":{"proxy":11211,"direct":11210}}],"stats":{"uri":"/pools/default/buckets/beer-sample/stats","directoryURI":"/pools/default/buckets/beer-sample/statsDirectory","nodeStatsListURI":"/pools/default/buckets/beer-sample/nodes"},"ddocs":{"uri":"/pools/default/buckets/beer-sample/ddocs"},"nodeLocator":"vbucket","fastWarmupSettings":false,"autoCompactionSettings":false,"uuid":"3a088dd60672ce16aea01c738ec96928","vBucketServerMap":{"hashAlgorithm":"CRC","numReplicas":1,"serverList":["192.168.60.100:11210"],"vBucketMap":[[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,
-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0
,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1],[0,-1]]},"replicaNumber":1,"threadsNumber":3,"quota":{"ram":104857600,"rawRAM":104857600},"basicStats":{"quotaPercentUsed":33.76667785644531,"opsPerSec":0.0,"diskFetches":0.0,"itemCount":7303,"diskUsed":50731634,"dataUsed":49454080,"memUsed":35406928},"bucketCapabilitiesVer":"","bucketCapabilities":["touch","couchapi"]}

Now this is not very readable so let’s reformat it using Python’s JSON library. I am only pasting the output that is useful for metric collection.

$ curl -s http://192.168.60.100:8091/pools/default/buckets/beer-sample | python -mjson.tool
{
    ...
    "basicStats": {
        "dataUsed": 49454080,
        "diskFetches": 0.0,
        "diskUsed": 50731634,
        "itemCount": 7303,
        "memUsed": 35406928,
        "opsPerSec": 0.0,
        "quotaPercentUsed": 33.76667785644531
    },
    "name": "beer-sample",
    "nodes": [
        {
            "clusterCompatibility": 131072,
            "clusterMembership": "active",
            "couchApiBase": "http://192.168.60.100:8092/beer-sample",
            "hostname": "192.168.60.100:8091",
            "interestingStats": {
                "cmd_get": 0.0,
                "couch_docs_actual_disk_size": 138325417,
                "couch_docs_data_size": 137479323,
                "couch_views_actual_disk_size": 637700,
                "couch_views_data_size": 616830,
                "curr_items": 7888,
                "curr_items_tot": 7889,
                "ep_bg_fetched": 0.0,
                "get_hits": 0.0,
                "mem_used": 99496472,
                "ops": 0.0,
                "vb_replica_curr_items": 1
            },
            "mcdMemoryAllocated": 1501,
            "mcdMemoryReserved": 1501,
            "memoryFree": 932651008,
            "memoryTotal": 1968685056,
            "os": "x86_64-unknown-linux-gnu",
            "otpNode": "ns_1@192.168.60.100",
            "ports": {
                "direct": 11210,
                "proxy": 11211
            },
            "replication": 0.0,
            "status": "healthy",
            "systemStats": {
                "cpu_utilization_rate": 18.0,
                "mem_free": 932651008,
                "mem_total": 1968685056,
                "swap_total": 855629824,
                "swap_used": 112218112
            },
            "thisNode": true,
            "uptime": "353144",
            "version": "2.2.0-837-rel-community"
        }
    ],
    "quota": {
        "ram": 104857600,
        "rawRAM": 104857600
    },
    ...
}

So what are interesting statistics to collect? The basicStats object sounds like a good candidate, as it contains keys like:

'diskUsed', 'memUsed', 'diskFetches', 'quotaPercentUsed', 'opsPerSec', 'dataUsed', 'itemCount'

All of those sound like great values to graph, so we will keep/collect them.
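As a quick check from the shell, you can pull a single basicStats value the same way the collector will (the value shown matches the earlier dump):

$ curl -s http://192.168.60.100:8091/pools/default/buckets/beer-sample \
  | python -c 'import sys, json; print(json.load(sys.stdin)["basicStats"]["memUsed"])'
35406928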

Then there is the quota object, showing ram which is useful to graph as well, so we keep this too.

Finally, there is nodes, which is an array. It is an array because it includes statistics for each node forming the cluster. If the bucket does not span more than one node, there will be a single entry in this array.

In my setup, the gamesim-sample bucket spans two virtual machines, hence ‘nodes’ contains two items in its array, one corresponding to each VM.

Below I show, side by side, the keys used for each member of the nodes array (note that this is for the gamesim-sample bucket):

nodes[0]                nodes[1]
====================    ====================
clusterCompatibility    clusterCompatibility 
clusterMembership       clusterMembership    
couchApiBase            couchApiBase         
hostname                hostname             
interestingStats        interestingStats     
mcdMemoryAllocated      mcdMemoryAllocated   
mcdMemoryReserved       mcdMemoryReserved    
memoryFree              memoryFree           
memoryTotal             memoryTotal          
os                      os                   
otpNode                 otpNode              
ports                   ports                
replication             replication          
status                  status               
systemStats             systemStats          
                        thisNode
uptime                  uptime               
version                 version

thisNode is a boolean that helps us understand which array member corresponds to the machine we are querying.

In this case I got those stats from:
http://192.168.60.100:8091/pools/default/buckets/gamesim-sample


data['nodes'][1]['thisNode']
True

To determine exactly which stats refer to which node, the couchApiBase key can be used for more detail:

data['nodes'][1]['couchApiBase']
u'http://192.168.60.100:8092/gamesim-sample'

data['nodes'][0]['couchApiBase']
u'http://192.168.60.101:8092/gamesim-sample'

This further confirms that nodes[0] refers to my second vm (192.168.60.101) and nodes[1] to the first vm.

Installing/Configuring the Couchbase collector on Diamond

Get the Couchbase Collector and copy it under:

/usr/share/diamond/collectors/couchbase_collector/couchbase_collector.py

Edit the Python file couchbase_collector.py and enter your host, port, bucket name ('path'), username, and password; mine looks like so:


...
class CouchBaseCollector(diamond.collector.Collector):

    def get_default_config(self):
        config = super(CouchBaseCollector, self).get_default_config()
        config.update({
            'host': 'localhost',
            'port': 8091,
            'path': 'beer-sample',
            'username': 'Administrator',
            'password': 'obfuscated'
        })
        return config

You will also need to create a config file under:

/etc/diamond/collectors/CouchBaseCollector.conf

With the contents:

$ cat CouchBaseCollector.conf
enabled = True

Cluster Metrics

The collector has the intelligence to present only the node statistics that are applicable to the node it polls.
For clustered Couchbase environments, every node will be running a Diamond collector of its own.

This is how Graphite presents the two nodes of the cluster, corresponding to my two VMs:

[Screenshot: Graphite presenting the stats of the two cluster nodes]
Categories: DBA Blogs

Pythian Named 2014 Global Outsourcing 100 Rising Star

Pythian Group - Wed, 2014-06-04 07:50

Pythian has once again been named a 2014 Global Outsourcing 100® Rising Star by the International Association of Outsourcing Professionals® (IAOP®). The list is an annual ranking of the world’s best outsourcing service providers and highlights on-the-rise companies in the outsourcing space.

“As companies continue to accumulate more data than ever before, IT teams struggle to manage all of that information in-house,” said Rob Hamel, Vice President of Service Delivery at Pythian. “More businesses are rethinking their internal data infrastructure management processes and rightsourcing to find flexible, cost-effective solutions to optimize performance, reduce IT burdens, and improve productivity. Pythian brings together top-tier talent, proven processes, and a customer-centric business model to help our clients preserve the wellness of their valuable data infrastructures.”

See the full press release here.

Categories: DBA Blogs