DBA Blogs

DataStax OpsCenter upgrade (4.1 to 5.1) for Cassandra – issue and resolution

Pythian Group - Tue, 2016-03-29 09:42

The Apache Cassandra cluster (version C* 1.2.16) that I support is monitored through DataStax OpsCenter, version 4.1.2. As part of the effort to improve the monitoring capability for this cluster, my team decided first to upgrade OpsCenter to version 5.1.4, the latest available version of OpsCenter compatible with Cassandra 1.2.16. The same OpsCenter instance is also used to monitor another cluster running DataStax Enterprise (DSE) 4.5.2 (which corresponds to Apache Cassandra version 2.0.10).

During the upgrade we ran into an issue for which we couldn't find any similar problem reported online. We'd like to use this post to document the problem we faced, along with the solution and the findings we made during the upgrade.

 

Problem Overview

The OpsCenter upgrade procedure itself is straightforward and well described in the DataStax OpsCenter documentation. After the OpsCenter upgrade, the OpsCenter web portal detects the mismatched version of the datastax-agents on all nodes. Choosing the “FixMe” option from the portal upgrades the datastax-agents to version 5.1.4 on all monitored nodes. After the datastax-agent upgrade, we addressed some datastax-agent configuration issues in the “address.yaml” file to reflect the changes brought by the new version of OpsCenter/datastax-agent.
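
For reference, the kind of settings we had to revisit in address.yaml look like the following (a hedged illustration only; the IP addresses are placeholders and the exact keys required depend on the environment):

          stomp_interface: "10.0.0.10"   # IP of the OpsCenter server the agent reports to
          hosts: ["10.0.0.21"]           # Cassandra node(s) the agent connects to locally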

After all this was done, we double checked the log files for OpsCenter and the datastax-agents. The OpsCenter log file was mostly clear, and the datastax-agent logs for the DSE 4.5.2 cluster were also clear, but the datastax-agent logs for the Cassandra 1.2.16 cluster were NOT. The corresponding OpsCenter web portal was not able to display Cassandra metrics for the C* 1.2.16 cluster.

In each of the datastax-agent log files in the C* 1.2.16 cluster, we saw a lot of repeating errors like the ones below:

          ERROR [async-dispatch-3] 2016-02-19 12:57:52,750 There was an error when attempting to load stored rollups.
          com.datastax.driver.core.exceptions.InvalidQueryException: Undefined name key in where clause ('key EQ '<… …>')
          at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
          at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:291)
          at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:205)
          at clojurewerkz.cassaforte.client$execute.invoke(client.clj:289)
          … …
          ERROR [cassandra-processor-1] 2016-02-19 13:00:02,295 Error when proccessing cassandra call
          com.datastax.driver.core.exceptions.InvalidQueryException: Unknown identifier key

 

Problem Analysis and Solution

The fact that the error showed up in the datastax-agent log files gave me a hint that the error might be related to the datastax-agent failing to write collected metrics into the OpsCenter tables. So as the first step of the analysis, I compared the schema of the “OpsCenter” keyspace between the two monitored clusters. Below is a comparison of two OpsCenter table definitions between the two clusters.

C* 1.2.16 Cluster

CREATE TABLE events (
  "KEY" blob,
  column1 blob,
  value blob,
  PRIMARY KEY ("KEY", column1)
);

CREATE TABLE events_timeline (
  "KEY" blob,
  column1 bigint,
  value blob,
  PRIMARY KEY ("KEY", column1)
);

DSE 4.5.2 Cluster

CREATE TABLE events (
  key text,
  action bigint,
  level bigint,
  success boolean,
  time bigint,
  PRIMARY KEY ((key))
);

CREATE TABLE events_timeline (
  key text,
  column1 bigint,
  value blob,
  PRIMARY KEY ((key), column1)
);

 

From this comparison, we can clearly see that the upgrade of OpsCenter and the datastax-agents to version 5.1.4 somehow did not migrate the OpsCenter schema properly for the C* 1.2.16 cluster. The theory is that the upgraded datastax-agent in the C* 1.2.16 cluster queries and updates Cassandra metrics in the OpsCenter tables in a way that assumes the OpsCenter schema used in the DSE 4.5.2 cluster. But the actual OpsCenter schema in the C* 1.2.16 cluster still has the old definition, thus causing the invalid query exceptions seen in the log files.

Once the problem is clear, the solution is straightforward. The steps are summarized below:

In C* 1.2.16 cluster,

  1. Take a snapshot of the OpsCenter keyspace on all nodes.
  2. Stop the DataStax agents on all nodes, so they won’t try to write metrics into the OpsCenter tables.
  3. Use CQL to drop the OpsCenter tables and re-create them, matching the OpsCenter schema of the DSE 4.5.2 cluster (a sketch of the commands follows this list). Make sure that all table properties are the same.
  4. Once the OpsCenter schema is recreated, start the DataStax agents on all nodes.
  5. Verify in the agent log files that the error messages are gone.
  6. Restart the OpsCenter service.
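
A minimal sketch of steps 1 and 3, assuming the keyspace is named "OpsCenter" and using the "events" table definition captured from the DSE 4.5.2 cluster above (the remaining OpsCenter tables are handled the same way, and all table properties must be copied as well):

          # on each node, before touching the schema (step 1)
          nodetool snapshot OpsCenter

          -- in cqlsh, for each mismatched table (step 3)
          DROP TABLE "OpsCenter".events;

          CREATE TABLE "OpsCenter".events (
            key text,
            action bigint,
            level bigint,
            success boolean,
            time bigint,
            PRIMARY KEY ((key))
          );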

 

After these steps, we double checked the log files for all of the datastax-agents and for OpsCenter, and confirmed that there were no more errors. The OpsCenter web portal was also able to display the Cassandra metrics properly.

Categories: DBA Blogs

Another SQL Profile to the rescue!

Bobby Durrett's DBA Blog - Mon, 2016-03-28 18:57

We have had problems with a set of databases over the past few weeks. Our team does not support these databases, but my director asked me to help. These are 11.2.0.1 Windows 64-bit Oracle databases running on Windows 2008. The incident reports said that the systems stop working and that the main symptom was the oracle.exe process using all the CPU. They were bouncing the database server when they saw this behavior, and it took about 30 minutes after the bounce for the CPU to go back down to normal. A Windows server colleague told me that at some point in the past a new version of virus software had apparently caused high CPU from the oracle.exe process.

At first I looked for some known bugs related to high CPU and virus checkers without much success. Then I got the idea of just checking query performance. After all, a poorly performing query can eat up a lot of CPU. These Windows boxes only have 2 cores, so it would not take many concurrently running high-CPU queries to max them out. So, I got an AWR report covering the last hour of a recent incident. This was the top SQL:

Top SQL

The top query, sql id 27d8x8p6139y6, stood out as very inefficient and all CPU. It seemed clear to me from this listing that the 2-core box had a heavy load and a lot of CPU queuing. %IO was zero but %CPU was only 31%. Most likely the rest of the time was spent waiting in the CPU queue.

I also looked at my sqlstat report to see which plans 27d8x8p6139y6 had used over time.

PLAN_HASH_VALUE END_INTERVAL_TIME     EXECUTIONS Elapsed ms
--------------- --------------------- ---------- -----------
     3067874494 07-MAR-16 09.00.50 PM        287  948.102286
     3067874494 07-MAR-16 10.00.03 PM        292  1021.68191
     3067874494 07-MAR-16 11.00.18 PM        244  1214.96161
     3067874494 08-MAR-16 12.00.32 AM        276  1306.16222
     3067874494 08-MAR-16 01.00.45 AM        183  1491.31307
      467860697 08-MAR-16 01.00.45 AM        125      .31948
      467860697 08-MAR-16 02.00.59 AM        285  .234073684
      467860697 08-MAR-16 03.00.12 AM        279  .214354839
      467860697 08-MAR-16 04.00.25 AM        246   .17147561
      467860697 08-MAR-16 05.00.39 AM         18        .192
     2868766721 13-MAR-16 06.00.55 PM         89    159259.9
     3067874494 13-MAR-16 06.00.55 PM          8  854.384125
     2868766721 13-MAR-16 07.00.50 PM         70  1331837.56
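
For reference, output like the above can be produced with a query against the AWR views along these lines (a sketch only, not necessarily the author's exact sqlstat script; elapsed_time_delta is in microseconds, so it is divided down to milliseconds per execution):

select ss.plan_hash_value,
       sn.end_interval_time,
       ss.executions_delta executions,
       ss.elapsed_time_delta/1000/decode(ss.executions_delta,0,1,ss.executions_delta) elapsed_ms
  from dba_hist_sqlstat ss, dba_hist_snapshot sn
 where ss.sql_id = '27d8x8p6139y6'
   and ss.snap_id = sn.snap_id
   and ss.dbid = sn.dbid
   and ss.instance_number = sn.instance_number
 order by sn.end_interval_time;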

Plan 2868766721 seemed terrible but plan 467860697 seemed great.

Our group doesn’t support these databases so I am not going to dig into how the application gathers statistics, what indexes it uses, or how the vendor designed the application. But, it seems possible that forcing the good plan with a SQL Profile could resolve this issue without having any access to the application or understanding of its design.

But, before plunging headlong into the use of a SQL Profile I looked at the plan and the SQL text.  I have edited these to hide any proprietary details:

SELECT T.*
    FROM TAB_MYTABLE1 T,
         TAB_MYTABLELNG A,
         TAB_MYTABLE1 PIR_T,
         TAB_MYTABLELNG PIR_A
   WHERE     A.MYTABLELNG_ID = T.MYTABLELNG_ID
         AND A.ASSIGNED_TO = :B1
         AND A.ACTIVE_FL = 1
         AND T.COMPLETE_FL = 0
         AND T.SHORTED_FL = 0
         AND PIR_T.MYTABLE1_ID = T.PIR_MYTABLE1_ID
         AND ((PIR_A.FLOATING_PIR_FL = 1 
               AND PIR_T.COMPLETE_FL = 1)
              OR PIR_T.QTY_PICKED IS NOT NULL)
         AND PIR_A.MYTABLELNG_ID = PIR_T.MYTABLELNG_ID
         AND PIR_A.ASSIGNED_TO IS NULL
ORDER BY T.MYTABLE1_ID

The key thing I noticed is that there was only one bind variable. The innermost part of the good plan uses an index on the column that the query equates with the bind variable. The rest of the plan is a nice nested loops plan with range and unique index scans. I see plans in this format in OLTP queries where you are looking up small numbers of rows using an index and join to related tables.

-----------------------------------------------------------------
Id | Operation                        | Name                     
-----------------------------------------------------------------
 0 | SELECT STATEMENT                 |                          
 1 |  SORT ORDER BY                   |                          
 2 |   NESTED LOOPS                   |                          
 3 |    NESTED LOOPS                  |                          
 4 |     NESTED LOOPS                 |                          
 5 |      NESTED LOOPS                |                          
 6 |       TABLE ACCESS BY INDEX ROWID| TAB_MYTABLELNG           
 7 |        INDEX RANGE SCAN          | AK_MYTABLELNG_BY_USER    
 8 |       TABLE ACCESS BY INDEX ROWID| TAB_MYTABLE1             
 9 |        INDEX RANGE SCAN          | AK_MYTABLE1_BY_MYTABLELNG
10 |      TABLE ACCESS BY INDEX ROWID | TAB_MYTABLE1             
11 |       INDEX UNIQUE SCAN          | PK_MYTABLE1              
12 |     INDEX UNIQUE SCAN            | PK_MYTABLELNG            
13 |    TABLE ACCESS BY INDEX ROWID   | TAB_MYTABLELNG           
-----------------------------------------------------------------

The bad plan had a gross Cartesian merge join:

Plan hash value: 2868766721

----------------------------------------------------------------
Id | Operation                       | Name                     
----------------------------------------------------------------
 0 | SELECT STATEMENT                |                          
 1 |  NESTED LOOPS                   |                          
 2 |   NESTED LOOPS                  |                          
 3 |    MERGE JOIN CARTESIAN         |                          
 4 |     TABLE ACCESS BY INDEX ROWID | TAB_MYTABLE1             
 5 |      INDEX FULL SCAN            | PK_MYTABLE1              
 6 |     BUFFER SORT                 |                          
 7 |      TABLE ACCESS BY INDEX ROWID| TAB_MYTABLELNG           
 8 |       INDEX RANGE SCAN          | AK_MYTABLELNG_BY_USER    
 9 |    TABLE ACCESS BY INDEX ROWID  | TAB_MYTABLE1             
10 |     INDEX RANGE SCAN            | AK_MYTABLE1_BY_MYTABLELNG
11 |   TABLE ACCESS BY INDEX ROWID   | TAB_MYTABLELNG           
12 |    INDEX RANGE SCAN             | AK_MYTABLELNG_BY_USER    
----------------------------------------------------------------

Reviewing the SQL made me believe that there was a good chance that a SQL Profile forcing the good plan would resolve the issue. Sure, there could be some weird combination of data and bind variable values that make the bad plan the better one. But, given that this was a simple transactional application it seems most likely that the straightforward nested loops with index on the only bind variable plan would be best.

We used a SQL Profile to force the good plan on four servers, and so far the SQL Profile has resolved the issues. I’m not saying that forcing a plan using a SQL Profile is the only or even the best way to resolve query performance issues. But this was a good example of where a SQL Profile makes sense. If modifying the application, statistics, parameters, and schema is not possible, then a SQL Profile can come to your rescue in a heartbeat.
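
For readers who have not pinned a plan with a SQL Profile before, one widely used approach (not necessarily the exact method used here) is Oracle Support's coe_xfr_sql_profile.sql script, which ships with SQLT. A rough sketch using the sql_id and the good plan hash value from this incident:

-- generate a profile-creation script for sql_id 27d8x8p6139y6 and plan 467860697
SQL> @coe_xfr_sql_profile.sql 27d8x8p6139y6 467860697
-- the script writes another script named after the sql_id and plan hash value;
-- running that generated script creates the SQL Profile that forces the good plan
SQL> @coe_xfr_sql_profile_27d8x8p6139y6_467860697.sql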

Bobby

Categories: DBA Blogs

Amazon Database Migration Service – first try

Pythian Group - Mon, 2016-03-28 09:04

Recently, while checking Amazon Web Services, I stumbled upon a service I hadn’t tested before: Database Migration Service (DMS). I read the documentation and checked other resources, and found a good, fresh blog post, AWS Database Migration Service, written by Jeff Barr. It was really interesting, and I decided to give the service a try.

I created an Oracle RDS instance on AWS as the target, and an Oracle Linux box on Azure with Oracle 12c EE as the source database for the migration. The source database SID was “test” and the destination was “orcl”. I created tablespaces and users named “testmig” on both sides and created a table on the source database. Initially I loaded 1000000 records into the table and created an index. The schema on the destination database was empty. I also enabled archivelog mode on the source database.

Creating user and table on the source:

test> create user testmig identified by welcome1 default tablespace testmig temporary tablespace temp;

User created.

test> grant connect,resource to testmig;

Grant succeeded.

test> conn test

test> create table test_tab_1 (pk_id number, rnd_str_1 varchar2(15),use_date date,rnd_str_2 varchar2(15), acc_date date);

Table created.

test>

Loading the data:

[oracle@oradb1 patchdepot]$ head test_tab_1.dat
340,MLBO07LV,10/30/13 15:58:04,NABCFVAQ,12/08/17 18:22:48
341,M48R4107,12/09/13 12:30:41,ACA79WO8,12/15/16 08:13:40
342,KARMF0ZQ,04/21/14 08:53:33,JE6SOE0K,06/18/17 07:12:29
343,8NTSYDIS,11/09/14 23:41:48,FBJXWQNX,08/28/15 20:47:39
344,0LVKBJ8T,09/28/12 06:52:05,VBX3FWQG,10/28/15 06:10:42
345,Z22W1QKW,06/06/13 11:14:32,26BCTA9L,08/21/17 08:35:15
346,CGGQO9AL,08/27/14 02:37:41,15SRXZSJ,11/09/17 19:58:58
347,WKHINIUK,07/02/13 14:31:53,65WSGVDG,08/02/15 10:45:50
348,HAO9X6IC,11/17/12 12:08:18,MUQ98ESS,12/03/15 20:37:20
349,D613XT63,01/24/15 16:49:11,3ELW98N2,07/03/16 11:03:40
[oracle@oradb1 patchdepot]$ export NLS_DATE_FORMAT="MM/DD/YY HH24:MI:SS"
[oracle@oradb1 patchdepot]$ sqlldr userid=testmig table=test_tab_1
Password:

SQL*Loader: Release 12.1.0.1.0 - Production on Wed Mar 16 13:07:50 2016

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Express Mode Load, Table: TEST_TAB_1
Path used:      External Table, DEGREE_OF_PARALLELISM=AUTO

Table TEST_TAB_1:
  100000 Rows successfully loaded.

Check the log files:
  test_tab_1.log
  test_tab_1_%p.log_xt
for more information about the load.
[oracle@oradb1 patchdepot]$

On the target system:

rdsorcl> create tablespace testmig;

Tablespace TESTMIG created.

rdsorcl> create user testmig identified by welcome1 default tablespace testmig;

User TESTMIG created.

rdsorcl>

In the blog post mentioned above, the migration was done without replication, and I was curious to test it with some kind of ongoing DML activity on the source database. I set up a Linux box with JMeter and started a load at a pace of about 15 transactions per second. The transactions were inserts and updates on the created table.

Everything was working fine so far, so I switched to the Database Migration Service on AWS. The service has a pretty easy and clear workflow. You just need to push the “Create migration” button and it will guide you through the process. In general, you need to create a replication instance, endpoints for the source and target, and a task to start the initial load and replication.
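
I used the console for the whole exercise, but the same three objects can also be created from the AWS CLI. A rough sketch (identifiers, ARNs, credentials and the table-mappings file are placeholders, and environment-specific options such as networking are omitted):

aws dms create-replication-instance \
    --replication-instance-identifier test-repl-instance \
    --replication-instance-class dms.t2.medium \
    --allocated-storage 50

aws dms create-endpoint \
    --endpoint-identifier oracle-source \
    --endpoint-type source --engine-name oracle \
    --username testmig --password welcome1 \
    --server-name <source-ip> --port 1521 --database-name test

aws dms create-replication-task \
    --replication-task-identifier testmig-task \
    --source-endpoint-arn <source-endpoint-arn> \
    --target-endpoint-arn <target-endpoint-arn> \
    --replication-instance-arn <replication-instance-arn> \
    --migration-type full-load-and-cdc \
    --table-mappings file://table-mappings.json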

I created a replication instance and, while it was being created (it took some time), was asked to set up endpoints for the source and target. I hit the first issue when I tried to use a DNS name for my Azure instance. The test connection was failing with a timeout, and it was not clear whether the problem was the connection or DNS. The issue was solved by providing the IP address instead of the domain name for my Azure instance.
The test for the target endpoint failed with the same timeout, but the reason was totally different. It was not DNS, but rather a connection issue. At first, I couldn’t figure it out because I was able to connect to my RDS instance from my laptop using the server name and port, yet the test endpoint in DMS was not working. Eventually I figured out that the problem was with the security group for the RDS instance. By default the AWS RDS instance was created with a security group allowing connections from outside but somehow restricting connections from DMS. I changed the security group for the AWS RDS instance to “default” and was able to successfully test the endpoint in DMS.
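
Switching the RDS instance to the “default” security group worked for me; a more targeted alternative (a sketch with placeholder group IDs) is to add an inbound rule to the RDS security group allowing the replication instance's security group to reach the listener port:

aws ec2 authorize-security-group-ingress \
    --group-id <rds-security-group-id> \
    --protocol tcp --port 1521 \
    --source-group <dms-replication-instance-security-group-id>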

The next step was to create a task. I created a task with an initial load and ongoing replication for my testmig schema. The task was supposed to drop any tables on the target (you can choose truncate instead if you want), create the objects, move the data, and keep replicating until cutover day, when you will be able to switch your applications to the new database. It tells you that you need to set up supplemental logging for replication. Unfortunately, it doesn’t tell you what kind of supplemental logging you have to set up.

So, I enabled minimal data supplemental logging on my Azure test instance.

test> alter database add supplemental log data;
Database add SUPPLEMENTAL altered.

test> exec dbms_capture_adm.prepare_table_instantiation('testmig.test_tab_1','keys')

PL/SQL procedure successfully completed.

test>

It was not enough, and I got an error. By default you don't get logging for your task, only configuration and statistics about replicated and loaded objects. As a result, if you get an error, it is not clear where to look. I enabled supplemental logging for the primary key on my replicated table and recreated the task, this time checking the logging checkbox. I got the error again, but now I had a log and was able to see what was causing the issue.

2016-03-16T19:41:11 [SOURCE_CAPTURE  ]I:  Oracle compatibility version is 12.1.0.0.0  (oracle_endpoint_conn.c:86)
2016-03-16T19:41:11 [SOURCE_CAPTURE  ]I:  Oracle capture start time: now  (oracle_endpoint_capture.c:701)
2016-03-16T19:41:12 [SOURCE_CAPTURE  ]I:  New Log Miner boundaries in thread '1' : First REDO Sequence is '4', Last REDO Sequence is '4'  (oracdc_reader.c:589)
2016-03-16T19:41:18 [SOURCE_UNLOAD   ]W:  Supplemental logging is not defined for table with no key 'TESTMIG.TEST_TAB_1'  (oracle_endpoint_utils.c:831)
2016-03-16T19:41:18 [SOURCE_UNLOAD   ]E:  Supplemental logging for table 'TESTMIG.TEST_TAB_1' is not enabled properly [122310] Supplemental logging is not correct (oracle_endpoint_unload.c:245)
2016-03-16T19:41:18 [SOURCE_UNLOAD   ]I:  Unload finished for table 'TESTMIG'.'TEST_TAB_1' (Id = 1). 0 rows sent.  (streamcomponent.c:2567)
2016-03-16T19:41:18 [SOURCE_UNLOAD   ]E:  Failed to init unloading table 'TESTMIG'.'TEST_TAB_1' [122310] Supplemental logging is not correct (oracle_endpoint_unload.c:441)

It looked like my supplemental logging was still not enough. So, I added supplemental logging for all columns and for the entire schema testmig. I recreated the task and started it again.

test> exec dbms_capture_adm.prepare_table_instantiation('testmig.test_tab_1','all');
PL/SQL procedure successfully completed.

test> exec dbms_capture_adm.prepare_schema_instantiation('testmig');
PL/SQL procedure successfully completed.

test>

This time it worked fine and was able to perform the initial load.

2016-03-16T19:49:19 [SOURCE_CAPTURE  ]I:  Oracle capture start time: now  (oracle_endpoint_capture.c:701)
2016-03-16T19:49:20 [SOURCE_CAPTURE  ]I:  New Log Miner boundaries in thread '1' : First REDO Sequence is '4', Last REDO Sequence is '4'  (oracdc_reader.c:589)
2016-03-16T19:49:31 [SOURCE_UNLOAD   ]I:  Unload finished for table 'TESTMIG'.'TEST_TAB_1' (Id = 1). 100723 rows sent.  (streamcomponent.c:2567)
2016-03-16T19:49:31 [TARGET_LOAD     ]I:  Load finished for table 'TESTMIG'.'TEST_TAB_1' (Id = 1). 100723 rows received. 0 rows skipped. Volume transfered 45929688  (streamcomponent.c:2787)

What about ongoing changes? Yes, it kept replication going and the tables stayed in sync. The replication lag in my case was minimal, but note that it was just one table with a low transaction rate. At the end I switched my load to the AWS RDS database, then stopped and deleted the DMS task. The migration was completed. I compared the data in the tables by running a couple of simple checks on counts and rows, and by running a “minus” query between the two tables. Everything was fine.

rdsorcl> select max(pk_id) from testmig.test_tab_1;

      MAX(PK_ID)
----------------
         1000843

rdsorcl> select * from testmig.test_tab_1 where pk_id=1000843;

           PK_ID RND_STR_1       USE_DATE                    RND_STR_2       ACC_DATE
---------------- --------------- --------------------------- --------------- ---------------------------
         1000843 OUHRTHQ8        02/11/13 07:27:44           NFIAODAU        05/07/15 03:49:29

rdsorcl>

----------------

test> select max(pk_id) from testmig.test_tab_1;

      MAX(PK_ID)
----------------
         1000843

test> select * from testmig.test_tab_1 where pk_id=1000843;

           PK_ID RND_STR_1       USE_DATE                    RND_STR_2       ACC_DATE
---------------- --------------- --------------------------- --------------- ---------------------------
         1000843 OUHRTHQ8        02/11/13 07:27:44           NFIAODAU        05/07/15 03:49:29

test>

test> select count(*) from (select * from test_tab_1 minus select * from test_tab_1@rdsorcl);

        COUNT(*)
----------------
               0

test>

A summary of DMS:

    • You may need to adjust security groups for the target RDS or EC2 systems; otherwise connections may be blocked.
    • It is better to use an IP address for the source endpoint, since DNS resolution may not be reliable.
    • Enable logging when you create the task.
    • If you enable replication from an Oracle database, you have to set up full supplemental logging for the replicated schemas on your source system.
    • Basic knowledge of replication and how it works is required to understand and fix errors.

Next time I will try heterogeneous replication from MySQL to Oracle and then the other way around.

Categories: DBA Blogs

MySQL Memory Usage Docs Get a FaceLift

Pythian Group - Mon, 2016-03-28 08:59

The MySQL Documentation team recently gave these docs on how MySQL uses memory a much needed face-lift. The new page provides a much clearer overview of how MySQL allocates memory, and provides many helpful links for digging deeper.

For instance, if you weren’t aware of how Performance Schema memory utilization changed in 5.7, there is this helpful paragraph (emphasis mine):

The MySQL Performance Schema is a feature for monitoring MySQL server execution at a low level. As of MySQL 5.7, the Performance Schema dynamically allocates memory incrementally, scaling its memory use to actual server load, instead of allocating required memory during server startup. Once memory is allocated, it is not freed until the server is restarted. For more information, see Section 22.14, “The Performance Schema Memory-Allocation Model”.

Therefore, if you are starting a new project on MySQL 5.7, or upgrading an existing environment, and you have Performance Schema enabled, you might see your memory footprint rising inexplicably. According to the linked Performance Schema Memory-Allocation Model documentation, one reason might be the auto-scaling Performance Schema variables:


performance_schema_accounts_size
performance_schema_hosts_size
performance_schema_max_cond_instances
performance_schema_max_file_instances
performance_schema_max_index_stat
performance_schema_max_metadata_locks
performance_schema_max_mutex_instances
performance_schema_max_prepared_statements_instances
performance_schema_max_program_instances
performance_schema_max_rwlock_instances
performance_schema_max_socket_instances
performance_schema_max_table_handles
performance_schema_max_table_instances
performance_schema_max_table_lock_stat
performance_schema_max_thread_instances
performance_schema_users_size

Of course, you can limit each variable by supplying a value to prevent autoscaling beyond a point.
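
For example, to cap one of these instead of letting it autoscale (in 5.7 the default of -1 means autoscale/autosize), you could set an explicit value in my.cnf; the number below is purely illustrative:

[mysqld]
performance_schema_max_table_instances = 5000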

There might be some areas missing, such as explicit MEMORY tables, but by and large it is a vast improvement.

Other honorable mentions where I’ve seen documentation updates include Limiting memory utilization of range optimizations and Configuring innodb_buffer_pool_size.

Happy reading!

Categories: DBA Blogs

Log Buffer #466: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2016-03-28 08:47

This Log Buffer Edition covers a weekly round-up of blog posts from Oracle, SQL Server and MySQL.

Oracle:

The Universal Theme introduced with APEX 5 is immensely good looking and comes with a powerful Theme Roller to customize it.

The implementation of Ksplice has been greatly simplified. Now you just need to register your system(s) with Unbreakable Linux Network (ULN), subscribe to the appropriate Ksplice channel, use the yum command to install the uptrack package, and perform any custom configuration. Your systems will be automatically updated with the latest kernel and user space patches.

Every business book you read talks about delegation. It’s a core requirement for successful managers: surround yourself with good people, delegate authority and responsibility to them, and get out of their way.

Accelerating SQL Queries that Span Hadoop and Oracle Database

Oracle Big Data SQL 3.0 adds support for Hortonworks Data Platform and commodity clusters

SQL Server:

Instant File Initialization : Impact During Setup

Enumerate Windows Group Members

How to execute an SSIS package from the command line or a batch file

When AUTO_UPDATE_STATISTICS Doesn’t Happen

SQL Server Table Smells

MySQL:

MySQL replication primer with pt-table-checksum / pt-table-sync, part 2

How do you dig down into the JSON data, say like in comments on a blog post?

Percona XtraBackup 2.3.4 is now available

Connection timeout parameters in MySQL

What have we learnt in two decades of MySQL?

Categories: DBA Blogs

Links for 2016-03-27 [del.icio.us]

Categories: DBA Blogs

Create a PrimaryKey on existing table without unique values

Learn DB Concepts with me... - Mon, 2016-03-28 00:25
Let's assume that there is a table ABC already existing in the database, and you want to add an additional column populated with unique values to use as the primary key.


SQL to create table ABC:

  CREATE TABLE "ABC"
   (    "USERNAME" VARCHAR2(30 BYTE) NOT NULL ENABLE,
    "USER_ID" NUMBER NOT NULL ENABLE,
    "CREATED" DATE NOT NULL ENABLE )
   TABLESPACE "USERS" ;

Now we can add an additional column ID, which will be populated with unique values for the primary key.

alter table abc add(ID NUMBER);

We will now create a sequence and use its values to populate the new ID column in table ABC:

CREATE SEQUENCE SEQ_ID
START WITH 1
INCREMENT BY 1
MAXVALUE 999999
MINVALUE 1
NOCYCLE;

Now populate the new ID column with unique values using the SQL below:
UPDATE abc SET ID = SEQ_ID.NEXTVAL;
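
Before adding the constraint, it is worth a quick sanity check (not part of the original steps, just a suggested verification) that every row received a value, since a primary key column cannot contain NULLs:

SELECT COUNT(*) FROM abc WHERE id IS NULL;
-- expected result: 0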


Now you can add a unique constraint or a primary key constraint to the table, so that it won't accept any duplicate values for this column.

alter table abc add primary key (ID);
Categories: DBA Blogs

Partner Webcast – Oracle Java Cloud Service: Getting Started with

Oracle Java Cloud Service is a complete platform and infrastructure cloud solution for building, deploying, and managing Java EE applications. Your Businesses can maximize productivity and...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Now an OCP 12c

Hemant K Chitale - Sun, 2016-03-27 10:59
I have upgraded my Oracle Database Certified Professional status from 11g to 12c, having passed the 1Z0-060 Exam last Monday.

This is the list of material I used :

1.  Documentation :  Oracle 12c Database Documentation.  Not just the New Features Guide but a large number of pages spread throughout the documentation.  I went through *selected* chapters / pages of the Administrator's Guide, Reference Guide, PLSQL Package Guide, Developers Guide and Backup and Recovery Guide

2.  Books :
a.  Oracle Database 12c New Features  by Robert G. Freeman  (Oracle Press)
b.  OCP Upgrade to Oracle Database 12c Exam Guide  by Sam R. Alapati  (Oracle Press)
c.  Study Guide for 1Z0-060 Upgrade to Oracle Database 12c  by Matthew Morris (Self-Published as Oracle Certification Prep)

3.  FlashCards
a.  FlashCards with Test Software for 1Z0-060 at https://www.selftestsoftware.com
b.  FlashCards OCPFlash1Z0-060 on Google PlayStore  by Matthew Morris

4.  Practice Tests
a.  Practice Tests and Test Software for 1Z0-060 at https://www.selftestsoftware.com
b.  Oracle Certification Prep Practice Test for 1Z0-060  at http://www.oraclecertificationprep.com

5.  Links to other Resources :  http://www.oraclecertificationprep.com and  www.oracle-base.com


Note : This does NOT mean that I didn't have practice environments.  The books alone aren't sufficient.   I created three practice environments :
a.  Fresh install of Oracle Linux, RDBMS 12.1.0.1 and creation of a NonCDB database on a filesystem
b.  Downloaded Oracle Virtual Box PreBuilt VM with 12.1.0.2 and a MultiTenant Database
c.  Fresh install of Oracle Linux, creation of disks for ASM, install of Grid Infrastructure and RDBMS 12.1.0.2 and creation of a NonCDB database on ASM
.
.
.

Categories: DBA Blogs

#Oracle University Expert Summit Berlin 2016

The Oracle Instructor - Sun, 2016-03-27 06:20

Join us in Berlin, 18th – 20th April. The event will take place at the Adlon Hotel with Jonathan Lewis, Pete Finnigan, Christian Antognini, Javier de la Torre Medina and myself as speakers. We have over 70 enrollments already, so take action to secure your seat in time!

Expert Summit Berlin 2016

My topic will be Rolling Upgrade from 11g to 12c, with a special focus on doing it with Transient Logical Standby. In a live demonstration I will start with a Primary and a Physical Standby Database both running 11.2.0.4 in Maximum Availability protection mode using the Data Guard Broker. This is likely one of the most common setups today. We will then see how to upgrade this configuration to 12.1.0.2 with minimum downtime.

When I did this seminar before during another Expert Summit in Dubai, I was still using 11.2.0.1 as the initial release, then upgrading it to 11.2.0.3. It took me some time and effort to update my demo environment and the lessons to cover more recent and meanwhile more relevant versions. Hope to see you there:-)


Categories: DBA Blogs

Compression -- 5 : OLTP Compression

Hemant K Chitale - Sat, 2016-03-26 10:03
Taking the test case from my first Blog Post on compression, is there any way to support OLTP?
As demonstrated in the first Blog Post and here, once the table undergoes DML, its size changes.

PDB1@ORCL> select segment_name, segment_type, bytes/1048576
2 from user_segments
3 where segment_name in ('SOURCE_DATA','COMPRESSED_1')
4 /

SEGMENT_NAME SEGMENT_TYPE BYTES/1048576
------------------------------ ------------------ -------------
COMPRESSED_1 TABLE 12
SOURCE_DATA TABLE 49

PDB1@ORCL>
PDB1@ORCL> update compressed_1
2 set owner=owner;

364496 rows updated.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_1';

BYTES/1048576
-------------
22

PDB1@ORCL>
PDB1@ORCL> update compressed_1
2 set owner=substr(owner||owner,1,30);

364496 rows updated.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_1';

BYTES/1048576
-------------
58

PDB1@ORCL>

So, I set up another table that supports OLTP compression.  This requires the Advanced Compression licence.  Make sure that you have purchased the licence!

PDB1@ORCL> create table compressed_2 tablespace hemant COMPRESS FOR OLTP
2 as select * from source_data where 1=2;

Table created.

PDB1@ORCL> insert /*+ APPEND */ into compressed_2
2 select * from source_data;

364496 rows created.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_2';

BYTES/1048576
-------------
13

PDB1@ORCL> update compressed_2
2 set owner=owner;

364496 rows updated.

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_2';

BYTES/1048576
-------------
13

PDB1@ORCL> update compressed_2
2 set owner=substr(owner||owner,1,30);

364496 rows updated.

PDB1@ORCL> select bytes/1048576
2 from user_segments
3 where segment_name = 'COMPRESSED_2';

BYTES/1048576
-------------
29

PDB1@ORCL>
PDB1@ORCL> select table_name, pct_free, compression, compress_for
2 from user_tables
3 where table_name like 'COMPRESS%'
4 order by 1;

TABLE_NAME PCT_FREE COMPRESS
------------------------------ ---------- --------
COMPRESS_FOR
------------------------------
COMPRESSED_1 0 ENABLED
BASIC

COMPRESSED_2 10 ENABLED
ADVANCED


PDB1@ORCL>


Note that the initial size of COMPRESSED_2 is slightly larger because it starts with PCT_FREE=10.
The UPDATEs do seem to take longer to run.  COMPRESS FOR OLTP preserves PCT_FREE at 10, but it does handle UPDATEs better than BASIC Compression.  However, if you have no subsequent UPDATEs to the data, BASIC Compression (which does not need an additional licence when running Enterprise Edition) will suffice.

I urge you to *TEST* COMPRESS FOR OLTP with the different types of UPDATE operations that you actually have in your live database before you choose to implement it.

In 12c, COMPRESS FOR OLTP is called ROW STORE COMPRESS ADVANCED.  I presented the 11g style COMPRESS FOR OLTP syntax so that you could use it in 11g.
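
For reference, the equivalent 12c-style DDL for the demo table above would be (same table and tablespace names as in the example):

PDB1@ORCL> create table compressed_2 tablespace hemant ROW STORE COMPRESS ADVANCED
  2  as select * from source_data where 1=2;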
.
.
.


Categories: DBA Blogs

Links for 2016-03-24 [del.icio.us]

Categories: DBA Blogs

ORA-00837: Specified value of MEMORY_TARGET greater than MEMORY_MAX_TARGET

Learn DB Concepts with me... - Fri, 2016-03-25 00:08
In this scenario I am trying to increase the value of the parameter memory_max_target. My initial memory_max_target is 804M, and I want to increase it to 900M.

SQL> show parameter sga

NAME     TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga     boolean FALSE
pre_page_sga     boolean FALSE
sga_max_size     big integer 804M
sga_target     big integer 0

SQL> show parameter max_target

NAME     TYPE VALUE
------------------------------------ ----------- ------------------------------
memory_max_target     big integer 804M

SQL> show parameter memory

NAME     TYPE VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address     integer 0
memory_max_target     big integer 804M
memory_target     big integer 804M
shared_memory_address     integer 0

SQL> alter system set memory_max_target=900 scope=spfile;

System altered.

SQL> show parameter memory;

NAME     TYPE VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address     integer 0
memory_max_target     big integer 804M
memory_target     big integer 804M
shared_memory_address     integer 0

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup;
ORA-00837: Specified value of MEMORY_TARGET greater than MEMORY_MAX_TARGET
SQL> startup mount;
ORA-00837: Specified value of MEMORY_TARGET greater than MEMORY_MAX_TARGET
SQL> startup nomount;
ORA-00837: Specified value of MEMORY_TARGET greater than MEMORY_MAX_TARGET

Since we can't log in to the DB to check the value that was set, let's create a pfile from the spfile and check the actual value.

SQL> create pfile from spfile;

File created.

[oracle@Linux01 ~]$ cd $ORACLE_HOME/dbs

[oracle@Linux01 dbs]$ ls -ll

[oracle@Linux01 dbs]$ vi initDB11G.ora


Haha .. here is the problem in my case.




*********************************************************************************
In my case the problem is that I didn't specify the MEMORY_MAX_TARGET value in MB: without a size suffix the value is interpreted as bytes, leaving MEMORY_MAX_TARGET far smaller than MEMORY_TARGET.
Changing the value to 900M did the trick.
*********************************************************************************
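
For illustration, the same fix could have been made in one step by including the unit in the original command:

SQL> alter system set memory_max_target=900M scope=spfile;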




[oracle@Linux01 dbs]$ sqlplus  /"AS sysdba"

SQL*Plus: Release 11.2.0.1.0 Production on Thu Mar 24 23:46:18 2016

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to an idle instance.

The restart below is not strictly needed, but since I want the instance to run from the spfile, I did it:

SQL> startup pfile='$ORACLE_HOME/dbs/initDB11G.ora';
ORACLE instance started.

Total System Global Area  939495424 bytes
Fixed Size    2218952 bytes
Variable Size  675284024 bytes
Database Buffers  255852544 bytes
Redo Buffers    6139904 bytes
Database mounted.
Database opened.
SQL> create spfile from pfile;

File created.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.

Total System Global Area  939495424 bytes
Fixed Size    2218952 bytes
Variable Size  675284024 bytes
Database Buffers  255852544 bytes
Redo Buffers    6139904 bytes
Database mounted.
Database opened.
SQL>


SQL> show parameter memory;

NAME     TYPE VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address     integer 0
memory_max_target     big integer 900M
memory_target     big integer 800M
shared_memory_address     integer 0
Categories: DBA Blogs

To apply or not to apply that Cumulative Update (CU)

Pythian Group - Thu, 2016-03-24 12:56

Today the SQL Server Engineering team posted on their blog an important shift in their recommendations regarding applying Cumulative Updates (often referred to as CUs). You can find it here.

About 4 months ago we had an internal debate regarding the best patch strategy and I noticed that our SQL Server DBAs were divided on the best approach.

Some insisted that installing CUs as they were released was the best practice, while others insisted that you should only patch if you needed the Hotfix. I don’t know what spurred it but I saw other discussions on the subject pop up in the community a few days later.

Throughout my career, I’ve been torn on the best strategy myself. I like to keep my systems up to date but I’d always taken the approach that if you needed the hotfix, then with proper testing you should apply a CU. The release of a CU has never been the trigger for me to patch all my SQL Servers in any other occasion – except – when a lot of time had passed between Service Packs which did happen.

I think that strategy of waiting a long time before applying a service pack is a flawed one and don’t recommend it. I don’t think it’s a good idea to be “one release behind” or wait a year. That said, as a career DBA I don’t think I’ll rush out and apply the CU unless it’s fixing something. If I have the cycles, I may test it early on, but I’ll probably wait a month or two and see what the community has to say about it before apply it to production.

I predict that you’ll see Microsoft (and other vendors) move away from large service packs as they move into a more agile approach to their own software. I suspect it won’t be long before we see a formal cancellation of large releases. This is all the more reason for us to ensure we have automation in place for testing and deployment, so that the release of an update isn’t a significant topic in our systems-planning meetings.

Do you apply CUs right away or delay? What’s your patch-strategy?

Categories: DBA Blogs

Big Data SQL 3.0 Expanded Across All Platforms

The unprecedented explosion in data that can be made useful to enterprises – from the Internet of Things, to the social streams of global customer bases – can create tremendous value. However, with...

We share our skills to maximize your revenue!
Categories: DBA Blogs

SQLSERVER QUERIES - SQLSERVER2015

Learn DB Concepts with me... - Wed, 2016-03-23 16:02
SELECT TOP 1000 [FNAME]
      ,[LNAME]
      ,[ID]
  FROM [TESTDB].[dbo].[USERS]

FNAME    LNAME    ID
Arvind    Reddy    1
Ravi    Reddy    2
Tom        Shawn    3


SELECT TOP 1000 [ORDER_ID]
      ,[USER_ID]
      ,[ORDER_INFO]
      ,[ORDER_AMT]
  FROM [TESTDB2].[dbo].[USER_ORDERS]
 
ORDER_ID    USER_ID    ORDER_INFO    ORDER_AMT
9001            1        BOOKS        10
9002            2        SHOES        20
 

SELECT A.[ORDER_ID]
      ,A.[USER_ID]
      ,A.[ORDER_INFO]
      ,A.[ORDER_AMT],B.ID,A.USER_ID
  FROM [TESTDB2].[dbo].[USER_ORDERS] A ,[TESTDB].[dbo].[USERS] B where A.USER_ID=B.ID ;
 
ORDER_ID    USER_ID    ORDER_INFO    ORDER_AMT    ID    USER_ID
9001            1        BOOKS        10        1        1
9002            2        SHOES        20        2        2
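
The same cross-database join can also be written with ANSI JOIN syntax, which is functionally equivalent to the comma-join above:

SELECT A.[ORDER_ID]
      ,A.[USER_ID]
      ,A.[ORDER_INFO]
      ,A.[ORDER_AMT]
      ,B.[ID]
  FROM [TESTDB2].[dbo].[USER_ORDERS] A
  JOIN [TESTDB].[dbo].[USERS] B
    ON A.USER_ID = B.ID;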


Address    Phone    City    User_id    ID
1234 test    2145524585    Hyd    1    1
52426 test    5246853652    Hyd    2    2
582 test st    5286943568    Bglr    3    3
768 TEST RD    56799976887    OMAHA    4    4
768 TEST RD    56799976887    OMAHA    5    5
Categories: DBA Blogs

Database maintenance tasks

DBA Scripts and Articles - Wed, 2016-03-23 15:46

The default maintenance window is not a good fit for every database. By default the maintenance window starts at 10 PM and runs for 4 hours on weekdays, and starts at 6 AM and lasts for 20 hours on weekends. A different window exists for each day of the week, and all the windows … Continue reading Database maintenance tasks

The post Database maintenance tasks appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

chr function and its values - CHR and ASCII values

Learn DB Concepts with me... - Wed, 2016-03-23 15:11
The chr function returns the character for a given integer code. There are 255 single-byte codes to display; standard ASCII defines the first 128, and the rest depend on the database character set.

SQL> select chr(65) as CHR from dual;

CHR

A
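
The ascii function is the inverse, returning the numeric code for a character:

SQL> select ascii('A') as ASCII_CODE from dual;

ASCII_CODE

65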

The code below prints all 255 character codes.

Sample code to check the values :

begin
  for i in 1..255 loop
    dbms_output.put_line( 'CHR('||i||')' ||'=='|| chr(i) );
  end loop;
end;
/

This output might actually differ based on the character set you chose while installing your database.

DBMS_OUTPUT :


   CHR(1)==
CHR(2)==
CHR(3)==
CHR(4)==
CHR(5)==
CHR(6)==
CHR(7)==
CHR(8)==
CHR(9)==   
CHR(10)==

CHR(11)==
CHR(12)==
CHR(13)==
CHR(14)==
CHR(15)==
CHR(16)==
CHR(17)==
CHR(18)==
CHR(19)==
CHR(20)==
CHR(21)==
CHR(22)==
CHR(23)==
CHR(24)==
CHR(25)==
CHR(26)==
CHR(27)==
CHR(28)==
CHR(29)==
CHR(30)==
CHR(31)==
CHR(32)==
CHR(33)==!
CHR(34)=="
CHR(35)==#
CHR(36)==$
CHR(37)==%
CHR(38)==&
CHR(39)=='
CHR(40)==(
CHR(41)==)
CHR(42)==*
CHR(43)==+
CHR(44)==,
CHR(45)==-
CHR(46)==.
CHR(47)==/
CHR(48)==0
CHR(49)==1
CHR(50)==2
CHR(51)==3
CHR(52)==4
CHR(53)==5
CHR(54)==6
CHR(55)==7
CHR(56)==8
CHR(57)==9
CHR(58)==:
CHR(59)==;
CHR(60)==<
CHR(61)===
CHR(62)==>
CHR(63)==?
CHR(64)==@
CHR(65)==A
CHR(66)==B
CHR(67)==C
CHR(68)==D
CHR(69)==E
CHR(70)==F
CHR(71)==G
CHR(72)==H
CHR(73)==I
CHR(74)==J
CHR(75)==K
CHR(76)==L
CHR(77)==M
CHR(78)==N
CHR(79)==O
CHR(80)==P
CHR(81)==Q
CHR(82)==R
CHR(83)==S
CHR(84)==T
CHR(85)==U
CHR(86)==V
CHR(87)==W
CHR(88)==X
CHR(89)==Y
CHR(90)==Z
CHR(91)==[
CHR(92)==\
CHR(93)==]
CHR(94)==^
CHR(95)==_
CHR(96)==`
CHR(97)==a
CHR(98)==b
CHR(99)==c
CHR(100)==d
CHR(101)==e
CHR(102)==f
CHR(103)==g
CHR(104)==h
CHR(105)==i
CHR(106)==j
CHR(107)==k
CHR(108)==l
CHR(109)==m
CHR(110)==n
CHR(111)==o
CHR(112)==p
CHR(113)==q
CHR(114)==r
CHR(115)==s
CHR(116)==t
CHR(117)==u
CHR(118)==v
CHR(119)==w
CHR(120)==x
CHR(121)==y
CHR(122)==z
CHR(123)=={
CHR(124)==|
CHR(125)==}
CHR(126)==~
CHR(127)==
CHR(128)==€
CHR(129)==
CHR(130)==‚
CHR(131)==ƒ
CHR(132)==„
CHR(133)==…
CHR(134)==†
CHR(135)==‡
CHR(136)==ˆ
CHR(137)==‰
CHR(138)==Š
CHR(139)==‹
CHR(140)==Œ
CHR(141)==
CHR(142)==Ž
CHR(143)==
CHR(144)==
CHR(145)==‘
CHR(146)==’
CHR(147)==“
CHR(148)==”
CHR(149)==•
CHR(150)==–
CHR(151)==—
CHR(152)==˜
CHR(153)==™
CHR(154)==š
CHR(155)==›
CHR(156)==œ
CHR(157)==
CHR(158)==ž
CHR(159)==Ÿ
CHR(160)==
CHR(161)==¡
CHR(162)==¢
CHR(163)==£
CHR(164)==¤
CHR(165)==¥
CHR(166)==¦
CHR(167)==§
CHR(168)==¨
CHR(169)==©
CHR(170)==ª
CHR(171)==«
CHR(172)==¬
CHR(173)==­
CHR(174)==®
CHR(175)==¯
CHR(176)==°
CHR(177)==±
CHR(178)==²
CHR(179)==³
CHR(180)==´
CHR(181)==µ
CHR(182)==¶
CHR(183)==·
CHR(184)==¸
CHR(185)==¹
CHR(186)==º
CHR(187)==»
CHR(188)==¼
CHR(189)==½
CHR(190)==¾
CHR(191)==¿
CHR(192)==À
CHR(193)==Á
CHR(194)==Â
CHR(195)==Ã
CHR(196)==Ä
CHR(197)==Å
CHR(198)==Æ
CHR(199)==Ç
CHR(200)==È
CHR(201)==É
CHR(202)==Ê
CHR(203)==Ë
CHR(204)==Ì
CHR(205)==Í
CHR(206)==Î
CHR(207)==Ï
CHR(208)==Ð
CHR(209)==Ñ
CHR(210)==Ò
CHR(211)==Ó
CHR(212)==Ô
CHR(213)==Õ
CHR(214)==Ö
CHR(215)==×
CHR(216)==Ø
CHR(217)==Ù
CHR(218)==Ú
CHR(219)==Û
CHR(220)==Ü
CHR(221)==Ý
CHR(222)==Þ
CHR(223)==ß
CHR(224)==à
CHR(225)==á
CHR(226)==â
CHR(227)==ã
CHR(228)==ä
CHR(229)==å
CHR(230)==æ
CHR(231)==ç
CHR(232)==è
CHR(233)==é
CHR(234)==ê
CHR(235)==ë
CHR(236)==ì
CHR(237)==í
CHR(238)==î
CHR(239)==ï
CHR(240)==ð
CHR(241)==ñ
CHR(242)==ò
CHR(243)==ó
CHR(244)==ô
CHR(245)==õ
CHR(246)==ö
CHR(247)==÷
CHR(248)==ø
CHR(249)==ù
CHR(250)==ú
CHR(251)==û
CHR(252)==ü
CHR(253)==ý
CHR(254)==þ
CHR(255)==ÿ

Categories: DBA Blogs

Three essential practices for security compliance

Pythian Group - Wed, 2016-03-23 14:40

No IT or business person needs to be told twice that a major security breach can have a devastating impact on a business. Yet enterprises routinely find themselves non-compliant with security best practices and even their own policies.

Why? First, there’s a lot of complexity to manage. And second, with IT teams constantly putting out fires, background functions like security tend to get shortchanged.

That said, there are a few simple things you can do to strengthen how you protect your data and your business.

1. Stay patched and monitor for unauthorized changes

You really aren’t safe without up-to-date security patches for your vulnerable systems, which means most of them. Any software that faces outward or touches the Internet is definitely at risk. But internal personnel can pose threats as well, meaning even “inside” systems can be vulnerable.

The problem with patching is scale. If you’re a bank with 300 branches across the country, all with their own IT systems, you don’t have the time or the people to manually patch every system in a centralized, whole-enterprise way. Automation is essential: a mechanism for pushing patches out across all your departments and locations — and verifying successful installation.

Patching is essential, but it’s not enough. If you’re breached, the intruders will try to downgrade or otherwise weaken your defenses. So you need an automated auditing platform that: a) looks for unauthorized changes that could weaken your software systems; and b) reverts compromised systems back to the authorized version of software.

2: Only allow access that’s strictly necessary

Mindset is a big part of security. When it comes to controlling access to system resources, data and applications, your default should be that no one has access to anything. “Permissions” then become very deliberate enablement of specific apps and services to specific users based on specific needs. People should only ever have access to the data and systems they need to do their immediate jobs.

Access rights should be linked to your provisioning systems so that when a person changes jobs or leaves your company, their old rights are immediately removed.

In general, security should match risk to systems, with levels of increasing verification when an employee’s behavior is unusual. For example, if someone has never logged in from a particular location but appears to be doing so now, serve them up an additional verifying question. If they’ve never logged into a system before, get them to verify their location and identity.

Users should also be prompted to confirm or deny unusual behaviour. Did you just log in from a new computer? Did you just change your password? These kinds of security health checks are being integrated into applications, periodically forcing users to review their settings and ensure their identity and security information is up to date.

What happens when you don’t have stringent rules like these? Weakly enforced access rules were at the root of a recent, headline-grabbing security breach at a major U.S. retailer. The intruders had access to one hacked device, but by exploiting weak permissions were able to access many other devices — and make off with 40 million credit card numbers.

3: Assume you’ve been hacked.

It’s easy to have a defensive mindset about security: “We’ll stop the bad guys from getting in.” But the reality is they may already be in. The strongest security position comes from assuming you’ve already been hacked. Keep a vigilant watch for evidence of it.

This starts by imposing tight controls on systems that are key to your business operations. Audit all planned changes daily, recording these with approvals in a change log accessible only through off-site logging not connected to systems. If an intruder makes changes, the change log will be your first line of defense — it will be impossible for the hacker to cover their tracks because you will have a forensic change record in a protected location.

Security needs to be a priority in every area of your business. Business units should test the security of their operational practices as part of quarterly business continuity planning. You should regularly test your company-wide systems internally to identify vulnerabilities, and consider hiring professional security experts to attack or socially engineer access to your systems. When it comes to enterprise security, offence really is the best defense.

Categories: DBA Blogs

A Tale of Three Cities: Perspectives on innovation from New York, San Francisco and Sydney

Pythian Group - Tue, 2016-03-22 12:29

Recently, Pythian hosted a number of Velocity of Innovation (Velocity) events. I moderated two of these: one last June in New York, and one in November in San Francisco. Another event in Sydney, Australia was moderated by Tom McCann, senior customer experience analyst with Forrester.

Our Velocity events have given us unique insights into what IT professionals in various regions see as their top priorities or concerns. And although we always framed our discussions with similar questions, it was interesting to see the different directions they took in each location — especially when it came to the topic of innovation.

So what makes a particular region fertile ground for innovation? And can you measure it?

The Global Innovation Index (GII) ranks countries based on a multitude of indicators of innovation. The United States ranks number 2 on the GII, behind Switzerland, while Australia is number 17, out of 141 countries. According to the GII website, the index aims to capture the multi-dimensional facets of innovation and provide the tools to assist in tailoring policies to promote long-term output growth, improved productivity and job growth.

The ideas discussed in the US and Australian locations seemed to align with the GII results, with US panelists expressing more positive attitudes and concrete ideas on how companies can improve agility and become more innovative. And while being at the forefront of technology in the Asia-Pacific region, the Australian panelists and audience members described more cautious approaches to achieving innovation.

Sydney: Cautiously moving forward

Early in the Sydney panel discussion, Chris Mendez, executive consultant big data and analytics from Industrie IT, sparked a lively discussion about innovation by asserting that innovation is lacking in that region.

“I actually don’t think there’s enough innovation in Australia, in particular. There’s a lot of talk about it, people are doing a lot of experiments, and there are some companies who’ve set up business purely based on tool sets that use data to innovate. But there are a few things that seem to be working against innovation, and I think one of those things is that it doesn’t stand on its own,” Mendez said.

According to Francisco Alvarez, vice president, APAC at Pythian, the risks associated with innovation might be holding companies back in Australia. “The main problem for most companies is that innovation equals risk,” Alvarez said.

Alvarez also commented on what it takes to make innovation work. “If you take a step back and look at the companies that are doing well in the market, you can see that there is one factor that differentiates them: they were not afraid to try to innovate. And because of that innovation they are getting their share of the market and gaining ground. Just look at the financial market. CBA was considered crazy a few years ago for all the investment they were making in technology, social media, apps and so on. They got ahead. And now everybody is trying to do the same,” he said.

Mendez thinks that innovation needs to start from the top. “I think there’s also a very big misunderstanding at board levels about innovation because boards are there to actually stop you changing your business. The fundamental tenet is: ‘We’ve got a great business model here, it’s running well, we’ve got to make sure that any change to it doesn’t damage that.’ There’s a natural caution at board levels and it’s totally understandable,” he said.

While cautious, the Sydney panelists expressed that they thought there is hope for more innovation in the future. They expressed a need to proceed slowly, watching what works for innovation leaders.

“The key is to have a balance,” Alvarez said.

New York: Early adopters

If you were to put our New York panelists on Geoffrey Moore’s Technology Adoption Lifecycle (https://en.wikipedia.org/wiki/Geoffrey_Moore), you might classify them as early adopters, rather than true innovators. Not surprising, since New York’s competitive industries such as banking and publishing rely on innovative technologies, but they don’t create them.

According to New York panelist, Forrester Analyst Gene Leganza, what makes an enterprise agile is the ability to sense what’s going on in the marketplace and to quickly respond to it. But, he said that innovation comes at a cost. “The flip side of agility is innovation. An interesting aspect of innovation is getting really hot talent into your environment. Getting the right talent and doing smart things and being leading edge are challenges. You have to figure out what level to drop in on, where you are in the industry. You need to determine if you are a startup or a state organization that needs to be a fast follower,” Leganza said.

Otto Toth, CTO at Huffington Post warned that innovating quickly is not always in the best interest of the business, or it may not be the way to do it properly. He asserted that quick innovation can actually work against the business, and that instead of making your business faster, being very agile can slow everything down.

“Too many decision-makers just slow down the process. It’s better to have a few people or a core team who make the decisions and come up with new features,” he added.

Leganza went on to describe what it takes at various levels of the organization. He said that there’s a notion at the engineer level that agility means bureaucracy won’t get in their way. Then there’s agility at the enterprise level, which is about reducing risk and understanding how soon change can be in production.

“The higher up you go, the more people are going to be receptive to what improves the whole portfolio rather than one project. This is where architects come in. They have been hands-on, but have the credibility and knowledge to guide the organization more strategically,” Leganza said.

San Francisco: The innovators

In San Francisco the narratives on innovation were quite different. Although cities don’t have their own GII ranking, you might assume that the West Coast IT leaders are the innovators. And judging by the discussion at the San Francisco event, this assumption seemed to be true.

Cory Isaacson, CTO at RMS was one of our San Francisco panelists. His company runs catastrophe models for some of the world’s largest insurance companies, like scenarios that will tell what a disaster like an earthquake or hurricane might cost them. Isaacson has been working on bringing big data and scalable systems together to create a new cloud-based platform.

“At my company some of the things that we’re trying to do are, honestly, more advanced than most other things I’ve ever seen in my career. But when you’re doing innovation, it is risky. There’s no way around it. There is a lot to evaluate: from different algorithms to the risk models and the catastrophe models,” said Isaacson.

Sean Rich, director of IT at Mozilla added to the San Francisco discussion by talking about some of the concrete innovations his company is working on. They’re taking a partnership approach to enable agility.

“Innovation is doing something new. In an effort toward achieving agility, one of the things that we’re doing is enabling the agility of our business partners, by changing our own operating model. Instead of traditional IT where we run all the services and infrastructure necessary to drive the business, we’re taking more of an enabler or partnership approach,” Rich said.

“We’re now doing things like encouraging shadow IT, encouraging the use of SaaS applications and helping them really do that better through different service offerings like vendor management or change management of user adoption for certain platforms and data integration” he added.

“Overall, we’re looking at ourselves differently, and asking what new capabilities we need to develop, and what processes, tools and skills we need to enable agility for our marketing group or our product lines, as an example,” Rich said.

Aaron Lee, the Chief Data Officer at Pythian, runs a team that specializes in helping clients harness technology to deliver real outcomes. Usually they involve things like big data, DevOps, cloud, advanced analytics — he’s involved in some of the most leading edge initiatives for Pythian customers. He takes a practical approach to innovation with clients, and said that companies could improve innovation by looking at the root of the motivation for it.

“They need to ask: Why are we going down this path, trying to innovate something and what is the value of that thing we’re trying to innovate?

“If the shared goals around innovation opportunities aren’t defined in a way that actually lead to success over time, then the business is just like any other organism: it starts to get more risk averse. Then it becomes harder and harder to execute any kind of change agenda. Planning in a way that is likely to have a good long-term outcome, even at the outset of any sort of initiative, is one key success criteria that we put in place to help ourselves and our customers get to a good place,” Lee said.

Isaacson added that companies like Google have been known to allow an engineer to take a day a week or a day every two weeks to just look at things. “I think though, the challenge is you have to get your organization up to the point where this is an economically viable thing to do. Once we get more ahead of the curve, I think we could do that kind of thing,” he said.

Interested in being a part of a discussion like these? VELOCITY OF INNOVATION is a series of thought-leadership events for senior IT management hosted by Pythian. Pythian invites leading IT innovators to participate in discussions about today’s disruptive technologies: big data, cloud, advanced analytics, DevOps, and more. These events are by invitation only.

If you are interested in attending an upcoming Velocity of Innovation event in a city near you, please contact events@pythian.com. To view our schedule of upcoming events visit our Velocity of Innovation page.

Categories: DBA Blogs
