
DBA Blogs

How to Deal with MetaData Lock

Pythian Group - Thu, 2016-05-05 07:59
What is MetaData Lock?

MySQL uses metadata locking to manage concurrent access to database objects, and to ensure data consistency when performing modifications to the schema: DDL operations. Metadata locking applies not just to tables, but also to schemas and stored programs (procedures, functions, triggers, and scheduled events).

In this post I am going to cover the metadata locks on tables and triggers that DBAs usually encounter during regular operations and maintenance.

Kindly refer to these four different connections to the MySQL instance:

Screenshot: four MySQL sessions, with an uncommitted transaction blocking an ALTER and the queries behind it

 

The screenshot shows that an uncommitted transaction can cause a metadata lock that blocks ALTER operations. The ALTER will not proceed until the transaction is committed or rolled back. What is worse, after the ALTER is issued, any queries against that table (even simple SELECT queries) will be blocked. If the ALTER is an online DDL operation (available in 5.6+), queries will proceed as soon as the ALTER itself begins.
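Since the screenshot may be hard to read, here is a minimal sketch of the same scenario (the table name t1 and its columns are placeholders, not from the original post):

-- Session 1: open a transaction and leave it uncommitted
START TRANSACTION;
SELECT * FROM t1 WHERE id = 1;            -- takes a shared metadata lock on t1, held until COMMIT/ROLLBACK

-- Session 2: the DDL now waits on the metadata lock held by session 1
ALTER TABLE t1 ADD COLUMN extra_col INT;  -- shows "Waiting for table metadata lock"

-- Session 3: even a simple SELECT queues up behind the pending ALTER
SELECT COUNT(*) FROM t1;                  -- also "Waiting for table metadata lock"

-- Session 1: only COMMIT (or ROLLBACK) releases the lock and lets the others proceed
COMMIT;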

Refer to this video tutorial on MySQL Metadata Locks for further context.

These days DBAs have a favourite tool, pt-online-schema-change (osc). Let’s have a look at what happens if we run osc instead of the ALTER.

Screenshot: pt-online-schema-change waiting on the metadata lock while creating triggers on the table

OSC gets stuck on the metadata lock at the point of creating triggers on the table.

Let’s move on to the second topic: how can we mitigate MDL issues?

Mitigating the MetaData Lock Issues

There are various solutions to tackling MDL:

  1. Appropriately set the wait_timeout variable so that the server closes idle/sleeping connections after a certain time (see the sketch after this list).
  2. Configure pt-kill to get rid of stuck/sleep threads.
  3. Fix application code where transactions are not committed after performing DB queries.
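As a rough sketch of option 1 (the 600-second value below is only an illustration; choose a value that suits your application):

-- Close connections that have been idle (Sleep) longer than 10 minutes.
-- Affects new connections only; persist the setting in my.cnf to survive a restart.
SET GLOBAL wait_timeout = 600;
SET GLOBAL interactive_timeout = 600;
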
How to kill Sleep Connections in RDS which are causing MDL

If you are on RDS and your MySQL instance has a bunch of Sleep threads, and you don’t know which connection is causing the metadata lock, then you have to kill all the Sleep queries that have been in MySQL for more than a certain time. As we know, “kill thread_id” is not permitted in RDS, but you can use the query below to generate the exact statements to kill the Sleep threads.

Example Output:

mysql> SELECT CONCAT('CALL mysql.rds_kill ( ',id,')',';') FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND='Sleep' AND TIME > 10 ;
+---------------------------------------------+
| CONCAT('CALL mysql.rds_kill ( ',id,')',';') |
+---------------------------------------------+
| CALL mysql.rds_kill ( 5740758); |
| CALL mysql.rds_kill ( 5740802); |
| CALL mysql.rds_kill ( 5740745); |
| CALL mysql.rds_kill ( 5740612); |
| CALL mysql.rds_kill ( 5740824); |
| CALL mysql.rds_kill ( 5740636); |
| CALL mysql.rds_kill ( 5740793); |
| CALL mysql.rds_kill ( 5740825); |
| CALL mysql.rds_kill ( 5740796); |
| CALL mysql.rds_kill ( 5740794); |
| CALL mysql.rds_kill ( 5740759); |
| CALL mysql.rds_kill ( 5740678); |
| CALL mysql.rds_kill ( 5740688); |
| CALL mysql.rds_kill ( 5740817); |
| CALL mysql.rds_kill ( 5740735); |
| CALL mysql.rds_kill ( 5740818); |
| CALL mysql.rds_kill ( 5740831); |
| CALL mysql.rds_kill ( 5740795); |
| CALL mysql.rds_kill ( 4926163); |
| CALL mysql.rds_kill ( 5740742); |
| CALL mysql.rds_kill ( 5740797); |
| CALL mysql.rds_kill ( 5740832); |
| CALL mysql.rds_kill ( 5740751); |
| CALL mysql.rds_kill ( 5740760); |
| CALL mysql.rds_kill ( 5740752); |
| CALL mysql.rds_kill ( 5740833); |
| CALL mysql.rds_kill ( 5740753); |
| CALL mysql.rds_kill ( 5740722); |
| CALL mysql.rds_kill ( 5740723); |
| CALL mysql.rds_kill ( 5740724); |
| CALL mysql.rds_kill ( 5740772); |
| CALL mysql.rds_kill ( 5740743); |
| CALL mysql.rds_kill ( 5740744); |
| CALL mysql.rds_kill ( 5740823); |
| CALL mysql.rds_kill ( 5740761); |
| CALL mysql.rds_kill ( 5740828); |
| CALL mysql.rds_kill ( 5740762); |
| CALL mysql.rds_kill ( 5740763); |
| CALL mysql.rds_kill ( 5740764); |
| CALL mysql.rds_kill ( 5740773); |
| CALL mysql.rds_kill ( 5740769); |
| CALL mysql.rds_kill ( 5740770); |
| CALL mysql.rds_kill ( 5740771); |
| CALL mysql.rds_kill ( 5740774); |
| CALL mysql.rds_kill ( 5740784); |
| CALL mysql.rds_kill ( 5740789); |
| CALL mysql.rds_kill ( 5740790); |
| CALL mysql.rds_kill ( 5740791); |
| CALL mysql.rds_kill ( 5740799); |
| CALL mysql.rds_kill ( 5740800); |
| CALL mysql.rds_kill ( 5740801); |
| CALL mysql.rds_kill ( 5740587); |
| CALL mysql.rds_kill ( 5740660); |
+---------------------------------------------+
53 rows in set (0.02 sec)
  1. Capture the SQL statements to kill the Sleep threads:

mysql -htest-server.us-west-2.rds.amazonaws.com. --skip-column-names -e 'SELECT CONCAT("CALL mysql.rds_kill ( ",id,")",";") FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND="Sleep" AND TIME > 10' > kill_sleep_threads.sql

2. Execute the queries from the mysql prompt

mysql -htest-server.us-west-2.rds.amazonaws.com.

mysql> source kill_sleep_threads.sql
Improvements in MySQL 5.7 related to MDL

Generally, we would want to kill as few connections as possible. But the trouble with metadata locks prior to 5.7 is that there is no insight available into which threads are taking the metadata lock. In MySQL 5.7, there are several improvements in getting insight into metadata lock information.

“The Performance Schema now exposes metadata lock information:

  • Locks that have been granted (shows which sessions own which current metadata locks)
  • Locks that have been requested but not yet granted (shows which sessions are waiting for which metadata locks).
  • Lock requests that have been killed by the deadlock detector or timed out and are waiting for the requesting session’s lock request to be discarded

This information enables you to understand metadata lock dependencies between sessions. You can see not only which lock a session is waiting for, but which session currently holds that lock.

The Performance Schema now also exposes table lock information that shows which table handles the server has open, how they are locked, and by which sessions.”

To check who holds the metadata lock in MySQL 5.7, we have to enable the global_instrumentation consumer and the wait/lock/metadata/sql/mdl instrument.

Below is an example of enabling both:

mysql> UPDATE performance_schema.setup_consumers SET ENABLED = 'YES' WHERE NAME = 'global_instrumentation';

Query OK, 0 rows affected (0.00 sec)

Rows matched: 1  Changed: 0  Warnings: 0

mysql> UPDATE performance_schema.setup_instruments SET ENABLED = 'YES' WHERE NAME = 'wait/lock/metadata/sql/mdl';

Query OK, 1 row affected (0.00 sec)

Rows matched: 1  Changed: 1  Warnings: 0

Once global_instrumentation and wait/lock/metadata/sql/mdl are enabled, the query below will show the lock status of the connections.

 

mysql> SELECT OBJECT_TYPE, OBJECT_SCHEMA, OBJECT_NAME, LOCK_TYPE, LOCK_STATUS, THREAD_ID, PROCESSLIST_ID, PROCESSLIST_INFO FROM performance_schema.metadata_locks INNER JOIN performance_schema.threads ON THREAD_ID = OWNER_THREAD_ID WHERE PROCESSLIST_ID <> CONNECTION_ID();
+-------------+---------------+-------------+---------------------+-------------+-----------+----------------+------------------------------------------+
| OBJECT_TYPE | OBJECT_SCHEMA | OBJECT_NAME | LOCK_TYPE | LOCK_STATUS | THREAD_ID | PROCESSLIST_ID | PROCESSLIST_INFO |
+-------------+---------------+-------------+---------------------+-------------+-----------+----------------+------------------------------------------+
| TABLE | sbtest | sbtest1 | SHARED_READ | GRANTED | 29 | 4 | NULL |
| GLOBAL | NULL | NULL | INTENTION_EXCLUSIVE | GRANTED | 30 | 5 | alter table sbtest1 add key idx_pad(pad) |
| SCHEMA | sbtest | NULL | INTENTION_EXCLUSIVE | GRANTED | 30 | 5 | alter table sbtest1 add key idx_pad(pad) |
| TABLE | sbtest | sbtest1 | SHARED_UPGRADABLE | GRANTED | 30 | 5 | alter table sbtest1 add key idx_pad(pad) |
| TABLE | sbtest | sbtest1 | EXCLUSIVE | PENDING | 30 | 5 | alter table sbtest1 add key idx_pad(pad) |
| TABLE | sbtest | sbtest1 | SHARED_READ | PENDING | 31 | 6 | select count(*) from sbtest1 |
+-------------+---------------+-------------+---------------------+-------------+-----------+----------------+------------------------------------------+
6 rows in set (0.00 sec)

Here the SHARED_READ lock held by PROCESSLIST_ID 4 is GRANTED, while the EXCLUSIVE lock requested by the ALTER (PROCESSLIST_ID 5) and the SHARED_READ lock requested by the SELECT (PROCESSLIST_ID 6) are still PENDING.

Conclusion

Best practice when running any DDL operation, even with the Performance Schema improvements in 5.7, is to check the processlist for the presence of MDL waits and check SHOW ENGINE INNODB STATUS for long-running active transactions. Kill the DDL operation while resolving the MDL issue so as to prevent a query pileup. For a temporary fix, implement pt-kill or wait_timeout. To solve the metadata lock issue for good, review and fix application code/scripts that leave transactions uncommitted.
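For example, the following queries (a sketch, not from the original post; the state text is what MySQL 5.6+ reports) can be used to spot both the waiters and the likely blockers before deciding what to kill:

-- Sessions currently blocked on a metadata lock
SELECT id, user, host, db, time, state, info
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE state = 'Waiting for table metadata lock';

-- Long-running or idle transactions that may be holding the lock
SELECT trx_mysql_thread_id, trx_started, trx_state, trx_query
FROM INFORMATION_SCHEMA.INNODB_TRX
ORDER BY trx_started;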

Categories: DBA Blogs

Partner Webcast - Oracle Database Cloud Service: Cloud Migration Options

Oracle’s Public Cloud leverages existing technologies and best practices to deliver Oracle Database Technologies utilizing Oracle’s secure and optimized cloud-computing platform. At cloud.oracle.com...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Transparent Data Encryption for SQL Server in an Availability Group

Pythian Group - Tue, 2016-05-03 12:24

With all the new Always On features in SQL Server 2016, which you can read up on here, it’s easy to forget about Transparent Data Encryption (TDE). This blog post will focus on TDE.

TDE encrypts database files at rest. This means your .MDF and .NDF files, and consequently your backups, will be encrypted, so you will not be able to detach the database files and restore them on another server unless that server has the same certificate that was used to encrypt the database.

In this blog post I am using SQL Server 2014 and will explain how to enable TDE on an existing AG Group database.

  1. The first thing we need to check is whether the server has a master encryption key on all replicas in the AG Group:
USE MASTER
GO
SELECT * FROM sys.symmetric_keys
WHERE name = '##MS_DatabaseMasterKey##'

The screenshot below shows I don’t have a key, so I need to create one.

No Master Encryption Key

  2. Create a Database Master Encryption Key on each of the replicas in the AG Group. It is important to use a complex password.

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'C&mpl£xP@$$Wrd'
GO

  3. Run the code in step 1 again and this time you should see the below:

Master Encryption Key

  4. Now we need to create a certificate to use for the encryption of the database on the primary replica. This can be accomplished by using the below:

CREATE CERTIFICATE BackupEncryptionCert
WITH SUBJECT = 'SQL Server 2014 AdventureWorks2012 Encryption Certificate';
GO

  5. Validate the Certificate:

SELECT name, pvt_key_encryption_type_desc, thumbprint FROM sys.certificates

Validate Encryption Key

The thumbprint will be useful because when a database is encrypted, it will indicate the thumbprint of the certificate used to encrypt the Database Encryption Key. A single certificate can be used to encrypt more than one Database Encryption Key, but there can also be many certificates on a server, so the thumbprint will identify which server certificate is needed.
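For illustration (this query is my addition, not part of the original steps), once the database has been encrypted in the later steps, the thumbprint recorded in sys.dm_database_encryption_keys can be mapped back to the certificate like this:

-- Run in master, where the server certificate lives.
-- Shows which certificate protects which Database Encryption Key.
SELECT DB_NAME(dek.database_id) AS database_name,
       c.name                   AS certificate_name,
       dek.encryptor_thumbprint
FROM sys.dm_database_encryption_keys AS dek
JOIN sys.certificates AS c
    ON c.thumbprint = dek.encryptor_thumbprint;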

  6. Next we need to back up the certificate on the Primary Replica:

BACKUP CERTIFICATE BackupEncryptionCert
TO FILE = 'C:\BackupCertificates\BackupEncryptionCert.bak'
WITH PRIVATE KEY ( FILE = 'C:\BackupCertificates\BackupEncryptionCertKey.bak' ,
ENCRYPTION BY PASSWORD = 'Certi%yC&mpl£xP@$$Wrd')

Encryption Files

The BACKUP CERTIFICATE command will create two files.  The first file is the server certificate itself.  The second file is a “private key” file, protected by a password. Both files and the password will be used to restore the certificate onto other instances.

  7. The files created in step 6 need to be copied to each of the other replicas and created in SQL Server. After the files are copied, the command below can be used to create the certificates:

CREATE CERTIFICATE BackupEncryptionCert
FROM FILE = 'C:\BackupCertificates\BackupEncryptionCert.bak'
WITH PRIVATE KEY (FILE = 'C:\BackupCertificates\BackupEncryptionCertKey.bak',
DECRYPTION BY PASSWORD = 'Certi%yC&mpl£xP@$$Wrd');

  8. That’s all the configuration needed for each instance; now we are ready to start encrypting the database. We need to tell SQL Server which encryption type we want to use and which certificate to use. This can be done using the following code on the Primary Replica:

Use Adventureworks2012
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE BackupEncryptionCert

  9. Finally, the last step is to enable TDE by executing the command below on the Primary Replica:

ALTER DATABASE AdventureWorks2012 SET ENCRYPTION ON
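
To verify that encryption has taken effect, and to watch the background encryption scan progress, a query along these lines can be run (my addition for illustration; an encryption_state of 3 means the database is encrypted):

-- encryption_state: 2 = encryption in progress, 3 = encrypted
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,
       percent_complete
FROM sys.dm_database_encryption_keys;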

 

And that’s it, I hope you enjoyed this tutorial and found it informative. If you have any questions, please comment below.

Categories: DBA Blogs

Compression -- 6 : Advanced Index Compression

Hemant K Chitale - Tue, 2016-05-03 08:23
Earlier, I had covered Index (Key) Compression which is included in the Enterprise Edition.

In Key Compression, the DBA must specify the Prefix (i.e. number of leading columns in a composite index) that must be used as the compression key.

12.1.0.2 Advanced Index Compression does not require the DBA to manually identify the prefix key length.  Advanced Index Compression auto(magically) identifies the optimal prefix columns.
(Note : Advanced Index Compression requires the Advanced Compression licence Option and is available only in 12.1.0.2 and higher)

SQL> create table target_data as select * from source_data where 1=2;

Table created.

SQL> insert /*+ APPEND */ into target_data select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_data select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ APPEND */ into target_data select * from source_data;

367156 rows created.

SQL> commit;

Commit complete.

SQL>


Creating a Key Compression Index by specifying the Prefix size (the leading 2 columns) :

SQL> create index target_data_ndx_1_comp on
2 target_data (owner, object_type, object_name) compress 2;

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_1_COMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1_COMP'
4 /

LEAF_BLOCKS
-----------
5508

SQL>


Note how I specified "2" as the Prefix size as I want to compress on repeated values of (OWNER, OBJECT_TYPE), the two leading columns.

Using Advanced Index Compression by specifying "COMPRESS ADVANCED LOW" and letting Oracle decide on the compression strategy in each leaf block :

SQL> drop index target_data_ndx_1_comp;

Index dropped.

SQL> create index target_data_ndx_2_advcomp on
2 target_data (owner, object_type, object_name)
3 compress advanced low;

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_2_ADVCOMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_2_ADVCOMP'
4 /

LEAF_BLOCKS
-----------
2277

SQL>


Wow, that's significantly smaller.  What's more, I did not have to spend time analyzing the data and the index definition to identify the "correct" Prefix size.

However, it is now possible to specify the entire composite key as the Prefix, although that is not what I would have done in earlier versions.  Identifying the Prefix size requires analyzing the data.

SQL> create index target_data_ndx_1_comp on
2 target_data (owner, object_type, object_name) compress 3
3 /

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_1_COMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_1_COMP'
4 /

LEAF_BLOCKS
-----------
2277

SQL>


So, the Advanced option allows me to let Oracle automatically decide the appropriate mechanism to compress the Index keys.

UPDATE : Also see the subsequent test with a reversal of the columns in the composite index.

Just for comparison, here is a regular index :

SQL> drop index target_data_ndx_2_advcomp;

Index dropped.

SQL> create index target_data_ndx_3_nocomp on
2 target_data (owner, object_type, object_name)
3 /

Index created.

SQL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_3_NOCOMP');

PL/SQL procedure successfully completed.

SQL> select leaf_blocks
2 from user_indexes
3 where index_name = 'TARGET_DATA_NDX_3_NOCOMP'
4 /

LEAF_BLOCKS
-----------
7289

SQL>


That is a much larger regular index!
.
.
.

Categories: DBA Blogs

Links for 2016-05-02 [del.icio.us]

Categories: DBA Blogs

SQL vs. Excel – Subgroup medians

RDBMS Insight - Mon, 2016-05-02 18:34

Recently I ran across this post on how to do subgroup medians in Excel 2010. First you need to create a pivot table, then “do some copying and pasting and use a formula to make it happen”. In SQL you can do this with one command.

Suppose that you have the same table as the Excel article, something like this:

CREATE TABLE sampletab
(arrest_day_of_week varchar2(10), 
arrest_ts TIMESTAMP, 
fingerprint_ts TIMESTAMP, 
days_between NUMBER);

and you want to get the median value of days_between for each day of the week.

The steps in Excel apparently go like this:

  1. Create pivot table to get the means
  2. Copy and paste the column values from the pivot table
  3. For Sunday, create an IF formula to include a cell’s days_between in the median calculation only if the arrest_day_of_week for that row is Sunday
  4. Repeat for other six days

Ouch!

In SQL, it’s one simple statement:

SELECT arrest_day_of_week, median(days_between) AS median_days_between FROM sampletab
GROUP BY arrest_day_of_week;

Conclusion – if you’re into data analysis, SQL can be a big time-saver!

Categories: DBA Blogs

Reserved words usage in MySQL

Pythian Group - Mon, 2016-05-02 14:07

It is not uncommon to come across MySQL databases where reserved words are in use as identifiers for any kind of database objects.

Perhaps when the application schema was implemented, the words were not reserved yet, and they became reserved later on a subsequent MySQL release.

It is a good practice to check reserved words usage prior to doing any database upgrades, as any newly reserved keywords will cause syntax errors on the new version.

This is usually not a problem if proper quoting is used for referencing the objects, as described on the official manual page.

The actual steps to do this depend on the environment; for example, the following can be configured to tell Hibernate to escape identifiers:

<property name="hibernate.globally_quoted_identifiers" value="true"/>

This does not appear to be documented properly (there is an open bug unresolved at the time of this writing).

However, we cannot make the assumption that all application code is properly escaped to deal with this kind of issue.

So what are the symptoms?

Error 1064 will be reported while trying to use a reserved word:

mysql> CREATE TABLE interval (begin INT, end INT);
ERROR 1064 (42000): You have an error in your SQL syntax ...
near 'interval (begin INT, end INT)'
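With the quoting mentioned earlier, the same statement is accepted (a quick illustration; output abbreviated):

mysql> CREATE TABLE `interval` (`begin` INT, `end` INT);
Query OK, 0 rows affected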
How can we check for reserved words?

The following procedure can help you find out if any particular MySQL version’s reserved words are in use:

  1. Using the list on the corresponding manual page, create a text file with one reserved word on each line
  2. Load data into a temporary table
     USE test;
    CREATE TABLE reserved_words (word VARCHAR(50));
    LOAD DATA INFILE 'reserved_words.txt' INTO TABLE test.reserved_words;
    
  3. Check for any column names using reserved keywords
    SELECT table_schema, table_name, column_name, ordinal_position 
    FROM information_schema.columns
    WHERE table_schema NOT IN ( 'mysql', 'information_schema', 'performance_schema' ) 
    AND column_name = ANY ( SELECT * FROM test.reserved_words ) 
    ORDER BY 1,2,4;
  4. Check for any table names using reserved keywords
    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE table_schema NOT IN ( 'mysql', 'information_schema', 'performance_schema' ) 
    AND table_name = ANY ( SELECT * FROM test.reserved_words );
  5. Check for any procedures or functions
    SELECT routine_schema, routine_name, routine_type
    FROM information_schema.routines
    WHERE routine_schema NOT IN ( 'mysql', 'information_schema', 'performance_schema' )
    AND routine_name = ANY ( SELECT * FROM test.reserved_words );

I hope this post helps you avoid one of the many issues you may encounter during the challenging task of database upgrades.

Categories: DBA Blogs

#GoldenGate #Cloud Service (#GGCS) … what to expect?

DBASolved - Mon, 2016-05-02 06:30

As I sit here working on performing some GoldenGate migrations to AWS for a client, I’ve been thinking about the glimpse of GoldenGate Cloud Service (GGCS) that was provided to me earlier this week. That glimpse has helped me define what and how GGCS is going to work within the Oracle Cloud space. Ever since this service was announced back at Oracle Open World 2015, I’ve been wanting to get my hands on this cloud product from Oracle to just better understand it. Hopefully, what I’m about to share with you will provide some insight into what to expect.

First, you will need a cloud account. If you do not have a cloud account; visit http://cloud.oracle.com and sign up for an account. This will typically be the same account you use to login to My Oracle Support (MOS).

Once you have an account and are in the cloud interface, subscribe to some services. You will need a Database Cloud Service or a Compute Cloud Service. These services will be the end points for GGCS to point to. As part of setting up the compute node, you will need to set up SSH access with a public/private key. Once you create the GGCS instance, the same public/private key should be used to keep everything simple.

Once GGCS is made available for trial, currently it is only available through the sales team, many of us will have the opportunity to play with this. The following screen captures and comments were taken from the interface I had access to while discussing GGCS with Oracle Product Management.

Like any of the other cloud services from Oracle, once you have access to GGCS it will appear in your dashboard as available cloud services. In the figure below, GGCS is listed at the top of the services that I had access to. You will notice over on the right, there is a link called “service console”.

When you click on the service console link, you are taken to the console that is specific to GGCS. On the left hand side of the console, you will see three primary areas. The “overview” area is the important one; it provides you with all the information needed about your GGCS environment. You will see the host and port number, what version of GGCS you are running and the status of your environment.

With the environment up and running, you will want to create a new GGCS instance. This instance is created under your cloud service console. On this screen you are given information that tells you how many instances you have running with the number of OCPUs, Memory and storage for the configuration along with the public IP address. Notice the button to the right, just below Public IPs, this is the button that allows you to create a new GGCS instance. In the figure below, the instance has already been created.

Drilling down into the instance, you are taken to a page that illustrates your application nodes for GGCS. Notice that the GGCS instance actually created a compute node VM to run GoldenGate from.

With everything configured from the Oracle Cloud interface, you can now access the cloud server using the details provided (do not have current screen shots of this). Once you access the cloud server, you will find that Oracle GoldenGate has been configured for you along with a TNS entry that points to a “target” location. These items are standard template items for you to build your GoldenGate environment from. The interesting thing about this configuration is that Oracle is providing a single virtual machine (compute node) that will handle all the apply process to a database (compute node).

With the GGCS service running, you are then ready to build out your GoldenGate environment.

Like many other GoldenGate architectures, you build out the source side of the architecture like anything else. You install the GoldenGate software, build an extract, trail files and a data pump. The data pump process is then pointed to the GoldenGate Cloud Service (GGCS) instance instead of the target instance. The local trail files will be shipped to the GGCS machine. Once on the GGCS instance, the replicat would need to be configured. Part of the configuration of the replicat at this point is updating the TNSNames.ora file to point to the correct “target” compute node/database instance. The below picture illustrates this concept.

You will notice that GGCS is set up to be an intermediary point in the cloud. This allows you to be flexible with your GoldenGate architecture in the cloud. From a single GGCS service you can run multiple replicats that point to multiple different cloud compute nodes, turning your GGCS into a hub that can send data to multiple cloud resources.

In talking with the Oracle Product team about GGCS, the only downside to GGCS right now is that it cannot be used for bi-directional setup or pulling data from the cloud. In essence, this is a uni-direction setup that can help you move from on-premise to cloud with minimal configuration setup needed.

Well, this is my take on GGCS as of right now. Once GGCS trials are available, I’ll try to update this post or add more posts on this topic. Until then, I hope you have gained a bit of information on this topic and are looking forward to using GGCS.

Enjoy!!

about.me: http://about.me/dbasolved


Filed under: Cloud, Golden Gate
Categories: DBA Blogs

FBDA -- 7 : Maintaining Partitioned Source Table

Hemant K Chitale - Mon, 2016-05-02 01:46
Taking up the TEST_FBDA_PARTITIONED table,  let's look at a couple of Partition Maintenance operations.

SQL> select partition_name, high_value, num_rows
2 from user_tab_partitions
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 order by partition_position
5 /

PARTITION_NAME HIGH_VALUE NUM_ROWS
---------------- ------------------------- ----------
P_100 101 100
P_200 201 100
P_300 301 100
P_400 401 100
P_MAX MAXVALUE 301

SQL>


Let's try a TRUNCATE PARTITION

SQL> alter table test_fbda_partitioned truncate partition p_100;

Table truncated.

SQL>


So, that's supported.

Let's try a SPLIT PARTITION

SQL> alter table test_fbda_partitioned       
2 split partition p_max at (501)
3 into (partition p_500, partition p_max)
4 /
alter table test_fbda_partitioned
*
ERROR at line 1:
ORA-55610: Invalid DDL statement on history-tracked table


SQL>


So, a SPLIT PARTITION fails.  We need to DISASSOCIATE the Flashback Archive.

SQL> execute dbms_flashback_archive.disassociate_fba('HEMANT','TEST_FBDA_PARTITIONED');

PL/SQL procedure successfully completed.

SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA_PARTITIONED
FBDA
SYS_FBA_HIST_93342 DISASSOCIATED


SQL>
SQL> alter table test_fbda_partitioned
2 split partition p_max at (501)
3 into (partition p_500, partition p_max)
4 /

Table altered.

SQL> execute dbms_flashback_archive.reassociate_fba('HEMANT','TEST_FBDA_PARTITIONED');

PL/SQL procedure successfully completed.

SQL>
SQL> select table_name, flashback_archive_name, archive_table_name, status
2 from user_flashback_archive_tables
3 where table_name = 'TEST_FBDA_PARTITIONED'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
ARCHIVE_TABLE_NAME STATUS
----------------------------------------------------- -------------
TEST_FBDA_PARTITIONED
FBDA
SYS_FBA_HIST_93342 ENABLED


SQL>


While a Table is disassociated from its Flashback Archive, DDL that would not normally be permitted may be done, under strict control, to ensure that there is no data divergence.

.
.
.
Categories: DBA Blogs

Partition Storage -- 7 : Revisiting HWM - 2 (again)

Hemant K Chitale - Mon, 2016-05-02 01:19
Revisiting the previous test case, but with a larger AVG_ROW_LEN

SQL> create table part_table_large
(id_column number(6), data_col_1 varchar2(100), data_col_2 varchar2(100))
partition by range (id_column)
(partition p_100 values less than (101),
partition p_200 values less than (201),
partition p_300 values less than (301),
partition p_400 values less than (401),
partition p_max values less than (maxvalue))
/
2 3 4 5 6 7 8 9

Table created.

SQL>
SQL> insert into part_table_large values
(51,rpad('String',75,'X'), rpad('Another',60,'Y'))
2 3
SQL> /

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_large
values (25, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12 13
14 /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_large
values (45, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12 13
14 /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_large
values (55, rpad('String',75,'X'), rpad('Another',60,'Y'));
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12 13
14 /

PL/SQL procedure successfully completed.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_LARGE',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_LARGE'
and partition_name = 'P_100'
/
2 3 4 5
AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
140 1100003 22349

SQL>
SQL>
SQL> alter table part_table_large move partition p_100 ;

Table altered.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_LARGE',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_LARGE'
and partition_name = 'P_100'
/
2 3 4 5
AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
140 1100003 22626

SQL>
SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_LARGE'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/
2 3 4 5 6 7 8
EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024
3 1024
4 1024
5 1024
6 1024
7 1024
8 1024
9 1024
10 1024
11 512
12 1024
13 1024
14 1024
15 1024
16 1024
17 1024
18 1024
19 1024
20 1024
21 1024
22 1024

23 rows selected.

SQL>


Aha! Unlike the previous case (where, with an AVG_ROW_LEN of 11, a MOVE reduced the HWM from 3,022 to 2,484 blocks), with the larger row size the HWM has moved up from 22,349 to 22,626 blocks.

So, space consumption is a function of both the AVG_ROW_LEN and the manner in which the rows are inserted / relocated.

SQL> l
1 select avg_row_len*num_rows*1.2/8192 Expected_Blocks, Blocks
2 from user_tab_partitions
3 where table_name = 'PART_TABLE_LARGE'
4* and partition_name = 'P_100'
SQL> /

EXPECTED_BLOCKS BLOCKS
--------------- ----------
22558.6553 22626

SQL>

Also, see how the "Expected Blocks" count seems more accurate than earlier.
.
.
.



Categories: DBA Blogs

Speakers: Put your Twitter Handle on the Windows taskbar!

The Oracle Instructor - Sat, 2016-04-30 03:56

If you speak often at conferences, sharing your screen to demo things, this could be helpful:

Twitter Handle on the Windows taskbar

Throughout your presentation, the audience will be able to see your Twitter Handle, reminding them to include it with tweets about the event. I used to include it in the slides, but this is better, because it also works with live demonstrations where no slides are being shown, which is incidentally my favorite way to do presentations:-)

Now how can you do it? Quite easy, you open the Windows Control Panel and click on Region and Language. Then click on Additional settings:

Region and Language 1

Then you insert your Twitter Handle (or any other text you would like to see on the taskbar) as the AM and PM symbols. Make sure to select a time format with trailing tt:

Region and Language 2

That’s it. If you want the font size as large as on the first picture above, that can be done here:

twitterhandel_taskbar4

I did that with Windows 7 Professional 64 bit. Hope you find it useful:-)


Tagged: speaker tip
Categories: DBA Blogs

Leaving Behind the Limits of Binary Thinking for Full Inclusiveness

Pythian Group - Fri, 2016-04-29 14:39

diversityaward

When Pythian became the first tech company in Canada to release gender-based metrics last November, we wanted to make a bold statement with the launch of the Pythia Program. And apparently it worked. We’ve already increased the number of female applicants by more than 10% over just one quarter. Our internal Pythia Index has also risen 3% from 56% to 59%. And just this week, Pythian’s CEO Paul Vallée received the WCT Diversity Champion award in recognition of his leadership and efforts to promote diversity in the workplace, and a more inclusive tech industry that promotes men and women from all backgrounds.

Despite a clear case for gender parity, and research confirming the financial return for companies, full inclusion is still ‘controversial’ to implement. A lot of this has to do with the unconscious associations we still have with male and female roles which are placed in opposition. This kind of binary thinking is rampant, especially in our social constructions of what constitutes masculinity and femininity.

When the Pythia Program was in its early stages, we actually noticed a lot of binary, either/or thinking was shaping our assumptions. Off/On. 0/1. We can do this OR that. We can empower women technical professionals OR talk to employees about unconscious bias. We can take a stand on gender diversity OR maintain good relationships with male colleagues. Wait a minute…why can’t we do both?

If we had continued to believe our choices were that limited, it would have seriously eroded any impetus to act on our values of gender equity and inclusiveness. It was time to reframe our thinking, and that’s when we stopped compromising. A bolder stance emerged when we did away with limited, binary thinking that was trapping us in false dichotomies.

Let’s look at this from a data perspective, because that’s what we love and do best.

Current computer chips store information in electrical circuits as binary bits, either in a state of 0 or 1, so there’s a finite amount of data that can be processed. Quantum computer chips, or ‘qubits’ however, can be in the state of 0, 1, or both at the same time–giving quantum computers mind-blowing processing power.

So if we apply this idea of ‘binary’ vs. ‘quantum’ into a human context, could we potentially become quantum thinkers? Quantum thinking would be holistic, and enable the mind to function at a greater level of complexity. It’s an unlimited approach that ‘either/or’ binary thinking simply does not permit. Wouldn’t it be more exciting to break away from these limitations and move to a higher, more innovative level? Things look different when this binary thinking is disrupted. Start by replacing either/or with ‘and’.

We can help achieve gender parity AND we can achieve diversity in other important areas. Pythian can be inclusive, people-focused AND financially strong. Men can be powerful leaders AND feminist.

There is one big exception, one area where it’s either/or: whether you support the status quo of tech’s current ‘bro culture’, or inclusive leadership that embraces the value of multiple perspectives. Those two states cannot co-exist.

As he accepted his award for Diversity Champion at the WCT Gala on April 27, Pythian CEO Paul Vallée made his position clear “To the women who are working hard in high tech, and who are marginalized by bro culture — which is a real problem, we are in the midst of a culture war — I salute you and keep fighting the good fight because we will prevail. To the male leaders that have taken sides in this battle, the Pythia Index will help you keep score, whether you’re on my team [fighting to end bro culture] or the opposite team.”

As Einstein said, “you can’t solve problems with the same thinking used to create them.” And lack of gender diversity in the tech industry is a problem Pythian wants to help solve.

Categories: DBA Blogs

Partition Storage -- 6 : Revisiting Partition HWM

Hemant K Chitale - Fri, 2016-04-29 09:42
After the curious finding in my previous blog post, where a Partition's HighWaterMark was noticeably higher than that for a non-Partitioned Table but then shrunk on a MOVE operation, I am retrying the same rows with a different pattern of INSERT statements.

However, I am still sticking to a single session doing the INSERT (as I don't want ASSM spreading the incoming rows to different non-contiguous blocks).

This is in 12.1.0.2 :
SQL> connect hemant/hemant
Connected.
SQL> create table part_table_3(id_column number(6), data_column varchar2(100))
2 partition by range (id_column)
3 (partition p_100 values less than (101),
4 partition p_200 values less than (201),
5 partition p_300 values less than (301),
6 partition p_400 values less than (401),
7 partition p_max values less than (maxvalue))
8 /

Table created.

SQL> insert into part_table_3 values (51,'Fifty One');

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_3 values (25, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12
13 /

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (55, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end; 2 3 4 5 6 7 8 9 10 11
12 /

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (45, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end; 2 3 4 5 6 7 8 9 10 11
12 /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL>
SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100' 2 3 4
5 /

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 3022

SQL>
SQL> alter table part_table_3 move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/ 2 3 4 5

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 2484

SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_3'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/ 2 3 4 5 6 7 8

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


So, a Row-By-Row Insert still resulted in the HWM being 3,022 and shrinking to 2,484 after a MOVE.



Let's try the same data-set in 11.2.0.4
SQL> connect hemant/hemant
Connected.
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL>
SQL> create table part_table_3(id_column number(6), data_column varchar2(100))
partition by range (id_column)
(partition p_100 values less than (101),
partition p_200 values less than (201),
partition p_300 values less than (301),
partition p_400 values less than (401),
partition p_max values less than (maxvalue))
/

2 3 4 5 6 7 8
Table created.

SQL> SQL> show parameter deferr

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation boolean TRUE
SQL>
SQL> insert into part_table_3 values (51,'Fifty One');

1 row created.

SQL> commit;

Commit complete.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 100000
loop
insert into part_table_3 values (25, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end; 2 3 4 5 6 7 8 9 10 11
12 /

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (55, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end; 2 3 4 5 6 7 8 9 10 11
12 /

PL/SQL procedure successfully completed.

SQL>
SQL> declare
cntr number;
begin
cntr := 0;
while cntr < 500001
loop
insert into part_table_3 values (45, 'New Row') ;
commit;
cntr := cntr + 1;
end loop;
end;
2 3 4 5 6 7 8 9 10 11 12
13 /

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/ 2 3 4 5

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 3022

SQL>
SQL> alter table part_table_3 move partition p_100 ;

Table altered.

SQL> exec dbms_stats.gather_table_stats('','PART_TABLE_3',granularity=>'ALL');

PL/SQL procedure successfully completed.

SQL> select avg_row_len, num_rows, blocks
from user_tab_partitions
where table_name = 'PART_TABLE_3'
and partition_name = 'P_100'
/ 2 3 4 5

AVG_ROW_LEN NUM_ROWS BLOCKS
----------- ---------- ----------
11 1100003 2484

SQL>
SQL> select extent_id, blocks
from dba_extents
where segment_name = 'PART_TABLE_3'
and segment_type = 'TABLE PARTITION'
and partition_name = 'P_100'
and owner = 'HEMANT'
order by 1
/ 2 3 4 5 6 7 8

EXTENT_ID BLOCKS
---------- ----------
0 1024
1 1024
2 1024

SQL>


So, 11.2.0.4 and 12.1.0.2 display the same behaviour for the Partition HWM.  A HWM of 3,022 blocks shrinking to 2,484 blocks.

The next test would be with a larger AVG_ROW_LEN.
.
.
.


Categories: DBA Blogs

Turkish Oracle User Group Conference in Istanbul 2016 #TROUGDays

The Oracle Instructor - Fri, 2016-04-29 01:31

Straight after the Oracle University Expert Summit in Berlin – which was a big success, by the way – the circus moved on to another amazing place: Istanbul!

Istanbul view

The Turkish Oracle User Group (TROUG) held its annual conference in the rooms of the Istanbul Technical University with local and international speakers and a quite attractive agenda.

Do you recognize anyone here?:-)

#TROUGDays speakers

I delivered my presentation “Best of RMAN” again like at the DOAG annual conference last year:

Uwe Hesse speaking in Istanbul

Many thanks to the organizers for making this event possible and for inviting us speakers to dinner

Istanbul speakers dinner

The conference was well received and in my view, it should be possible to attract even more attendees in the coming years by continuing to invite high-profile international speakers

audience

My special thanks to Joze, Yves and Osama for giving me your good company during the conference – even if that company was sometimes very tight during the car rides

Categories: DBA Blogs

A Practitioner’s Assessment: Digital Transformation

Pythian Group - Thu, 2016-04-28 12:49

 

Rohinee Mohindroo is a guest blogger on Pythian Business Insights.

 

trans·for·ma·tion/ noun: a thorough or dramatic change in form or appearance

The digital transformation rage continues into 2016 with GE, AT&T, GM, Domino’s, Flex, and Starbucks, to name a few. So what’s the big deal?

Technical advances continue to progress at a rapid rate. Digital transformation simply refers to the rate at which the technological trends are embraced by an individual, organization or team.

Organizational culture and vocabulary are leading indicators of the digital transformation maturity level.

blogimagerohinee

Level 1: Business vs. Tech (us vs. them). Each party is fairly ignorant of the value and challenges of the other. Each blames the other for failures and takes credit for successes. Technology is viewed as a competency with a mandate to enable the business.

Level 2: Business and Tech (us and them). Each party is aware of the capability and challenges of the other. Credit for success is shared, failure is not discussed publicly or transparently. Almost everyone  is perceived to be technically literate with a desire to deliver business differentiation.

Level 3: Business is Tech (us). Notable awareness of the business model and technology capabilities and opportunities throughout the organization. Success is expected and failure is an opportunity. The organization is relentlessly focused on learning from customers and partners with a shared goal to continually re-define the business.

Which level best describes you or your organization? Please share what inhibits your organization from moving to the next level.

 

Categories: DBA Blogs

Log Buffer #471: A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2016-04-28 09:14

This Log Buffer Edition covers Oracle, SQL Server and MySQL blog posts of the week.

Oracle:

Improving PL/SQL performance in APEX

A utility to extract and present PeopleSoft Configuration and Performance Data

No, Oracle security vulnerabilities didn’t just get a whole lot worse this quarter.  Instead, Oracle updated the scoring metric used in the Critical Patch Updates (CPU) from CVSS v2 to CVSS v3.0 for the April 2016 CPU.  The Common Vulnerability Score System (CVSS) is a generally accepted method for scoring and rating security vulnerabilities.  CVSS is used by Oracle, Microsoft, Cisco, and other major software vendors.

Oracle Cloud – DBaaS instance down for no apparent reason

Using guaranteed restore points to navigate through time

SQL Server:

ANSI SQL with Analytic Functions on Snowflake DB

Exporting Azure Data Factory (ADF) into TFS Source Control

Getting started with Azure SQL Data Warehouse

Performance Surprises and Assumptions : DATEADD()

With the new security policy feature in SQL Server 2016 you can restrict write operations at the row level by defining a block predicate.

MySQL:

How to rename MySQL DB name by moving tables

MySQL 5.7 Introduces a JSON Data Type

Ubuntu 16.04 first stable distro with MySQL 5.7

MariaDB AWS Key Management Service (KMS) Encryption Plugin

MySQL Document Store versus Bug hunter

Categories: DBA Blogs

How to recover space from already deleted files

Pythian Group - Wed, 2016-04-27 13:15

Wait, what? Deleted files are gone, right? Well, not so if they’re currently in use, with an open file handle by an application. In the Windows world, you just can’t touch it, but under Linux (if you’ve got sufficient permissions), you can!

Often in the Systems Administration, and Site Reliability Engineering world, we will encounter a disk space issue being reported, and there’s very little we can do to recover the space. Everything is critically important! We then check for deleted files and find massive amounts of space consumed when someone has previously deleted Catalina, Tomcat, or Weblogic log files while Java had them in use, and we can’t restart the processes to release the handles due to the critical nature of the service. Conundrum!

Here at Pythian, we Love Your Data, so I thought I’d share some of the ways we deal with situations like this.

How to recover

First, we grab a list of PIDs with files still open, but deleted. Then iterate over the open file handles, and null them.

PIDS=$(lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | uniq)
for PID in $PIDS; do ls -l /proc/$PID/fd | grep deleted; done

This could be scripted to automatically null all deleted files, with great care.

Worked example

1. Locating deleted files:

[root@importantserver1 usr]# lsof | head -n 1 ; lsof | grep -i deleted
 COMMAND   PID   USER   FD  TYPE DEVICE SIZE/OFF NODE   NAME
 vmtoolsd  2573  root   7u  REG  253,0  9857     65005  /tmp/vmware-root/appLoader-2573.log (deleted)
 zabbix_ag 3091  zabbix 3wW REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3093  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3094  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3095  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3096  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3097  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 java      23938 tomcat 1w  REG  253,0  0        32155  /opt/log/tomcat/catalina.out (deleted)
 java      23938 tomcat 2w  REG  253,0  45322216 32155  /opt/log/tomcat/catalina.out (deleted)
 java      23938 tomcat 9w  REG  253,0  174      32133  /opt/log/tomcat/catalina.2015-01-17.log (deleted)
 java      23938 tomcat 10w REG  253,0  57408    32154  /opt/log/tomcat/localhost.2016-02-12.log (deleted)
 java      23938 tomcat 11w REG  253,0  0        32156  /opt/log/tomcat/manager.2014-12-09.log (deleted)
 java      23938 tomcat 12w REG  253,0  0        32157  /opt/log/tomcat/host-manager.2014-12-09.log (deleted)
 java      23938 tomcat 65w REG  253,0  363069   638386 /opt/log/archive/athena.log.20160105-09 (deleted)

2. Grab the PIDs:

[root@importantserver1 usr]# lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | uniq
 2573
 3091
 3093
 3094
 3095
 3096
 3097
 23938

Show the deleted files that each process still has open (and is consuming space):

[root@importantserver1 usr]# export PIDS=$(lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | uniq)
[root@importantserver1 usr]# for PID in $PIDS; do ll /proc/$PID/fd | grep deleted; done
 lrwx------ 1 root root 64 Mar 21 21:15 7 -> /tmp/vmware-root/appLoader-2573.log (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 1 -> /opt/log/tomcat/catalina.out (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 10 -> /opt/log/tomcat/localhost.2016-02-12.log (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 11 -> /opt/log/tomcat/manager.2014-12-09.log (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 12 -> /opt/log/tomcat/host-manager.2014-12-09.log (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 2 -> /opt/log/tomcat/catalina.out (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 65 -> /opt/log/archive/athena.log.20160105-09 (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 9 -> /opt/log/tomcat/catalina.2015-01-17.log (deleted)

Null the specific files (here, we target the catalina.out file):

[root@importantserver1 usr]# cat /dev/null > /proc/23938/fd/2
Alternative ending

Instead of deleting the contents to recover the space, you might be in the situation where you need to recover the contents of the deleted file. If the application still has the file descriptor open on it, you can then recover the entire file to another one (dd if=/proc/23938/fd/2 of=/tmp/my_new_file.log) – assuming you have the space to do it!

Conclusion

While it’s best not to get in the situation in the first place, you’ll sometimes find yourself cleaning up after someone else’s good intentions. Now, instead of trying to find a window of “least disruption” to the service, you can recover the situation nicely. Or, if the alternative solution is what you’re after, you’ve recovered a file that you thought was long since gone.

Categories: DBA Blogs

Deploy Docker containers using AWS Opsworks

Pythian Group - Wed, 2016-04-27 12:51
Introduction

This post is about how to deploy Docker containers on AWS using Opsworks and Docker Compose.
For AWS and Docker, an introduction isn’t required. So, let’s quickly introduce Opsworks and Docker Compose.

Opsworks

Opsworks is a great tool provided by AWS, which runs Chef recipes on your Instances. If the instance is an AWS instance, you don’t pay anything for using Opsworks, but you can also manage instances outside of AWS with a flat cost just by installing the Agent and registering the instance on Opsworks.

Opsworks Instances type

We have three different types of instances on Opsworks:

1. 24x7x365
Run with no stop

2. Time based
Run in a predefined time. Such as work hours.

3. Load based
Scale up and down according to the metrics preconfigured.

You can find more details here.

Custom JSON

Opsworks provides Chef Databags (variables to be used in your recipes) via Custom JSON, and that’s the key to this solution. We will manage everything just by changing a JSON file. This file can easily become part of your development pipeline.

Life cycle

Opsworks has five life cycles:
1. Setup
2. Configure
3. Deploy
4. Undeploy
5. Shutdown
We will use setup, deploy, and shutdown. You can find more details about Opsworks life cycle here.

Docker Compose

Docker Compose was originally developed under the Fig project. Nowadays, Fig is deprecated, and docker-compose is a built-in component of Docker.
Using docker-compose, you can manage all containers and their attributes (links, share volumes, etc.) in a Docker host. Docker-compose can only manage containers on the local host where it is deployed. It cannot orchestrate Docker containers between hosts.
All configuration is specified inside of a YML file.

Chef recipes

Using Opsworks, you will manage all hosts using just one small Chef cookbook. All the magic is in translating the Custom JSON from Opsworks into the YML file used by docker-compose.
The cookbook will install all components (Docker, pip, and docker-compose), translate the Custom JSON to a YML file and send commands to docker-compose.

Hands ON

Let’s stop talking and see things happen.

We can split it into five steps:

  1. Resources creation
    1. Opsworks Stack
        1. Log into your AWS account
        2. Go to Services -> Management Tools -> Opsworks
          Accessing Opsworks menu
        3. Click on Add stack (if you already have stacks on Opsworks) or Add your first stack (if it’s the first time you are creating stacks on opsworks)
        4. Select type Chef 12 stack
          Note: The Chef cookbook used in this example only supports Chef12
        5. Fill out stack information
          aws_opsworks_docker_image02
          Note:
          – You can use any name as stack name
          – Make sure VPC selected are properly configured
          – This solution supports Amazon Linux and Ubuntu
          – Repository URL https://bitbucket.org/tnache/opsworks-recipes.git
        6. Click on advanced if you want to change something. Changing “Use OpsWorks security groups” to No can be a good idea when you need to communicate with instances which are running outside of Opsworks
        7. Click on “Add stack”
    2. Opsworks layer
        1. Click on “Add a layer”
        2. Set Name, Short name and Security groups. I will use webserver

      Note:
      Use a simple name because we will use this name in next steps
      The Name web is reserved for AWS internal use

        3. Click on “Add layer”

      aws_opsworks_docker_image03

    3. Opsworks Instance
        1. Click on “Instances” on the left panel
        2. Click on “Add an instance”
        3. Select the size (instance type)
        4. Select the subnet
        5. Click on “Add instance”

      aws_opsworks_docker_image05

  2. Resources configuration
    1. Opsworks stack
        1. Click on “Stack” on the left panel
        2. Click on “Stack Settings”
        3. Click on “Edit”
        4. Find the Custom JSON field and paste the content of the file below

      custom_json_1

      5. Click on “Save”
    2. Opsworks layer
        1. Click on “Layers” on the left panel
        2. Click on “Recipes”
        3. Hit docker-compose and press enter on Setup
        4. Hit docker-compose::deploy and press enter on Deploy
        5. Hit docker-compose::stop and press enter on Shutdown
        6. Click on “Save”

      aws_opsworks_docker_image04

  3. Start
    1. Start instance
        1. Click on start

      aws_opsworks_docker_image06

  4. Tests
    Note: Wait until the instance reaches the online state

      1. Open your browser and you should be able to see It works!
      2. Checking running containers

    aws_opsworks_docker_image07

  5. Management
      1. Change the custom JSON to the file below (See Resources configuration => Opsworks stack)

    custom_json_2

      2. Click on “Deployments” on the left panel
      3. Click on “Run Command”
      4. Select “Execute Recipes” as “Command”
      5. Hit “docker-compose::deploy” as “Recipes to execute”
      6. Click on “Execute Recipes”

    Note: Wait until deployment finish

      7. Checking running containers

    aws_opsworks_docker_image08

Categories: DBA Blogs

Percona Live Data Performance Conference 2016 Retrospective

Pythian Group - Tue, 2016-04-26 07:39

 

Last week the annual Percona Live Data Performance Conference was held in Santa Clara, California. This conference is a great time to catch up with the industry, and be exposed to new tools and methods for managing MySQL and MongoDB.

Sessions

The highlights from this year’s sessions and tutorials centered around a few technologies:

  • The typical sessions for Galera Cluster and Performance Schema are always getting better, along with visualization techniques.
  • Oracle MySQL’s new Document Store blurs the line between RDBMS and NoSQL.
  • Facebook’s RocksDB is getting smaller and faster.
  • ProxySQL, the new proxy kid on the block, promises to address MySQL scalability issues.
  • If security is a concern, which it should be, Hashicorp’s Vault project would be something to look into for managing MySQL secrets or encrypting data in transit.
  • MongoDB was a hot topic as well, with a number of sessions addressing management of environments and design patterns.

I expect to see an influx of articles regarding ProxySQL and MySQL’s Document Store in the next few months.

Networking

The evenings were also great events for networking and socializing, giving attendees the chance to rub shoulders with some of the most successful ‘WebScale’ companies to hear stories from the trenches. Events included the Monday Community Networking Reception and Wednesday’s Game Night.

Thank you to all those who attended the Annual Community Dinner at Pedro’s organized by Pythian on Tuesday night! We had a blast and we hope you did as well.

Community Dinner At Pedro's

Thank you!

Pythian sponsored and provided a great range of sessions this year, and we want to thank all those who stopped by our booth or attended our sessions.

I’d like to give a huge shout-out to Percona for continuing to organize a high-quality MySQL user conference focused on solving some of the toughest technical issues that can be thrown at us, and an equal shout-out to the other sponsors and speakers that play a huge part in making this conference happen.

I am looking forward to what PerconaLive Europe will bring this fall, not to mention what we can expect next year when Percona Live Santa Clara rolls around again.

Categories: DBA Blogs