
Pythian Group

Official Pythian Blog - Love Your Data

EM12c : Login to GUI with the correct password causes authentication failure

Thu, 2015-05-21 16:47

So the other day I was trying to log in to my EM12c R4 environment with the SSA_ADMINISTRATOR user, and I got the error:

“Authentication failed. If problem persists, contact your system administrator”

I was quite sure the password I had was correct, so I tried with the SYSMAN user and got the same error. To verify that the password really was correct, I used the SYSMAN user to log in to the repository database directly, and that worked, so I knew something else was wrong.


SQL> connect sysman/
Enter password:
Connected.

So I went to <gc_inst>/em/EMGC_OMS1/sysman/log/emoms.log and saw the following error:


2015-05-18 21:22:06,103 [[ACTIVE] ExecuteThread: '15' for queue: 'weblogic.kernel.Default (self-tuning)'] ERROR audit.AuditManager auditLog.368 - Could not Log audit data, Error:java.sql.SQLException: ORA-14400: inserted partition key does not map to any partition
ORA-06512: at "SYSMAN.MGMT_AUDIT", line 492
ORA-06512: at "SYSMAN.MGMT_AUDIT", line 406
ORA-06512: at line 1
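
If you need to hunt for the same symptom on your own OMS, a quick grep of the log usually surfaces it (path as above; adjust <gc_inst> to your install):

$ grep -n "ORA-14400" <gc_inst>/em/EMGC_OMS1/sysman/log/emoms.log | tail -5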

This led me to believe that JOB_QUEUE_PROCESSES was set to 0, but that wasn't the case, since it was set to 50. That value is still too low, though, so I bumped it up to 1000 and reran the EM12c repository DBMS Scheduler jobs as per the documentation in note 1498456.1:


SQL> show parameter JOB_QUEUE_PROCESSES

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes integer 50
SQL> alter system set JOB_QUEUE_PROCESSES=1000 scope = both;

System altered.

SQL> show parameter both
SQL> show parameter job

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes integer 1000
SQL> connect / as sysdba
Connected.
SQL> alter system set job_queue_processes = 0;

System altered.

SQL> connect sysman/alyarog1605
Connected.
SQL> exec emd_maintenance.remove_em_dbms_jobs;

PL/SQL procedure successfully completed.

SQL> exec gc_interval_partition_mgr.partition_maintenance;

PL/SQL procedure successfully completed.

SQL> @$OMS_HOME/sysman/admin/emdrep/sql/core/latest/admin/admin_recompile_invalid.sql SYSMAN
old 11: AND owner = upper('&RECOMPILE_REPOS_USER')
new 11: AND owner = upper('SYSMAN')
old 26: dbms_utility.compile_schema(upper('&RECOMPILE_REPOS_USER'),FALSE);
new 26: dbms_utility.compile_schema(upper('SYSMAN'),FALSE);
old 41: WHERE owner = upper('&RECOMPILE_REPOS_USER')
new 41: WHERE owner = upper('SYSMAN')
old 84: AND owner = upper('&RECOMPILE_REPOS_USER')
new 84: AND owner = upper('SYSMAN')
old 104: AND ds.table_owner = upper('&RECOMPILE_REPOS_USER')
new 104: AND ds.table_owner = upper('SYSMAN')

PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.

SQL> connect / as sysdba
Connected.
SQL> alter system set job_queue_processes = 1000;

System altered.

SQL> connect sysman/
Enter password:
Connected.
SQL> exec emd_maintenance.submit_em_dbms_jobs;

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

After this I bounced the OMS, but I still kept getting the same login error. Though the change fixed the scheduler jobs, I was now seeing the following error in emoms.log:


2015-05-18 22:29:09,573 [[ACTIVE] ExecuteThread: '15' for queue: 'weblogic.kernel.Default (self-tuning)'] WARN auth.EMRepLoginFilter doFilter.450 - InvalidEMUserException caught in EMRepLoginFilter: Failed to login using repository authentication for user: SSA_ADMIN
oracle.sysman.emSDK.sec.auth.InvalidEMUserException: Failed to login using repository authentication for user: SSA_ADMIN

So what I did was update the SYSMAN.MGMT_AUDIT_MASTER table and run the procedure MGMT_AUDIT_ADMIN.ADD_AUDIT_PARTITION, as stated in document id 1493151.1:


oracle $ sqlplus

Enter user-name: sysman
Enter password:

SQL> update mgmt_audit_master set prepopulate_days=5 where prepopulate_days is null;

1 row updated.

SQL> select count(1) from mgmt_audit_master where prepopulate_days is null;

COUNT(1)
----------
0

SQL> exec mgmt_audit_admin.add_audit_partition;

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

Once I did this, I was able to log in with all my EM12c administrators without any issues:


oracle@em12cr4.localdomain [emrep] /home/oracle
oracle $ emcli login -username=ssa_admin
Enter password

Login successful

Conclusion

Even though JOB_QUEUE_PROCESSES was not set to 0, its low value was still the cause of the failure. So be careful when setting this parameter, and be sure to follow the latest installation guidelines.

Note: This was originally published on rene-ace.com

Categories: DBA Blogs

Log Buffer #423: A Carnival of the Vanities for DBAs

Wed, 2015-05-20 16:36

This Log Buffer edition covers Oracle, SQL Server and MySQL blog posts from all over the blogosphere!


Oracle:

Hey DBAs: you know you can install and run Oracle Database 12c on different platforms, but if you install it in an Oracle Solaris 11 zone, you can take advantage of some additional benefits.

Here is a video with Oracle VP of Global Hardware Systems Harish Venkat talking with Aaron De Los Reyes, Deputy Director at Cognizant about his company’s explosive growth & how they managed business functions, applications, and supporting infrastructure for success.

Oracle Unified Directory is an all-in-one directory solution with storage, proxy, synchronization and virtualization capabilities. While unifying the approach, it provides all the services required for high-performance enterprise and carrier-grade environments. Oracle Unified Directory ensures scalability to billions of entries. It is designed for ease of installation, elastic deployments, enterprise manageability, and effective monitoring.

Understanding Flash: Summary – NAND Flash Is A Royal Pain In The …

Extracting Oracle data & Generating JSON data file using ROracle.

SQL Server:

It is no good doing some or most of the aspects of SQL Server security right. You have to get them all right, because any effective penetration of your security is likely to spell disaster. If you fail in any of the ways that Robert Sheldon lists and describes, then you can’t assume that your data is secure, and things are likely to go horribly wrong.

How does a column store index compare to a (traditional) row store index with regards to performance?

Learn how to use the TOP clause in conjunction with the UPDATE, INSERT and DELETE statements.

Did you know that scalar-valued, user-defined functions can be used in DEFAULT/CHECK CONSTRAINTs and computed columns?

Tim Smith blogs about how to measure a behavioral streak with SQL Server, an important skill for determining ROI and extrapolating trends.

Pilip Horan lets us know how to run an SSIS Project as a SQL Job.

MySQL:

Encryption is an important component of secure environments. Being an intangible property, security doesn't get enough attention when it comes to describing various systems. "Encryption support" is often the most detail you can get when asking how secure a system is. Other important details are often omitted, but the devil is in the details, as we know. In this post I will describe how we secure backup copies in TwinDB.

The fsfreeze command is used to suspend and resume access to a file system. This allows consistent snapshots to be taken of the filesystem. fsfreeze supports Ext3/4, ReiserFS, JFS and XFS.

Shinguz: Controlling worldwide manufacturing plants with MySQL.

MySQL 5.7.7 was recently released (it is the latest MySQL 5.7, and is the first "RC" or "Release Candidate" release of 5.7), and is available for download.

Upgrading Directly From MySQL 5.0 to 5.6 With mysqldump.

One of the cool new features in 5.7 Release Candidate is Multi Source Replication.

 

Learn more about Pythian's expertise in Oracle, SQL Server and MySQL.

Categories: DBA Blogs

fsfreeze in Linux

Thu, 2015-05-14 10:17

The fsfreeze command is used to suspend and resume access to a file system. This allows consistent snapshots to be taken of the filesystem. fsfreeze supports Ext3/4, ReiserFS, JFS and XFS.

A filesystem can be frozen using the following command:

# /sbin/fsfreeze -f /data

Now if you are writing to this filesystem, the process/command will be stuck. For example, the following command will be stuck in the D (UNINTERRUPTIBLE_SLEEP) state:

# echo "testing" > /data/file
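
If you want to confirm that the writer really is stuck, its process state should show up as D. A minimal check (assuming the echo above is still blocked in another terminal) would be:

# ps -eo pid,stat,cmd | awk '$2 ~ /^D/'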

Only after the filesystem is unfrozen using the following command can it continue:

# /sbin/fsfreeze -u /data

As per the fsfreeze man page, "fsfreeze is unnecessary for device-mapper devices. The device-mapper (and LVM) automatically freezes the filesystem on the device when a snapshot creation is requested."

fsfreeze is provided by the util-linux package in RHEL systems. Along with userspace support, fsfreeze also requires kernel support.
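
To double-check where the userspace side comes from on an RHEL-style box, something like this does the trick:

# rpm -qf $(which fsfreeze)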

For example, in the following case, fsfreeze was used in the ext4 filesystem of an AWS CentOS node:

# fsfreeze -f /mysql
fsfreeze: /mysql: freeze failed: Operation not supported

From strace we found that ioctl is returning EOPNOTSUPP:

fstat(3, {st_dev=makedev(253, 0), st_ino=2, st_mode=S_IFDIR|0755,
st_nlink=4, st_uid=3076, st_gid=1119, st_blksize=4096, st_blocks=8,
st_size=4096, st_atime=2014/05/20-10:58:56,
st_mtime=2014/11/17-01:39:36, st_ctime=2014/11/17-01:39:36}) = 0
ioctl(3, 0xc0045877, 0) = -1 EOPNOTSUPP (Operation not
supported)

From latest upstream kernel source:

static int ioctl_fsfreeze(struct file *filp)
{
	struct super_block *sb = file_inode(filp)->i_sb;

	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;

	/* If filesystem doesn't support freeze feature, return. */
	if (sb->s_op->freeze_fs == NULL)
		return -EOPNOTSUPP;

	/* Freeze */
	return freeze_super(sb);
}

EOPNOTSUPP is returned when a filesystem does not support the feature.

On testing the freeze of ext4 on CentOS with an AWS community AMI, fsfreeze worked fine.

This means that the issue was specific to the kernel of the system. It was found that the AMI used to build the system had a customized kernel without fsfreeze support.
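
When comparing a failing node against a working one, the kernel build is the first thing worth diffing; a simple check (output will obviously vary per system) is:

# uname -r
# rpm -q kernel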

Categories: DBA Blogs

Ingest a Single Table from Microsoft SQL Server Data into Hadoop

Wed, 2015-05-13 15:13
Introduction

This blog describes a best-practice approach to data ingestion from SQL Server into Hadoop. The case scenario is as follows:

  • Single table ingestion (no joins)
  • No partitioning
  • Complete data ingestion (trash old and replace new)
  • Data stored in Parquet format
Pre-requisites

This example has been tested using the following versions:

  • Hadoop 2.5.0-cdh5.3.0
  • Hive 0.13.1-cdh5.3.0
  • Sqoop 1.4.5-cdh5.3.0
  • Oozie client build version: 4.0.0-cdh5.3.0
Process Flow Diagram

[process flow diagram]

Configuration

  • Create the following directory/file structure (one per data ingestion process). For a new ingestion program please adjust the directory/file names as per requirements. Make sure to replace the <table_name> tag with your table name.

<table_name>_ingest
+ hive-<table_name>
    create-schema.hql
+ oozie-properties
    <table_name>.properties
+ oozie-<table_name>-ingest
    + lib
        kite-data-core.jar
        kite-data-mapreduce.jar
        sqljdbc4.jar
    coordinator.xml
    impala_metadata.sh
    workflow.xml
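
As a convenience, the local skeleton above can be created with a few shell commands (the table name here is just a placeholder; adjust to your own):

$ TABLE=my_table
$ mkdir -p ${TABLE}_ingest/hive-${TABLE} ${TABLE}_ingest/oozie-properties ${TABLE}_ingest/oozie-${TABLE}-ingest/lib
$ touch ${TABLE}_ingest/hive-${TABLE}/create-schema.hql ${TABLE}_ingest/oozie-properties/${TABLE}.properties
$ touch ${TABLE}_ingest/oozie-${TABLE}-ingest/coordinator.xml ${TABLE}_ingest/oozie-${TABLE}-ingest/workflow.xml ${TABLE}_ingest/oozie-${TABLE}-ingest/impala_metadata.sh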
  • The ingestion process is invoked using an oozie workflow. The workflow invokes all steps necessary for data ingestion including pre-processing, ingestion using sqoop and post-processing.

    oozie-<table_name>-ingest
    This directory stores all files that are required by the oozie workflow engine. These files should be stored in HDFS for proper functioning of oozie.

    oozie-properties
    This directory stores the <table_name>.properties file. This file stores the oozie variables such as database users, name node details, etc. used by the oozie process at runtime.

    hive-<table_name>
    This directory stores a file called create-schema.hql which contains the schema definition of the HIVE tables. This file is required to be run in HIVE only once.
  • Configure files under oozie-<table_name>-ingest

1. Download kite-data-core.jar and kite-data-mapreduce.jar files from http://mvnrepository.com/artifact/org.kitesdk

2. Download sqljdbc4.jar from https://msdn.microsoft.com/en-us/sqlserver/aa937724.aspx

3. Configure coordinator.xml. Copy and paste the following XML.

<coordinator-app name="<table_name>-ingest-coordinator" frequency="${freq}" start="${startTime}" end="${endTime}" timezone="UTC" xmlns="uri:oozie:coordinator:0.2">
  <action>
    <workflow>
      <app-path>${workflowRoot}/workflow.xml</app-path>
      <configuration>
        <property>
          <name>partition_name</name>
          <value>${coord:formatTime(coord:nominalTime(), 'YYYY-MM-dd')}</value>
        </property>
      </configuration>
    </workflow>
  </action>
</coordinator-app>

4. Configure workflow.xml. This workflow has three actions:

a) mv-data-to-old – Deletes old data before refreshing new
b) sqoop-ingest-<table_name> – Sqoop action to fetch table from SQL Server
c) invalidate-impala-metadata – Revalidate Impala data after each refresh

Copy and paste the following XML.

<workflow-app name="<table_name>-ingest" xmlns="uri:oozie:workflow:0.2">
  <start to="mv-data-to-old" />

  <action name="mv-data-to-old">
    <fs>
      <delete path='${sqoop_directory}/<table_name>/*.parquet' />
      <delete path='${sqoop_directory}/<table_name>/.metadata' />
    </fs>
    <ok to="sqoop-ingest-<table_name>"/>
    <error to="kill"/>
  </action>

  <action name="sqoop-ingest-<table_name>">
    <sqoop xmlns="uri:oozie:sqoop-action:0.3">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <prepare>
        <delete path="${nameNode}/user/${wf:user()}/_sqoop/*" />
      </prepare>
      <configuration>
        <property>
          <name>mapred.job.queue.name</name>
          <value>${queueName}</value>
        </property>
      </configuration>
      <arg>import</arg>
      <arg>--connect</arg>
      <arg>${db_string}</arg>
      <arg>--table</arg>
      <arg>${db_table}</arg>
      <arg>--columns</arg>
      <arg>${db_columns}</arg>
      <arg>--username</arg>
      <arg>${db_username}</arg>
      <arg>--password</arg>
      <arg>${db_password}</arg>
      <arg>--split-by</arg>
      <arg>${db_table_pk}</arg>
      <arg>--target-dir</arg>
      <arg>${sqoop_directory}/<table_name></arg>
      <arg>--as-parquetfile</arg>
      <arg>--compress</arg>
      <arg>--compression-codec</arg>
      <arg>org.apache.hadoop.io.compress.SnappyCodec</arg>
    </sqoop>
    <ok to="invalidate-impala-metadata"/>
    <error to="kill"/>
  </action>

  <action name="invalidate-impala-metadata">
    <shell xmlns="uri:oozie:shell-action:0.1">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <configuration>
        <property>
          <name>mapred.job.queue.name</name>
          <value>${queueName}</value>
        </property>
      </configuration>
      <exec>${impalaFileName}</exec>
      <file>${impalaFilePath}</file>
    </shell>
    <ok to="fini"/>
    <error to="kill"/>
  </action>

  <kill name="kill">
    <message>Workflow failed with error message ${wf:errorMessage(wf:lastErrorNode())}</message>
  </kill>

  <end name="fini" />
</workflow-app>

5. Configure impala_metadata.sh. This file will execute commands to revalidate impala metadata after each restore. Copy and paste the following data.

#!/bin/bash
export PYTHON_EGG_CACHE=./myeggs
impala-shell -i <hive_server> -q "invalidate metadata <hive_db_name>.<hive_table_name>"
  • Configure files under oozie-properties. Create the file oozie.properties with contents as under. Edit the parameters as per requirements.

# Coordinator schedulings
freq=480
startTime=2015-04-28T14:00Z
endTime=2029-03-05T06:00Z

jobTracker=<jobtracker>
nameNode=hdfs://<namenode>
queueName=<queue_name>

rootDir=${nameNode}/user/<user>/oozie
workflowRoot=${rootDir}/<table_name>-ingest

oozie.use.system.libpath=true
oozie.coord.application.path=${workflowRoot}/coordinator.xml

# Sqoop settings
sqoop_directory=${nameNode}/data/sqoop

# Hive/Impala Settings
hive_db_name=<hive_db_name>
impalaFileName=impala_metadata.sh
impalaFilePath=/user/oozie/<table_name>-ingest/impala_metadata.sh

# MS SQL Server settings
db_string=jdbc:sqlserver://<sql_server_host>;databaseName=<sql_server_db_name>
db_username=<sql_server_username>
db_password=<sql_server_password>
db_table=<table_name>
db_columns=<columns>
  • Configure files under hive-<table_name>. Create a new file create-schema.hql with contents as under.

DROP TABLE IF EXISTS <table_name>;

CREATE EXTERNAL TABLE <table_name> (<columns>)
STORED AS PARQUET
LOCATION 'hdfs:///data/sqoop/<table_name>';

Deployment
  • Create new directory in HDFS and copy files
$ hadoop fs -mkdir /user/<user>/oozie/<table_name>-ingest
$ hadoop fs -copyFromLocal <directory>/<table_name>/oozie-<table_name>-ingest/lib /user/<user>/oozie/<table_name>-ingest
$ hadoop fs -copyFromLocal <directory>/<table_name>/oozie-<table_name>-ingest/coordinator.xml /user/<user>/oozie/<table_name>-ingest
$ hadoop fs -copyFromLocal <directory>/<table_name>/oozie-<table_name>-ingest/impala_metadata.sh /user/<user>/oozie/<table_name>-ingest
$ hadoop fs -copyFromLocal <directory>/<table_name>/oozie-<table_name>-ingest/workflow.xml /user/<user>/oozie/<table_name>-ingest
  • Create new directory in HDFS for storing data files
$ hadoop fs -mkdir /user/SA.HadoopPipeline/oozie/<table_name>-ingest
$ hadoop fs -mkdir /data/sqoop/<table_name>
  • Now we are ready to select data in HIVE. Go to URL http://<hive_server>:8888/beeswax/#query.
a. Choose existing database on left or create new.
b. Paste contents of create-schema.hql in Query window and click Execute.
c. You should now have an external table in HIVE pointing to data in hdfs://<namenode>/data/sqoop/<table_name>
  • Create Oozie job
$ oozie job -run -config /home/<user>/<directory>/<table_name>/oozie-properties/oozie.properties

Validation and Error Handling
  • At this point an oozie job should be created. To validate the oozie job creation open URL http://<hue_server>:8888/oozie/list_oozie_coordinators. Expected output as under. In case of error please review the logs for recent runs.
 [screenshot: Oozie coordinators list in Hue]
  • To validate the oozie job is running open URL http://<hue_server>:8888/oozie/list_oozie_workflows/ . Expected output as under. In case of error please review the logs for recent runs.
 [screenshot: Oozie workflows list in Hue]
  • To validate data in HDFS execute the following command. You should see a file with *.metadata extension and a number of files with *.parquet extension.
$ hadoop fs -ls /data/sqoop/<table_name>/
  • Now we are ready to select data in HIVE or Impala.
    For HIVE go to URL http://<hue_server>:8888/beeswax/#query
    For Impala go to URL http://<hue_server>:8888/impala
    Choose the newly created database on the left and execute the following SQL: select * from <hive_table_name> limit 10
    You should see data returned from the newly ingested table.
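
If you prefer the command line over Hue, a roughly equivalent check can be run with beeline or impala-shell (the server names and HiveServer2 port below are assumptions; adjust to your cluster):

$ beeline -u jdbc:hive2://<hive_server>:10000 -e "select * from <hive_db_name>.<hive_table_name> limit 10"
$ impala-shell -i <impala_server> -q "select * from <hive_db_name>.<hive_table_name> limit 10"
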
Categories: DBA Blogs

Log Buffer #422: A Carnival of the Vanities for DBAs

Fri, 2015-05-08 06:54

This Log Buffer Edition picks, chooses, and gleans some of the top-notch blog posts from Oracle, SQL Server and MySQL.

Oracle:

  • The standard images that come with devstack are very basic
  • Oracle is pleased to announce the release of Oracle VM VirtualBox 5.0 BETA 3
  • Monitoring Parallel Execution using Real-Time SQL Monitoring in Oracle Database 12c
  • Accessing your Cloud Integration API end point from Javascript
  • Are You Ready for The Future of Oracle Database?

SQL Server:

  • SQL Monitor Custom Metric: Plan Cache; Cache Pages Total
  • Generating A Password in SQL Server with T-SQL from Random Characters
  • This article explains how default trace can be used for auditing purposes when combined with PowerShell scripts
  • How to FTP a Dynamically Named Flat File
  • Alan Cooper helped to debug the most widely-used PC language of the late seventies and early eighties, BASIC-E, and, with Keith Parsons, developed C-BASIC. He then went on to create Tripod, which morphed eventually into Visual Basic in 1991.

MySQL:

  • There’s a new kid on the block in the NoSQL world – Azure DocumentDB
  • Spring Cleaning in the GIS Namespace
  • MySQL replication is among the top features of MySQL. In replication, data is replicated from one MySQL Server (also known as the Master) to another MySQL Server (also known as the Slave). MySQL Binlog Events is a set of libraries which work on top of replication and open the door to a myriad of use cases, like extracting data from binary log files, building applications to support heterogeneous replication, filtering events from binary log files and much more.
  • New to pstop – vmstat style stdout interface
  • The Perfect Server – Ubuntu 15.04 (Vivid Vervet) with Apache, PHP, MySQL, PureFTPD, BIND, Postfix, Dovecot and ISPConfig 3
Categories: DBA Blogs

OpenTSDB and Google Cloud Bigtable

Wed, 2015-05-06 02:15

Data comes in different shapes. One of these shapes is called a time series. A time series is basically a sequence of data points recorded over time. If, for example, you measure the height of the tide every hour for 24 hours, then you will end up with a time series of 24 data points. Each data point will consist of the tide height in meters and the hour it was recorded at.

Time series are very powerful data abstractions. There are a lot of processes around us that can be described by a simple measurement and the point in time this measurement was taken at. You can discover patterns in your website users' behavior by measuring the number of unique visitors every couple of minutes. This time series will help you discover trends that depend on the time of day, day of the week, seasonal trends, etc. Monitoring a server's health by recording metrics like CPU utilization, memory usage and active transactions in a database at a frequent interval is an approach that all DBAs and sysadmins are very familiar with. The real power of time series is in providing a simple mechanism for different types of aggregations and analytics. It is easy to find, for example, minimum and maximum values over a given period of time, or calculate averages, sums and other statistics.

Building a scalable and reliable database for time series data has been a goal of companies and engineers out there for quite some time. With ever increasing volumes of both human and machine generated data the need for such systems is becoming more and more apparent.

OpenTSDB and HBase

There are different database systems that support time series data. Some of them (like Oracle) provide functionality to work with time series that is built on top of their existing relational storage. There are also some specialized solutions like InfluxDB.

OpenTSDB is somewhere in between these two approaches: it relies on HBase to provide scalable and reliable storage, but implements its own logic layer for storing and retrieving data on top of it.

OpenTSDB consists of a tsd process that handles all read/write requests to HBase and several protocols to interact with tsd. OpenTSDB can accept requests over Telnet or HTTP APIs, or you can use existing tools like tcollector to publish metrics to OpenTSDB.
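
As an illustration, pushing a single data point over the Telnet-style API is a one-liner; the host, metric and tag below are made up, and 4242 is the default TSD port:

$ echo "put sys.cpu.user 1431600000 42 host=web01" | nc opentsdb-host 4242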

OpenTSDB relies on scalability and performance properties of HBase to be able to handle high volumes of incoming metrics. Some of the largest OpenTSDB/HBase installations span over dozens of servers and process ~280k writes per second (numbers from http://www.slideshare.net/HBaseCon/ecosystem-session-6)

There are a lot of different tools that complete the OpenTSDB ecosystem, from various metrics collectors to GUIs. This makes OpenTSDB one of the most popular ways to handle large volumes of time series information, and one of the major HBase use cases as well. The main challenge with this configuration is that you will need to host your own (potentially very large) HBase cluster and deal with all related issues, from hardware procurement to resource management, dealing with Java garbage collection, etc.

OpenTSDB and Google Cloud Bigtable

If you trace HBase ancestry you will soon find out that it all started when Google published a paper on a scalable data storage system called Bigtable. Google has been using Bigtable internally for more than a decade as a back end for its web index, Google Earth and other projects. The publication of the paper initiated the creation of Apache HBase and Apache Cassandra, both very successful open source projects.

Latest release of Bigtable as a publicly available Google Cloud service gives you instant access to all the engineering effort that was put into Bigtable at Google over the years. Essentially, you are getting a flexible, robust HBase-like database that lacks some of the inherited HBase issues, like Java GC stalls. And it’s completely managed, meaning you don’t have to worry about provisioning hardware, handling failures, software installs, etc.

What does it mean for OpenTSDB and time series databases in general? Well, since HBase is built on Bigtable foundation it is actually API compatible with Google Cloud Bigtable. This means that your applications that work with HBase could be switched to work with Bigtable with minimal effort. Be aware of some of the existing limitations though. Pythian engineers are working on integrating OpenTSDB to work with Google Cloud Bigtable instead of HBase and we hope to be able to share results with the community shortly. Having Bigtable as a back end for OpenTSDB opens a lot of opportunities. It will provide you with a managed cloud-based time-series database, which can be scaled on demand and doesn’t require much maintenance effort.

There are some challenges that we have to deal with, especially around the client that OpenTSDB uses to connect to HBase. OpenTSDB uses its own implementation of the HBase client called AsyncHBase. It is compatible on a wire protocol level with HBase 0.98, but uses a custom async Java library to allow for asynchronous interaction with HBase. This custom implementation allows OpenTSDB to perform HBase operations much faster than using the standard HBase client.

While HBase API 1.0.0 introduced some asynchronous behavior using BufferedMutator it is not a trivial task to replace AsyncHBase with a standard HBase client, because it is tightly coupled with the rest of OpenTSDB code. Pythian engineers are working on trying out several ideas on how to make the transition to standard client look seamless from an OpenTSDB perspective. Once we have a standard HBase client working, connecting OpenTSDB to Bigtable should be simple.

Stay tuned.

Categories: DBA Blogs

OEM 12c Silent Installation

Mon, 2015-05-04 13:17

“What’s for lunch today?”, said the newly born ready to run Red Hat 6.4 server.

"Well, I have an outstanding 3-course meal of OEM12c installation.
For the appetizer, a light and crispy ASM 12c,
DB 12c with patching for the main, and for dessert, to cover everything up, OEM 12c setup and configuration", replied the DBA, who was really happy to prepare such a great meal for his new friend.

“Ok, let’s start cooking, it won’t take long”, said the DBA and took all his cookware (software), prepared ingredients (disk devices) and got the grid infrastructure cooked:

./runInstaller -silent \
-responseFile /home/oracle/install/grid/response/grid_install.rsp -showProgress \
INVENTORY_LOCATION=/u01/app/oraInventory \
SELECTED_LANGUAGES=en \
oracle.install.option=HA_CONFIG \
ORACLE_BASE=/u01/app/oracle \
ORACLE_HOME=/u01/app/oracle/product/12.1.0/grid \
oracle.install.asm.OSDBA=dba \
oracle.install.asm.OSASM=dba \
oracle.install.crs.config.storageOption=LOCAL_ASM_STORAGE \
oracle.install.asm.SYSASMPassword=sys_pwd \
oracle.install.asm.diskGroup.name=DATA \
oracle.install.asm.diskGroup.redundancy=EXTERNAL \
oracle.install.asm.diskGroup.AUSize=4 \
oracle.install.asm.diskGroup.disks=/dev/asm-disk1 \
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asm* \
oracle.install.asm.monitorPassword=sys_pwd \
oracle.install.config.managementOption=NONE

And added some crumbs:

/u01/app/oracle/product/12.1.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/tmp/asm.rsp
where /tmp/asm.rsp had:
oracle.assistants.asm|S_ASMPASSWORD=sys_pwd
oracle.assistants.asm|S_ASMMONITORPASSWORD=sys_pwd

“It was a great starter”, said the server finishing the first dish,

“I am getting even more hungry. What’s for the main?”.

“Oh, you will love it! It is Database 12c. It is one of these new meals and it is already very popular”, answered the DBA enthusiastically and continued cooking.

“Looking forward to trying it”, the server decided to have a nap until the dish was ready.

“You asked, you got it”, and the DBA gave the server the dish he never tried:

./runInstaller -silent -showProgress \
-responseFile /home/oracle/install/database/response/db_install.rsp \
oracle.install.option=INSTALL_DB_SWONLY \
ORACLE_BASE=/u01/app/oracle \
ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 \
oracle.install.db.InstallEdition=EE oracle.install.db.DBA_GROUP=dba \
oracle.install.db.BACKUPDBA_GROUP=dba \
oracle.install.db.DGDBA_GROUP=dba \
oracle.install.db.KMDBA_GROUP=dba \
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
DECLINE_SECURITY_UPDATES=true

The topping ingredient was of course a brand new database:

./dbca -silent -createDatabase -gdbName em12 \
-templateName General_Purpose.dbc \
-emConfiguration none \
-sysPassword sys_pwd \
-systemPassword sys_pwd \
-storageType ASM \
-asmsnmpPassword sys_pwd \
-diskGroupName DATA \
-redoLogFileSize 100 \
-initParams log_buffer=10485760,processes=500,\
session_cached_cursors=300,db_securefile=PERMITTED \
-totalMemory 2048

“Delicious! That’s what I dreamt of! Where did you find it?”, the server could not hide his admiration.

"Well, you have not tried dessert yet. When you have it, you will forget all those dishes that you had before."

“Hmm, you intrigue me. Definitely I will have it!”

"Anything for you, my friend", and the DBA cooked his famous, rich and delicious dessert:

./runInstaller -silent \
-responseFile /home/oracle/install/em/response/new_install.rsp \
-staticPortsIniFile /tmp/ports.ini \
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
DECLINE_SECURITY_UPDATES=true \
ORACLE_MIDDLEWARE_HOME_LOCATION=/u01/em12 \
AGENT_BASE_DIR=/u01/agent12c \
WLS_ADMIN_SERVER_USERNAME=weblogic \
WLS_ADMIN_SERVER_PASSWORD=Sun03day03 \
WLS_ADMIN_SERVER_CONFIRM_PASSWORD=Sun03day03 \
NODE_MANAGER_PASSWORD=Sun03day03 \
NODE_MANAGER_CONFIRM_PASSWORD=Sun03day03 \
ORACLE_INSTANCE_HOME_LOCATION=/u01/gc_inst \
CONFIGURE_ORACLE_SOFTWARE_LIBRARY=true \
SOFTWARE_LIBRARY_LOCATION=/u01/sw_lib \
DATABASE_HOSTNAME=oem12c.home \
LISTENER_PORT=1521 \
SERVICENAME_OR_SID=em12 \
SYS_PASSWORD=sys_pwd \
SYSMAN_PASSWORD=Sun03day03 \
SYSMAN_CONFIRM_PASSWORD=Sun03day03 \
DEPLOYMENT_SIZE="SMALL" \
MANAGEMENT_TABLESPACE_LOCATION="+DATA" \
CONFIGURATION_DATA_TABLESPACE_LOCATION="+DATA" \
JVM_DIAGNOSTICS_TABLESPACE_LOCATION="+DATA" \
AGENT_REGISTRATION_PASSWORD=Sun03day03 \
AGENT_REGISTRATION_CONFIRM_PASSWORD=Sun03day03

“You made my day!” exclaimed the server when nothing was left on his plate.

"Anytime, my friend!" smiled the DBA in response.

He was as happy as any chef that the cooking went the way it was planned and the final product was just as the recipe had said.
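
And if you want to make sure the dessert settled well, a quick status check against the OMS home under the middleware location configured above would be something like:

/u01/em12/oms/bin/emctl status oms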

Have a good day!

Categories: DBA Blogs

Log Buffer #421: A Carnival of the Vanities for DBAs

Mon, 2015-05-04 11:29

As always, this fresh Log Buffer Edition shares some of the unusual yet innovative and information-rich blog posts from across the realms of Oracle, SQL Server and MySQL.

Oracle:

A developer reported problems when running a CREATE OR REPLACE TYPE statement in a development database. It was failing with an ORA-00604 followed by an ORA-00001. These messages could be seen again and again in the alert log.

  • Few Random Solaris Commands : intrstat, croinfo, dlstat, fmstat for Oracle DBA
  • When to use Oracle Database In-Memory?
  • Oracle Linux and Oracle VM at EMCWorld 2015
  • SQLcl connections – Lazy mans SQL*Net completion

SQL Server:

  • SQL Server expert Wayne Sheffield looks into the new T-SQL analytic functions coming in SQL Server 2012.
  • The difference between the CONCAT function and the STUFF function lies in the fact that CONCAT allows you to append a string value at the end of another string value, whereas STUFF allows you insert or replace a string value into or in between another string value.
  • After examining the SQLServerCentral servers using the sp_Blitz™ script, Steve Jones now looks at how we will use the script moving forward.
  • Big data applications are not usually considered mission-critical: while they support sales and marketing decisions, they do not significantly affect core operations such as customer accounts, orders, inventory, and shipping. Why, then, are major IT organizations moving quickly to incorporating big data in their disaster recovery plans?
  • There are no more excuses for not having baseline data. This article introduces a comprehensive Free Baseline Collector Solution.

MySQL:

  • MariaDB 5.5.43 now available
  • Testing MySQL with “read-only” filesystem
  • There are tools like pt-kill from the percona tool kit that may print/kill the long running transactions at MariaDB, MySQL or at Percona data instances, but a lot of backup scripts are just some simple bash lines.
  • Optimizer hints in MySQL 5.7.7 – The missed manual
  • Going beyond 1.3 MILLION SQL Queries/second
Categories: DBA Blogs

Quick Tip : Oracle User Ulimit Doesn’t Reflect Value on /etc/security/limits.conf

Mon, 2015-05-04 11:21

So the other day I was trying to do a fresh installation of a new Oracle EM12cR4 in a local VM, and as I was doing it with DB 12c, I decided to use the Oracle preinstall RPM to ease the installation of the OMS repository database. Also, I was doing both the repository and the EM12c OMS install in the same VM; that is important to know.

[root@em12cr4 ~]# yum install oracle-rdbms-server-12cR1-preinstall -y

I was able to install the DB without any issues, but when I was trying to do the installation of EM12cR4, an error in the pre-requisites popped up:

WARNING: Limit of open file descriptors is found to be 1024.

For proper functioning of OMS, please set “ulimit -n” to be at least 4096.

And when I checked the soft limit for open files for the oracle user, it was indeed set to 1024:

oracle@em12cr4.localdomain [emrep] ulimit -n
1024

So if you have been working with Oracle DBs for a while, you know that this has to be checked and modified in /etc/security/limits.conf, but to my surprise the limit had already been set correctly for the oracle user, to at least 4096:

[root@em12cr4 ~]# cat /etc/security/limits.conf | grep -v "#" | grep  nofile
oracle   soft   nofile   4096
oracle   hard   nofile   65536

So my next train of thought was to verify the user's bash profile settings, as ulimits set there can override limits.conf. But again, to my surprise, there was nothing in there, and that is where I was perplexed:

[oracle@em12cr4 ~]# cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH

So what I did next was open a root terminal and do a trace of the login of the Oracle user:

[root@em12cr4 ~]# strace -o loglimit su - oracle

And in another terminal I verified what the user was reading regarding the user limits, and this is where I hit the jackpot. I was able to see that it was reading pam_limits.so and /etc/security/limits.conf as it should, but it was also reading another configuration file called oracle-rdbms-server-12cR1-preinstall.conf (does this look familiar to you? :) ), and as you can see, RLIMIT_NOFILE was being set to 1024:

[root@em12cr4 ~]# grep "limit" loglimit
getrlimit(RLIMIT_STACK, {rlim_cur=10240*1024, rlim_max=32768*1024}) = 0
open("/lib64/security/pam_limits.so", O_RDONLY) = 6
...
open("/etc/security/limits.conf", O_RDONLY) = 3
read(3, "# /etc/security/limits.conf\n#\n#E"..., 4096) = 2011
open("/etc/security/limits.d", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
open("/etc/security/limits.d/90-nproc.conf", O_RDONLY) = 3
read(3, "# Default limit for number of us"..., 4096) = 208
open("/etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf", O_RDONLY) = 3
setrlimit(RLIMIT_STACK, {rlim_cur=10240*1024, rlim_max=32768*1024}) = 0
setrlimit(RLIMIT_NPROC, {rlim_cur=16*1024, rlim_max=16*1024}) = 0
setrlimit(RLIMIT_NOFILE, {rlim_cur=1024, rlim_max=64*1024}) = 0

So I went ahead and checked the file /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf and evidently, that is where the limit was set to 1024, so the only thing I did was change the value there to 4096:

[root@em12cr4 ~]# cat /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf | grep -v "#" | grep nofile
oracle   soft   nofile    1024
oracle   hard   nofile    65536
[root@em12cr4 ~]# vi /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf
[root@em12cr4 ~]# cat /etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf | grep -v "#" | grep nofile
oracle   soft   nofile    4096
oracle   hard   nofile    65536

Once I made that change and logged out and back in, I saw the values I had set in the first place in /etc/security/limits.conf, and I was able to proceed with the installation of EM12cR4:

oracle@em12cr4.localdomain [emrep] ulimit -n
4096

Conclusion

So when you install the oracle-rdbms-server-12cR1-preinstall RPM, be aware that if you later change any user limits, there might be another configuration file setting values other than the ones desired and set in /etc/security/limits.conf.
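
A quick way to spot every file that touches a given limit, before reaching for strace, is something like:

[root@em12cr4 ~]# grep -r nofile /etc/security/limits.conf /etc/security/limits.d/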

Note: This was originally published on rene-ace.com

Categories: DBA Blogs

Thanks Oracle for R12.AD.C.DELTA.6

Wed, 2015-04-29 06:18

When reading through the release notes of the latest Oracle E-Business Suite R12.2 AD.C.Delta.6 patch in note 1983782.1, I wondered what they meant by "Simplification and enhancement of adop console messages". I realized what I was missing after I applied the AD.C.Delta6 patch: the format of the console messages changed drastically. To be honest, the old console messages printed by the adop command reminded me of a program where somebody forgot to turn off the debug feature. The old adop console messages are simply not easily readable and look more like debug messages. AD.C.Delta6 brought in a fresh layout to the console messages; they're now more readable and easy to follow. You can see for yourself by looking at the snippets below:

### AD.C.Delta.5 ###

$ adop phase=apply patches=19197270 hotpatch=yes

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

 Please wait. Validating credentials...


RUN file system context file: /u01/install/VISION/fs2/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml

PATCH file system context file: /u01/install/VISION/fs1/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
Execute SYSTEM command : df /u01/install/VISION/fs1

************* Start of  session *************
 version: 12.2.0
 started at: Fri Apr 24 2015 13:47:58

APPL_TOP is set to /u01/install/VISION/fs2/EBSapps/appl
[START 2015/04/24 13:48:04] Check if services are down
  [STATEMENT]  Application services are down.
[END   2015/04/24 13:48:09] Check if services are down
[EVENT]     [START 2015/04/24 13:48:09] Checking the DB parameter value
[EVENT]     [END   2015/04/24 13:48:11] Checking the DB parameter value
  Using ADOP Session ID from currently incomplete patching cycle
  [START 2015/04/24 13:48:23] adzdoptl.pl run
    ADOP Session ID: 12
    Phase: apply
    Log file: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_134739.log
    [START 2015/04/24 13:48:30] apply phase
        Calling: adpatch  workers=4   options=hotpatch     console=no interactive=no  defaultsfile=/u01/install/VISION/fs2/EBSapps/appl/admin/VISION/adalldefaults.txt patchtop=/u01/install/VISION/fs_ne/EBSapps/patch/19197270 driver=u19197270.drv logfile=u19197270.log
        ADPATCH Log directory: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/19197270/log
        [EVENT]     [START 2015/04/24 13:59:45] Running finalize since in hotpatch mode
        [EVENT]     [END   2015/04/24 14:00:10] Running finalize since in hotpatch mode
          Calling: adpatch options=hotpatch,nocompiledb interactive=no console=no workers=4 restart=no abandon=yes defaultsfile=/u01/install/VISION/fs2/EBSapps/appl/admin/VISION/adalldefaults.txt patchtop=/u01/install/VISION/fs2/EBSapps/appl/ad/12.0.0/patch/115/driver logfile=cutover.log driver=ucutover.drv
          ADPATCH Log directory: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/log
        [EVENT]     [START 2015/04/24 14:01:32] Running cutover since in hotpatch mode
        [EVENT]     [END   2015/04/24 14:01:33] Running cutover since in hotpatch mode
      [END   2015/04/24 14:01:36] apply phase
      [START 2015/04/24 14:01:36] Generating Post Apply Reports
        [EVENT]     [START 2015/04/24 14:01:38] Generating AD_ZD_LOGS Report
          [EVENT]     Report: /u01/install/VISION/fs2/EBSapps/appl/ad/12.0.0/sql/ADZDSHOWLOG.sql

          [EVENT]     Output: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_134739/VISION_ebs/adzdshowlog.out

        [EVENT]     [END   2015/04/24 14:01:42] Generating AD_ZD_LOGS Report
      [END   2015/04/24 14:01:42] Generating Post Apply Reports
    [END   2015/04/24 14:01:46] adzdoptl.pl run
    adop phase=apply - Completed Successfully

    Log file: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_134739.log

adop exiting with status = 0 (Success)
### AD.C.Delta.6 ###

$ adop phase=apply patches=19330775 hotpatch=yes

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

Validating credentials...

Initializing...
    Run Edition context  : /u01/install/VISION/fs2/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
    Patch edition context: /u01/install/VISION/fs1/inst/apps/VISION_ebs/appl/admin/VISION_ebs.xml
Reading driver file (up to 50000000 bytes).
    Patch file system freespace: 181.66 GB

Validating system setup...
    Node registry is valid.
    Application services are down.
    [WARNING]   ETCC: The following database fixes are not applied in node ebs
                  14046443
                  14255128
                  16299727
                  16359751
                  17250794
                  17401353
                  18260550
                  18282562
                  18331812
                  18331850
                  18440047
                  18689530
                  18730542
                  18828868
                  19393542
                  19472320
                  19487147
                  19791273
                  19896336
                  19949371
                  20294666
                Refer to My Oracle Support Knowledge Document 1594274.1 for instructions.

Checking for pending adop sessions...
    Continuing with the existing session [Session id: 12]...

===========================================================================
ADOP (C.Delta.6)
Session ID: 12
Node: ebs
Phase: apply
Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/adop_20150424_140643.log
===========================================================================

Applying patch 19330775 with adpatch utility...
    Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/19330775/log/u19330775.log

Running finalize actions for the patches applied...
    Log: @ADZDSHOWLOG.sql "2015/04/24 14:15:09"

Running cutover actions for the patches applied...
    Spawning adpatch parallel workers to process CUTOVER DDLs in parallel
    Log: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/log/cutover.log
    Performing database cutover in QUICK mode

Generating post apply reports...

Generating log report...
    Output: /u01/install/VISION/fs_ne/EBSapps/log/adop/12/apply_20150424_140643/VISION_ebs/adzdshowlog.out

adop phase=apply - Completed Successfully


adop exiting with status = 0 (Success)

So what are you waiting for, fellow Apps DBAs? Go ahead and apply the new AD delta update to your R12.2 EBS instances. I am really eager to try out the other AD.C.Delta6 new features, especially "Online Patching support for single file system on development or test systems".
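
As a side note, if you want to confirm which AD and TXK code levels an instance is on before and after applying the delta, a commonly used check is a query against AD_TRACKABLE_ENTITIES (run as APPS; the password below is a placeholder):

$ sqlplus -s apps/<apps_password> <<EOF
select abbreviation, codelevel from apps.ad_trackable_entities where abbreviation in ('ad','txk');
EOF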

Categories: DBA Blogs

Weblogic patch rollback issues in Oracle EBS R12.2

Tue, 2015-04-28 06:36

When you try to rollback a weblogic patch in Oracle EBS R12.2, you might run into issues similar to below:

$ ./bsu.sh -remove -patchlist=YIJF -prod_dir=/u01/install/VISION/fs2/FMW_Home/wlserver_10.3 -verbose
Checking for conflicts..
No conflict(s) detected

Starting removal of Patch ID: YIJF
Restoring /d1/V1223B5B/fs1/FMW_Home/modules/com.bea.core.datasource6.binding_1.10.0.0.jar
from /u01/install/VISION/fs2/FMW_Home/patch_wls1036/backup/backup.jar
Result: Failure
Failure condition follows:
An error occured while removing patches – run with logging to obtain detailed information
java.io.FileNotFoundException: /d1/V1223B5B/fs1/FMW_Home/modules/com.bea.core.datasource6.binding_1.10.0.0.jar (No such file or directory)
Result: Failure
Failure condition follows:
An error occured while removing patches – run with logging to obtain detailed information
java.io.FileNotFoundException: /d1/V1223B5B/fs1/FMW_Home/modules/com.bea.core.datasource6.binding_1.10.0.0.jar (No such file or directory)

If you observe the errors, bsu is trying to restore the jar file to a location that does not exist on the current filesystem. This error was captured in a VISION instance created from an Oracle VM Template, so you can see that the file path seems to be a path used by the Oracle internal system where the VM template was created. There is not much harm caused by the above issue to the instance.

Now consider this hypothetical situation, where you are trying to rollback a weblogic patch in the patch fs (fs2) that was previously applied to fs1 in a production Oracle E-Business Suite R12.2 instance. The patch history in the patch fs (fs2) will still have the file paths of fs1. So when you rollback a patch, it will restore the files to the RUN fs (fs1). This can cause havoc and a big outage for online users.

As of now, the bsu utility doesn't seem to handle this kind of situation. So, all Oracle EBS Apps DBAs out there, make sure to check the below file for correct paths before you try to rollback any weblogic patch:

$ vi $FMW_HOME/patch_wls1036/registry/patch-backup.xml
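
A quick sanity check before rolling anything back is to list the absolute paths recorded in that registry file and make sure they all live under the current file system. A rough sketch (assuming the backup locations appear as plain paths in the XML) would be:

$ grep -o '/[^"<]*FMW_Home[^"<]*' $FMW_HOME/patch_wls1036/registry/patch-backup.xml | sort -u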

A permanent solution to this problem would be to update the bsu patch utility so that it does not restore or touch files outside the weblogic home. Hopefully Oracle Development notices this issue before it affects some customer's EBS production instance.

Categories: DBA Blogs

Log Buffer #420: A Carnival of the Vanities for DBAs

Mon, 2015-04-27 06:05

This Log Buffer Edition brings a few of the very insightful blog posts from the arena of Oracle, SQL Server and MySQL.

Oracle:

  • How to Increase Performance With Business Events in Fusion Applications
  • Advanced Oracle Troubleshooting Guide – Part 12: control file parallel reads causing enq: SQ – contention waits?
  • Changing REVERSE Transformations in Oracle Data Miner
  • Refresh Multiple Materialized Views in One Go: No Data Found
  • Error deploying Oracle Composite with wrong encoding wsdl

SQL Server:

  • How to Recover a SQL Server Login Password
  • Understanding Cross-Database Transactions in SQL Server
  • Adding Slicers to a Reporting Services Report
  • Continue a Foreach loop after an error in a SQL Server Integration Services package
  • Automating Image-Based Deployment of SQL Server on Azure IaaS VMs – Preparing OS Image

MySQL:

  • What Should I Monitor, and How Should I Do It?
  • How will MySQL act if there is no space left on the hard disk? To answer this question, let's test it.
  • Configuring PAM Authentication and User Mapping with MariaDB
  • MySQL Enterprise Database Firewall — Control and Monitor SQL Statement Executions
  • MariaDB : Bug when add index on Partition table
Categories: DBA Blogs

Pillars of PowerShell: Profiling

Fri, 2015-04-24 06:53
Introduction

This is the fourth blog post continuing the series on the Pillars of PowerShell. The previous posts in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
Profiles

This is something I mentioned in the second post, and it can be a great way to keep up with those one-liners you use most often in your work. A profile in PowerShell is like using start up scripts in an Active Directory environment. You can "pre-run" things on a domain computer at start up or when a user logs into the machine. In a PowerShell profile you can "pre-load" information, modules, custom functions, or any command you want to execute in the PowerShell console. There is a separate profile for the console and another for the PowerShell ISE. Your profile is basically a PowerShell script saved into a specific location under your Documents folder. The path to this profile is kept within a system variable, aptly named $PROFILE.

Output of the $PROFILE variable

I am using a Windows Azure VM that I just built, so I have not created any profiles on this machine. The path is kept within this variable but that does not mean it actually exists. We will need to create this file and the easiest method to do this is to actually use a cmdlet, New-Item. You can use this cmdlet to create files or folders. You can execute this one-liner to generate the PowerShell script in the path shown above:

New-Item $PROFILE -ItemType File -Force

Now, from here you can use another cmdlet to open the file within the default editor set to open any “.ps1″ file on your machine, Invoke-Item. This might be Notepad or you can set it to be the PowerShell ISE as well. Just execute this cmdlet followed by the $PROFILE variable (e.g. Invoke-Item $PROFILE).

One of the things I picked up on when I started using my profile more often was that you can actually format your console. More specifically, I like to shorten the "PS C:\Users\melton_admin" portion. If you start working in directories that are 3 or 4 folders deep, this can take up a good portion of your prompt. I came across a function whose original poster I truthfully cannot find, so sorry for not attributing it.

function prompt
{
    if($host.UI.RawUI.CursorPosition.Y -eq 0) {
        "<$pwd> `n`r" + "PS["+$host.UI.RawUI.CursorPosition.Y+"]> "
    } else {
        "PS["+$host.UI.RawUI.CursorPosition.Y+"]> "
    }
}

Any function you save in your profile that performs an action can be called anytime in the PowerShell console, once it is loaded. However, if I want that action to take effect when the profile loads, I simply need to call the function at the end of the profile script. I just add these two lines and ensure they are always the last two lines of my profile; anything added will go between the function above and these two lines:

prompt;
clear;

I use the clear command (just like using cls in the DOS prompt) to get rid of any output a function or command in my profile may cause; it just starts me out on a fresh, clean slate.

If you want to test your profile script, you can force it to load into your current session by doing this: .$profile. That is, enter "period $profile" and just hit enter. Take note that since I use the clear command in my profile, if any cmdlet or one-liner I add outputs an error, you will not see it. So when I have issues like this I simply comment the line out of my profile. You can put comments into your script using the pound sign (#); putting that before a command will cause it to be ignored and not run.

Set-ExecutionPolicy

PowerShell is secure by default, so in certain operating system environments, when you try to run your profile script above you may get an error like this:

[screenshot: execution policy error message]

This means pretty much what it says: execution of scripts is disabled. To enable this you need to use the Set-ExecutionPolicy cmdlet with a few parameters. You can find the documentation for this by looking at "about_Execution_Policies" in PowerShell or by following the link in the error. The documentation will explain the various options and policies you can set. The command below will allow you to execute scripts in your console and let it load your profile scripts:

Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned
Summary

In this post I pointed out the following cmdlets and concepts:

  • New-Item
  • Invoke-Item
  • Commenting in your script
  • Set-ExecutionPolicy

These are fairly basic areas of PowerShell, and putting each one into your favorite search engine should lead you to a plentiful list of reading material. This post by no means encompasses everything you can do with profiles, but was simply meant to get you started, and hopefully excited about what can be done.

Categories: DBA Blogs

Keeping Cassandra Alive

Mon, 2015-04-20 12:28

Troubleshooting Cassandra under pressure

This is the second blog post in the series, and it is a bit more technical than the first one. I will explain some things that can be done to keep a cluster/server running when you are having problems in that cluster.

There were a lot of changes in Cassandra over the last 4 years (from 0.8 to 2.1), so I will refrain from discussing troubleshooting problems that affect some specific versions. Also, this is the kind of troubleshooting you need when you can’t “add a node”.

Why can't I just add a node? Well, if you aren't on vnodes, and you didn't pre-calculate the token ranges, adding a node is a big effort. Other constraints may also apply, like budget or delivery time for hardware (if you are on bare metal), plus rack capacity, power constraints, etc.

Now you may say:

“Ok, we can’t add a node! What should we do?! We have a storm coming!”

So, I have navigated through that kind of storm. It's not an easy task, but it's doable! First thing: you have to know what you have, that is critical! You also need to know where you can take more damage.

Let’s assume you have the following situation, and what I recommend for it:

  • Heavy Write Cluster, Low Read

Now let's define "storm": a storm is not when Cassandra fails; it's an unanticipated load increase or a disaster. What happens is that you have more load than your planned capacity (either because of failure of nodes or because of a sudden traffic increase). This will increase your resource usage to a point where your machines will start to die.

Let's understand what can cause a Cassandra process to die, and probably the machine with it (if you OOM and you didn't configure swap… I warned you!), for the scenario described above.

  1. More data to the commitlog = more I/O pressure (Discard if you have commitlog on a different HDD)
  2. Data is written to memtables = Memory is used
  3. Memtables reach thresholds faster, get flushed to disk = I/O pressure
  4. Compaction starts faster and frequently = I/O pressure, CPU pressure
  5. Too much I/O: compaction can't compact fast enough and the memtables aren't flushing fast enough = Memory not being released.
  6. Too much memory usage, JVM triggers GC more frequently = CPU pressure
  7. JVM can’t release memory = OOM
  8. OOM = PUM! Node dies (if you are “lucky” kernel will kill Cassandra)

And I didn't even go through the hints that would be stored as nodes become unresponsive and sent out once they get back online.

So now we know where our pain points are. Let’s understand them and see what we can do about it:

  • Commitlog – Let's just assume you have this on a separate HDD, and don't do anything about it (after all, it's your safeguard).
  • Memtables – We can control how often they are flushed. It is a possible tweak point, although it requires a Cassandra restart for the changes to take effect.
  • Compaction – This we can control via nodetool; in later versions we can even disable it (see the sketch after this list).
  • JVM GC – We can change settings, but they are difficult to tweak and a restart is needed.
  • Swap – We can play a bit here if we do have a swap partition.
  • Dirty_ratio – How often the data is actually written to the HDD. This can put your data at risk, but it can also help.
  • Replication Factor – This can be changed on the fly and will help by putting less pressure on the nodes.
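
For example, the compaction and dirty_ratio knobs above can be poked at runtime with something along these lines (option names vary between Cassandra versions, so treat this as a sketch rather than a recipe):

# pause compaction entirely (newer versions; undo with enableautocompaction)
nodetool disableautocompaction
# or throttle compaction I/O to roughly 8 MB/s (0 means unthrottled)
nodetool setcompactionthroughput 8
# let dirty pages sit in memory longer before hitting the disks (trades durability for I/O headroom)
sysctl -w vm.dirty_ratio=80
sysctl -w vm.dirty_background_ratio=50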

So, what to do? Where to start? It depends on a case-by-case scenario. I would probably let my read performance suffer to keep the writes coming in. To allow that, the easiest way would be making the reads CL = ONE; that sometimes looks like the fast and easy option. But if your writes are not using QUORUM and/or you have read_repair, you will spread more writes (and RF > 1). I would pick compaction as my first target; you can always try to get it back up to pace later (re-enable it, increase compaction throughput). Another option would be to increase dirty_ratio and risk losing data (trusting the commitlogs + RF > 1 helps avoid losing data), but this will give your HDDs more room until the cluster recovers.

But every case is different. I will talk about my own case, problems and resolutions this Wednesday at the Datastax Day in London! Feel free to join me!

 

Categories: DBA Blogs

Pillars of PowerShell: Debugging

Mon, 2015-04-20 12:09
Introduction

This is the third blog post continuing the series on the Pillars of PowerShell. The first two posts are:

  1. Interacting
  2. Commanding

We are going to take a bit of a jump and dig into a particular topic that I think is better to go over up front, instead of later. In this post I want to go over a few ways you can debug scripts, or just issues, in PowerShell. This is a topic that can get very advanced and make for a very long blog post. Instead of trying to put all of that in one post, I will share a few links at the end that point you to some of the deeper-dive material on debugging.

Pillar 3: Debugging

When it comes to writing scripts or developing T-SQL procedures you will generally see folks use print statements to either check where the processing is at in the script, or output some “additional” information. PowerShell is no different, and it offers cmdlets that let you write output to various destinations and even use it to make decisions. One of the main ones I like to use when I write scripts is Write-Verbose. You may see some folks use Write-Host in their scripts, and all I can say to that is, “be kind to puppies”. The basic gist of it is that Write-Host outputs plain text, and will always output that text unless you comment it out or remove it from your script. With Write-Verbose you can have that information output only when a parameter switch is used, rightly called -Verbose. This switch is included in most built-in cmdlets for modules provided by Microsoft. If you want to include it in your script or function you simply need to include this at the top:

[CmdletBinding()]
Param()

So in the example below you can see that neither function outputs its Write-Verbose message when called normally:

[Screenshot: p3_function_verbose_example]
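A minimal reconstruction of the two functions from that screenshot might look like this (the function bodies here are hypothetical, only the names come from the original):

function Test-NoVerbose {
    # A plain function: no [CmdletBinding()], so it is not an advanced function
    Write-Verbose 'You will not see this unless $VerbosePreference is changed'
    Write-Output 'Test-NoVerbose ran'
}

function Test-WithVerbose {
    # An advanced function: [CmdletBinding()] adds the common parameters, including -Verbose
    [CmdletBinding()]
    Param()
    Write-Verbose 'You will see this only when -Verbose is used'
    Write-Output 'Test-WithVerbose ran'
}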

The difference you will see is that Test-NoVerbose does not do anything differently when you include the verbose switch, whereas Test-WithVerbose will:

[Screenshot: p3_function_verbose_example_2]
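Reconstructed output of calling both with the -Verbose switch (again a sketch rather than the original screenshot):

PS> Test-NoVerbose -Verbose
Test-NoVerbose ran

PS> Test-WithVerbose -Verbose
VERBOSE: You will see this only when -Verbose is used
Test-WithVerbose ran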

So in cases where other people may be using your scripts this feature will help keep output clean, unless you need it for debugging. I tend to use this most often when I am working on long scripts that I want to initially know what is going on as it runs. If I ever have to come back to the script for debugging I can just use the switch, versus the normal execution which doesn’t need all that output.

Errors

They are going to happen; it is inevitable in your scripting journey that at some point you are going to have an error. You cannot prepare for every error, but you can collect as much information about an error as possible to help in debugging. Just like you would handle errors in T-SQL using a TRY/CATCH block, you can do the same in PowerShell.

PowerShell offers a variable that is available in every session you open or run, called $Error. [The dollar sign in PowerShell denotes a variable.] This variable holds records of the errors that have occurred in your session, including those raised by your scripts. There are other errors or exceptions that can also be thrown by .NET objects, which work a bit differently in how you capture them; I will refer you to Allen White’s post on Handling Errors in PowerShell to see a good example.
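A minimal sketch of the TRY/CATCH pattern (the path below is just an example chosen so that it will fail):

try {
    # -ErrorAction Stop turns a non-terminating error into one the catch block can see
    Get-ChildItem -Path 'C:\DoesNotExist' -ErrorAction Stop
}
catch {
    # $_ is the current error record; $Error[0] also holds the most recent error in the session
    Write-Warning "Caught: $($_.Exception.Message)"
}
finally {
    # optional cleanup that runs whether or not an error occurred
}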

Summary

Debugging is one of those topics that can fill a 3-day course, so one blog post is obviously not going to cover all the information you might need. I came across a good blog post by the PowerShell Team on Advanced Debugging in PowerShell that is a must-read for anyone wanting to get involved with PowerShell scripting.

Categories: DBA Blogs

Log Buffer #419: A Carnival of the Vanities for DBAs

Fri, 2015-04-17 10:51

This Log Buffer Edition covers Oracle, MySQL, and SQL Server blog posts from around the world.

Oracle:

  • Why the Internet of Things should matter to you
  • Modifying Sales Behavior Using Oracle SPM – Written by Tyrice Johnson
  • SQLcl: Run a Query Over and Over, Refresh the Screen
  • Data Integration Tips: ODI 12.1.3 – Convert to Flow
  • JRE 1.8.0_45 Certified with Oracle E-Business Suite

SQL Server:

  • What’s this, a conditional WHERE clause that doesn’t use dynamic SQL?
  • The job of a DBA requires a fusion of skill and knowledge. To acquire this requires a craftsman mindset. Craftsmen find that the better they get at the work, the more enjoyable the work gets, and the more successful they become.
  • Using SQL to perform cluster analysis to gain insight into data with unknown groups
  • There are times when you don’t want to return a complete set of records. When you have this kind of requirement to only select the TOP X number of items, Transact SQL (TSQL) has the TOP clause to meet your needs.
  • Spatial Data in SQL Server has special indexing because it has to perform specialised functions.

MySQL:

  • Profiling MySQL queries from Performance Schema
  • How to Easily Identify Tables With Temporal Types in Old Format!
  • The Perfect Server – CentOS 7.1 with Apache2, Postfix, Dovecot, Pure-FTPD, BIND and ISPConfig 3
  • Database Security – How to fully SSL-encrypt MySQL Galera Cluster and ClusterControl
  • MDX: retrieving the entire hierarchy path with Ancestors()

Categories: DBA Blogs

Ever Wondered How Pythian is Kind of Like a Fire Truck?

Tue, 2015-04-14 06:10
[Image: Pierce Enforcer fire truck]

 

I have.

Coming from the world of selling fire trucks, I’m used to selling necessary solutions to customers in need. The stakes are high. If the truck doesn’t perform, best case scenario it’s a false alarm. Worst case scenario, someone, or many people, die.

Let me tell you a bit about fire trucks.

A lot of people think that a fire truck is a fire truck. That there is some factory where fire trucks are made, carbon copies of one another, varying only in what they carry – water, a pump, a ladder. That’s not the case. Every truck is custom engineered, designed, and manufactured from scratch. Things can go wrong. In a world where response time is everything, you don’t want something to go wrong. Not with the fire truck. Not when everything else is going wrong. Not when someone is trapped in their vehicle. Not when a house is burning down.

For the past five years I have been selling disaster management systems. There has been a clear, immediate, pressing need from my customers. I loved the urgency, I fed off that energy, helping people in charge of saving lives come up with solutions that help them do just that. When first walking into Pythian, I didn’t understand the importance of data, I didn’t comprehend the stakes. But they are present and the analogy can be made.

Pythian’s services are like a fire truck.

Data is like your house, your car, your life. When your business is dependent on your data and your data fails, your business fails. Data failures are serious. Downtime causes huge revenue losses as well as loss of trust and reputation. Identity theft, loss of security, these disasters are pressing threats in our digitized society.

Pythian’s FIT-ACER program is like your Fire Marshall.

We don’t just prepare for disasters, we help prevent them. Modeled after the Mayo Clinic’s patient checklist, Pythian’s FIT-ACER human reliability check acknowledges that no matter how intelligent our DBAs are (http://www.pythian.com/experts/) they can still make mistakes:

FIT-ACER: Pythian Human Reliability Checklist

F – Focus (SLOW DOWN! Are you ready?)
A – Assess the command (SPEND TIME HERE!)
I – Identify server/DB name, time, authorization
C – Check the server / database name again
T – Type the command (do not hit enter yet)
E – Execute the command
R – Review and document the results

We don’t just hire the best to do the best work, we hire the best, make sure they’re at their best, check their best, and apply their best. Every time we interact with your data we do so at a high level to improve your system, to prevent disaster.  And we answer our phones when disaster hits.

The average response time for a fire crew in Ontario is 6 minutes. The average response time for Pythian is under 4.

Take it from someone who knows disaster,

Pythian’s the best fire truck around.

Categories: DBA Blogs

Community dinner @ Pedro’s

Mon, 2015-04-13 08:36

Folks, as usual Pythian is organizing the community dinner. After many years of food, (responsible) drinking and photos, this event has become an important moment for all of us to get to know each other better, discuss and have fun.

This year is also the 20th year for MySQL so … YEAAAH let us celebrate, with more food, fun and responsible drinking.

If you have not done it yet … register yourself here: https://www.eventbrite.com/e/the-pythian-mysql-community-pay-your-own-way-dinner-tickets-15692805604

Info about the event:

When: Tuesday April 14, 2015 – 7:00 PM at Pedro’s (You are welcome to show up later, too!)
Where: Pedro’s Restaurant and Cantina – 3935 Freedom Circle, Santa Clara, CA 95054

 

I know, I know … we are the kind of people who decide where to go at the last minute, and every year we do the same, but if you could register, that would help us organize it better … and c’mon, the dinner is on Tuesday … so we are almost there!!!

 

Anyhow, hope to see all of you there, all of you!

Some reference: Menu Eventbrite Pedro

Categories: DBA Blogs

Technology for the Non-Technical

Mon, 2015-04-13 08:33

I am potentially one of the least technical people in my generation. I’m 30 and I am afraid of my cellphone, my laptop, Netflix, the microwave…. Okay, afraid is maybe a strong word, but baffled by them at the very least.

In high school, while my classmates wrote most of their papers and assignments on the computer, I insisted on writing everything out by hand and only typed it out afterwards if absolutely required. It wasn’t that I had issues with typing – my mom who worked as an administrator for many years made sure that I learned to type from a very young age and I type quickly with a reasonable amount of accuracy. I just felt that writing by hand kept me more “connected” to the words I penned. Simply, my name is Sarah and I am a Luddite.

After high school I studied journalism for a couple of years and then entered the workforce into a number of different jobs, such as in sales and marketing and it became necessary for me to “engage” with technology a little more heavily. Typing articles and assignments slowly became second nature but grocery lists, thank you notes, birthday cards all continued to be written by hand.

For the last few years I’ve been working for technology and IT organizations, and for the last 14 months I’ve been working with Pythian, a leading IT services provider specializing in data infrastructure management. That was a big leap for me. Not only was I required to use technology constantly in my day-to-day (smartphone, CRM system, soft phone, multiple email interfaces ACK!), but I also needed to do a lot more than dip my toes into some fairly intense technical knowledge to gain an understanding of our client base and what solutions would be most appropriate for the people I speak to every day. These people are Chief Information Officers, Chief Technology Officers and Vice Presidents of Information Technology for companies that are incredibly data-dependent. The quality and security of their data management directly affects their revenue, and it’s critical that it is handled with a great amount of expertise and attention to detail. Kind of intimidating.

I have spent the last year wrapping myself in terms like NoSQL, non-relational database, Hadoop, MongoDB, SQL Server and Oracle. Do I have a perfect understanding of the benefits and drawbacks of each of these yet? No. What I do have is a great network of technical geniuses who work with me and who have spent their careers becoming experts in their respective technologies. I know who the best resources are and how to connect with them to get the best answers and solutions. I’m very lucky to work at a company that is incredibly transparent – questions are always welcomed and answered. I sit sandwiched between the offices of the Chief Revenue Officer and the CEO and Founder of our organization, and while both are incredibly busy people, they are also happy to answer questions and share their insights and energy with anyone here.

All of our technical resources are just an instant message away and can often answer my questions in a few concise lines. So, while I am still monstrously uncomfortable with tasks like defragging (sounds like organized Fraggle removal to me) my computer or resetting my smartphone when it acts up, I am coming along slowly, in baby steps – an IT late-bloomer you could say – and it’s all much less painful than I ever feared it would be.

Categories: DBA Blogs

My thoughts on the Resilience of Cassandra

Mon, 2015-04-13 06:32

This blog post is part 1 of a 2-part series. It will be different from my previous blogs, as it is more about some decisions you can make with Cassandra regarding the resilience of your system. I will talk about this topic in depth at the upcoming Datastax Days in London (https://cassandradaylondon2015.sched.org/); this is more of an introduction!

TL;DR: Cassandra is tough!

Cassandra presents itself as delivering “continuous availability, linear scalability, and operational simplicity across many commodity servers with no single point of failure, along with a powerful data model designed for maximum flexibility and fast response times” (http://docs.datastax.com/en/cassandra/2.0/cassandra/gettingStartedCassandraIntro.html). In a production system, having your persistence layer failure-tolerant is a big thing. Even more so when you can make it resilient to full-location failures through geographic replication (and easily).

As in any production system, you need to plan for failure. Should we blindly trust in Cassandra’s resilience and forget about the plan because “Cassandra can handle it”? By reading the documentation, some may think that by having several data centers and a high enough replication factor we are covered. In part this is true. Cassandra will handle servers going down, even a full DC (or several!). But you should always prepare for chaos! Failure will increase pressure on your remaining servers, latency will increase, etc. And when things come back up, will everything just work? Getting all the data back in sync, are you ready for that? Did you forget about gc_grace_seconds? There are lots of variables and small details that can be forgotten if you don’t plan ahead. And in the middle of a problem, it will not help to have forgotten them!

My experience tells me that you must take Cassandra failures seriously, and plan for them! Having a B plan is never a bad thing, and a C even. Also, make sure those plans work! So for this short introduction I will leave a couple of recommendations:

  • Test your system against Cassandra delivering a bad service (timeouts, high latency, etc.).
  • Set a “bare minimum” for your system to work (how low can we go on consistency, for example; a quick way to test this is sketched after this list).
  • Test not only your system going down, but also prepare for it coming back up!
  • Keep calm! Cassandra will help you!
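As a low-tech sketch of that “bare minimum” test (keyspace and table names below are placeholders): take a node or two down in a test environment, then use cqlsh’s CONSISTENCY command to see at which level your queries still succeed.

CONSISTENCY QUORUM;
SELECT * FROM my_keyspace.my_table LIMIT 1;   -- does this still work with a node down?
CONSISTENCY ONE;
SELECT * FROM my_keyspace.my_table LIMIT 1;   -- and how low can you actually go?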

Overall, Cassandra is a tough and robust system. I’ve had major problems with the network, storage, Cassandra itself, etc. And in the end Cassandra not only survived, it gave me no downtime. But with every problem I had, my knowledge and awareness of what I could expect increased. This led to planning for major problems (which did happen), and that, combined with the natural resilience of Cassandra, got me through those events without downtime.

Feel free to comment/discuss it in the comment section below! Juicy details will be left for London!

Categories: DBA Blogs