
DBA Blogs

Pythian at Percona Live London 2014

Pythian Group - Wed, 2014-10-29 14:29

Percona Live London takes place next week, November 3-4, where Pythian is a platinum sponsor—visit us at our booth during the day on Tuesday, or at the reception in the evening. Not only are we attending, but we’re taking part in exciting speaking engagements, so be sure to check out our sessions and hands-on labs. Find the details below.

 

MySQL Break/Fix Lab by Miklos Szel, Alkin Tezuysal, and Nikolaos Vyzas
Monday November 3 — 9:00AM-12:00PM
Cromwell 3 & 4

Miklos, Alkin, and Nikolaos will present a hands-on lab demonstrating how to evaluate and recover from operational errors and issues in MySQL 5.6. They will cover instance crashes and hangs, troubleshooting and recovery, and significant performance issues. Find out more about the speakers below.

About Miklos: Miklos Szel is a Senior Engineer at Pythian, based in Budapest. With more than 10 years of experience in system and network administration, he has also worked for Walt Disney International as its main MySQL DBA. Miklos specializes in MySQL-based high availability solutions, performance tuning, and monitoring, and has significant experience working with large-scale websites.

About Alkin: Alkin Tezuysal has extensive experience in enterprise relational databases, working in various sectors for large corporations. With more than 19 years of industry experience, he has been able to take large projects from the ground up to production. In recent years, he has been focusing on eCommerce, SaaS, and MySQL technologies.

About Nikolaos: Nik Vyzas is a Lead Database Consultant at Pythian, and an avid open source engineer. He began his career as a software developer in South Africa, and moved on to technology consulting for various European and US-based companies. He specializes in MySQL, Galera, Redis, Memcached, and MongoDB on many OS platforms.

 

Setting up Multi-Source Replication in MariaDB 10 by Derek Downey
Monday November 3 — 2:00-5:00PM
Cromwell 3 & 4

For a long time, replication in MySQL was limited to a single master. When MariaDB 10.0 became generally available, the ability to have multiple masters became a reality. This has opened the door to previously impossible architectures. In this hands-on tutorial, Derek will discuss some of the features in MariaDB 10.0, demonstrate establishing a four-node environment running on participants’ computers using Vagrant and VirtualBox, and discuss some limitations associated with 10.0. Check out Derek’s blog post for more detailed info about his session.

About Derek: Derek began his career as a PHP application developer, working out of Knoxville, Tennessee. Now a Principal Consultant in Pythian’s MySQL practice, Derek is sought after for his deep knowledge of Galera and for diagnosing replication issues.

 

Understanding Performance Through Measurement, Benchmarking, and Profiling by René Cannaò
Monday November 3 — 2:00-5:00PM
Orchard 2

It is essential to understand how your system performs under different workloads in order to measure the impact of changes and growth, and to understand how those impacts will manifest. Measuring the performance of current workloads is not trivial, and the creation of a staging environment where different workloads can be tested has its own set of challenges. Performing capacity planning, exploring concerns about scalability and response time, and evaluating new hardware or software configurations are all operations requiring measurement and analysis in an environment appropriate to your production setup. To find bottlenecks, performance needs to be measured both at the OS layer and at the MySQL layer: an analysis of OS and MySQL benchmarking and monitoring/measuring tools will be presented. Various benchmark strategies will be demonstrated for real-life scenarios, as well as tips on how to avoid common mistakes.

About René: René has 10 years of working experience as a System, Network, and Database Administrator, mainly on Linux/Unix platforms. In recent years he has focused mainly on MySQL, previously working as a Senior MySQL Support Engineer at Sun/Oracle and now as a Senior Operational DBA at Pythian (formerly Blackbird, acquired by Pythian).

 

Low-Latency SQL on Hadoop — What’s Best for Your Cluster? by Danil Zburivsky
Tuesday November 4 — 11:20AM-12:10PM
Cromwell 3 & 4

Low-latency SQL is the Holy Grail of Hadoop platforms, enabling new use cases and better insights. A number of open-source projects have sprung up to provide fast SQL querying; but which one is best for your cluster? This session will present the results of Danil’s in-depth research and benchmarks of Facebook Presto, Cloudera Impala, and Databricks Shark. Attendees will look at performance across multiple storage formats, query profiles, and cluster configurations to find the best engine for a variety of use cases. This session will help you pick the right query engine for a new cluster or get the most out of your existing Hadoop deployment.

About Danil: Danil Zburivsky is a Big Data Consultant/Solutions Architect at Pythian. Danil has been working with databases and information systems since his early years in university, where he received a Master’s Degree in Applied Math. Danil has 7 years of experience architecting, building and supporting large mission-critical data platforms using various flavors of MySQL, Hadoop and MongoDB. He is also the author of the book Hadoop Cluster Deployment.

 

Scaling MySQL in Amazon Web Services by Mark Filipi and Laine Campbell
Tuesday November 4 — 5:30-6:20PM
Cromwell 3 & 4

Mark Filipi, MySQL Team Lead at Pythian, will explain the options for running MySQL at high volumes on Amazon Web Services, exploring options around database as a service, hosted instances/storage, and all appropriate availability, performance, and provisioning considerations. He will be using real-world examples from companies like Call of Duty, Obama for America, and many more.

Laine will demonstrate how to build highly available, manageable, and performant MySQL environments that scale in AWS—how to maintain them, grow them, and deal with failure.

About Mark: With years of experience as a MySQL DBA, Mark Filipi has direct experience administering everything from multinational corporations to tiny web start-ups. He leads a global team of talented DBAs to identify performance bottlenecks and provide consistent daily operations.

About Laine: Laine is currently the Co-Founder and Associate Vice President of Pythian’s open source database practice—the result of the acquisition of Blackbird.io by Pythian in June 2014. Blackbird.io itself was the product of a merger that involved PalominoDB, a company that Laine founded in January 2006. Prior to that, Laine spent her career working in various corporate environments, including working at Travelocity for nearly a decade building out their database team. Laine is passionate about supporting members of underserved populations to gain experience, skills, and jobs in technology.

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s MySQL expertise.

Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 7

Pythian Group - Wed, 2014-10-29 08:09

Welcome to part 7, the final blog post in my series, Deploying a Private Cloud at Home, where I will be sharing the scripts to configure the controller and compute nodes. In my previous post, part six, I demonstrated how to configure the controller and compute nodes.

Kindly update the scripts with the passwords you want before executing them. I am assuming here that this is a fresh installation and no service is configured on the nodes.

The script below configures the controller node, and has two parts:

  1. Pre compute node configuration
  2. Post compute node configuration

Running “config-controller.sh -pre” will run the pre compute node configuration and prepare the controller node and OpenStack services. “config-controller.sh -post” will run the post compute node configuration of the controller node, as these services are dependent on compute node services.
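Assuming both scripts are copied to their respective nodes and made executable, the intended run order looks like this:

# On the controller node
./config-controller.sh -pre
# On the compute node
./config-compute.sh
# Back on the controller node
./config-controller.sh -post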

config-controller.sh

#!/bin/bash
#Configure controller script v 4.4
#############################################
# Rohan Bhagat             ##################
# Email:Me at rohanbhagat.com ###############
#############################################
#set variables used in the configuration
#Admin user password
ADMIN_PASS=YOUR_PASSWORD
#Demo user password
DEMO_PASS=YOUR_PASSWORD
#Keystone database password
KEYSTONE_DBPASS=YOUR_PASSWORD
#Admin user Email
ADMIN_EMAIL=YOUR_EMAIL
#Demo user Email
DEMO_EMAIL=YOUR_EMAIL
#Glance db user pass
GLANCE_DBPASS=YOUR_PASSWORD
#Glance user pass
GLANCE_PASS=YOUR_PASSWORD
#Glance user email
GLANCE_EMAIL=YOUR_EMAIL
#Nova db user pass
NOVA_DBPASS=YOUR_PASSWORD
#Nova user pass
NOVA_PASS=YOUR_PASSWORD
#Nova user Email
NOVA_EMAIL=YOUR_EMAIL
#Neutron db user pass
NEUTRON_DBPASS=YOUR_PASSWORD
#Neutron user pass
NEUTRON_PASS=YOUR_PASSWORD
#Neutron user email
NEUTRON_EMAIL=YOUR_EMAIL
#Metadata proxy pass
METADATA_SECRET=YOUR_PASSWORD
#IP to be declared for controller
MY_IP=192.168.1.140
#FQDN for controller hostname or IP
CONTROLLER=controller
#MYSQL root user pass
MYSQL_PASS=YOUR_PASSWORD
#Heat db user pass
HEAT_DBPASS=YOUR_PASSWORD
#Heat user pass
HEAT_PASS=YOUR_PASSWORD
#Heat user email
HEAT_EMAIL=YOUR_EMAIL
#IP range for VM Instances
RANGE=192.168.1.16/28
#Secure MySQL
MYSQL_ROOT_PASSWORD=YOUR_PASSWORD
#Current MySQL root password leave blank if you have not configured MySQL
CURNT_PASS=""



# Get versions:
SCRIPT_VER="v4.4"
if [ "$1" = "--version" -o "$1" = "-v" ]; then
	echo "`basename $0` script version $SCRIPT_VER"
  exit 0
elif [ "$1" = "" ] || [ "$1" = "--help" ]; then
  echo "Configures controller node with pre compute and post compute deployment settings"
  echo "Usage:"
  echo "       `basename $0` [--help | --version | -pre | -post]"
  exit 0

elif [ "$1" = "-pre" ]; then

echo "============================================="
echo "This installation script is based on OpenStack icehouse guide"
echo "Found http://docs.openstack.org/icehouse/install-guide/install/yum/content/index.html"
echo "============================================="

echo "============================================="
echo "controller configuration started"
echo "============================================="

echo "Installing MySQL packages"
yum install -y mysql mysql-server MySQL-python
echo "Installing RDO OpenStack repo"
yum install -y http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
echo "Installing OpenStack Keystone Identity Service, qpid, and required packages for controller"
yum install -y yum-plugin-priorities openstack-utils mysql mysql-server MySQL-python qpid-cpp-server openstack-keystone python-keystoneclient expect


echo "Modification of qpid config file"
perl -pi -e 's,auth=yes,auth=no,' /etc/qpidd.conf
chkconfig qpidd on
service qpidd start


echo "Configuring mysql database server"
# NOTE: this block was mangled in the original post; it is reconstructed
# from the OpenStack Icehouse installation guide referenced above
cat > /etc/my.cnf <<EOF
[mysqld]
bind-address = $MY_IP
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
EOF
service mysqld start
chkconfig mysqld on
echo "Set the MySQL root password"
mysqladmin -u root password "$MYSQL_PASS"

echo "Configure the Identity Service (keystone)"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE keystone;"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$KEYSTONE_DBPASS';"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$KEYSTONE_DBPASS';"
openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:$KEYSTONE_DBPASS@$CONTROLLER/keystone
ADMIN_TOKEN=$(openssl rand -hex 10)
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl
su -s /bin/sh -c "keystone-manage db_sync" keystone
service openstack-keystone start
chkconfig openstack-keystone on

echo "Purge expired keystone tokens hourly"
(crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone

echo "Define users, tenants, and roles"
export OS_SERVICE_TOKEN=$ADMIN_TOKEN
export OS_SERVICE_ENDPOINT=http://$CONTROLLER:35357/v2.0

echo "keystone admin creation"
keystone user-create --name=admin --pass=$ADMIN_PASS --email=$ADMIN_EMAIL
keystone role-create --name=admin
keystone tenant-create --name=admin --description="Admin Tenant"
keystone user-role-add --user=admin --tenant=admin --role=admin
keystone user-role-add --user=admin --role=_member_ --tenant=admin


echo "keystone demo creation"
keystone user-create --name=demo --pass=$DEMO_PASS --email=$DEMO_EMAIL
keystone tenant-create --name=demo --description="Demo Tenant"
keystone user-role-add --user=demo --role=_member_ --tenant=demo
keystone tenant-create --name=service --description="Service Tenant"

echo "Create a service entry for the Identity Service"
keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://$CONTROLLER:5000/v2.0 \
--internalurl=http://$CONTROLLER:5000/v2.0 \
--adminurl=http://$CONTROLLER:35357/v2.0

echo "Verify Identity service installation"
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
echo "Request an authentication token by using the admin user and the password you chose for that user"
keystone --os-username=admin --os-password=$ADMIN_PASS \
  --os-auth-url=http://$CONTROLLER:35357/v2.0 token-get
keystone --os-username=admin --os-password=$ADMIN_PASS \
  --os-tenant-name=admin --os-auth-url=http://$CONTROLLER:35357/v2.0 \
  token-get

cat > /root/admin-openrc.sh <<EOF
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
EOF

source /root/admin-openrc.sh
echo "keystone token-get"
keystone token-get
echo "keystone user-list"
keystone user-list
echo "keystone user-role-list --user admin --tenant admin"
keystone user-role-list --user admin --tenant admin

echo "Install the Image Service"
yum install -y openstack-glance python-glanceclient
openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:$GLANCE_DBPASS@$CONTROLLER/glance
openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:$GLANCE_DBPASS@$CONTROLLER/glance

echo "configure glance database"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE glance;"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$GLANCE_DBPASS';"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$GLANCE_DBPASS';"

echo "Create the database tables for the Image Service"
su -s /bin/sh -c "glance-manage db_sync" glance

echo "creating glance user"
keystone user-create --name=glance --pass=$GLANCE_PASS --email=$GLANCE_EMAIL
keystone user-role-add --user=glance --tenant=service --role=admin


echo "glance configuration"
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password $GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password $GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone


echo "Register the Image Service with the Identity service"
keystone service-create --name=glance --type=image --description="OpenStack Image Service"
keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ image / {print $2}') \
  --publicurl=http://$CONTROLLER:9292 \
  --internalurl=http://$CONTROLLER:9292 \
  --adminurl=http://$CONTROLLER:9292
  
echo "Start the glance-api and glance-registry services"
service openstack-glance-api start
service openstack-glance-registry start
chkconfig openstack-glance-api on
chkconfig openstack-glance-registry on

echo "Testing image service"
echo "Download the cloud image"
wget -q http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img -O /root/cirros-0.3.2-x86_64-disk.img
echo "Upload the image to the Image Service"
source /root/admin-openrc.sh
glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 \
--container-format bare --is-public True \
--progress  < /root/cirros-0.3.2-x86_64-disk.img

echo "Install Compute controller services"
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
source /root/admin-openrc.sh

echo "Configure compute database"
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:$NOVA_DBPASS@$CONTROLLER/nova

echo "configuration keys to configure Compute to use the Qpid message broker"
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname $CONTROLLER

source /root/admin-openrc.sh

echo "Set the my_ip, vncserver_listen, and vncserver_proxyclient_address configuration options"
echo "to the management interface IP address of the $CONTROLLER node"
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address $MY_IP

echo "Create a nova database user"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "CREATE DATABASE nova;"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS';"
mysql -uroot -p$MYSQL_PASS -hlocalhost -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS';"

echo "Create the Compute service tables"
su -s /bin/sh -c "nova-manage db sync" nova

echo "Create a nova user that Compute uses to authenticate with the Identity Service"
keystone user-create --name=nova --pass=$NOVA_PASS --email=$NOVA_EMAIL
keystone user-role-add --user=nova --tenant=service --role=admin

echo "Configure Compute to use these credentials with the Identity Service running on the controller"
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password $NOVA_PASS

echo "Register Compute with the Identity Service"
keystone service-create --name=nova --type=compute --description="OpenStack Compute"
keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
  --publicurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s \
  --internalurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s \
  --adminurl=http://$CONTROLLER:8774/v2/%\(tenant_id\)s
  
echo "Start Compute services and configure them to start when the system boots"
service openstack-nova-api start
service openstack-nova-cert start
service openstack-nova-consoleauth start
service openstack-nova-scheduler start
service openstack-nova-conductor start
service openstack-nova-novncproxy start
chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on  

echo "To verify your configuration, list available images"
echo "nova image-list"
sleep 5
source /root/admin-openrc.sh
nova image-list

fi


if [ "$1" = "-post" ]; then
#set variables used in the configuration

source /root/admin-openrc.sh
############OpenStack Networking start here##############
echo "configure legacy networking"
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova 

echo "Restart the Compute services"
service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart

echo "Create the network"
source /root/admin-openrc.sh
nova network-create vmnet --bridge br0 --multi-host T --fixed-range-v4 $RANGE

echo "Verify creation of the network"
nova net-list

############OpenStack Legacy ends##############
echo "Install the dashboard"
yum install -y mod_wsgi openstack-dashboard

echo "Configure the OpenStack dashboard"
sed -i 's/horizon.example.com/\*/g' /etc/openstack-dashboard/local_settings
echo "Start the Apache web server and memcached"
service httpd start
chkconfig httpd on

fi

Below is the config-compute.sh script, which configures the compute node.

config-compute.sh

#!/bin/bash
#Configure compute script v4
#############################################
# Rohan Bhagat             ##################
# Email:Me at rohanbhagat.com ###############
#############################################
#set variables used in the configuration
#Nova user pass
NOVA_PASS=YOUR_PASSWORD
#NEUTRON user pass
NEUTRON_PASS=YOUR_PASSWORD
#Nova db user pass
NOVA_DBPASS=YOUR_PASSWORD
FLAT_INTERFACE=eth0
PUB_INTERFACE=eth0
#FQDN for $CONTROLLER hostname or IP
CONTROLLER=controller
#IP of the compute node
MY_IP=192.168.1.142


echo "============================================="
echo "This installation script is based on OpenStack icehouse guide"
echo "Found http://docs.openstack.org/icehouse/install-guide/install/yum/content/index.html"
echo "============================================="

echo "============================================="
echo "compute configuration started"
echo "============================================="

echo "Install the MySQL Python library"
yum install -y MySQL-python


echo "Install the Compute packages"
yum install -y openstack-nova-compute openstack-utils

echo "Edit the /etc/nova/nova.conf configuration file"
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:$NOVA_DBPASS@$CONTROLLER/nova
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://$CONTROLLER:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host $CONTROLLER
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password $NOVA_PASS

echo "Configure the Compute service to use the Qpid message broker"
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname $CONTROLLER

echo "Configure Compute to provide remote console access to instances"
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address $MY_IP
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://$CONTROLLER:6080/vnc_auto.html

echo "Specify the host that runs the Image Service"
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host $CONTROLLER

echo "Start the Compute service and its dependencies. Configure them to start automatically when the system boots"
service libvirtd start
service messagebus start
service openstack-nova-compute start
chkconfig libvirtd on
chkconfig messagebus on
chkconfig openstack-nova-compute on

echo "kernel networking functions"
perl -pi -e 's,net.ipv4.ip_forward = 0,net.ipv4.ip_forward = 1,' /etc/sysctl.conf
perl -pi -e 's,net.ipv4.conf.default.rp_filter = 1,net.ipv4.conf.default.rp_filter = 0,' /etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >> /etc/sysctl.conf
sysctl -p

echo "Install legacy networking components"
yum install -y openstack-nova-network openstack-nova-api
sleep 5
echo "Configure legacy networking"
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
openstack-config --set /etc/nova/nova.conf DEFAULT network_manager nova.network.manager.FlatDHCPManager
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.libvirt.firewall.IptablesFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT network_size 254
openstack-config --set /etc/nova/nova.conf DEFAULT allow_same_net_traffic False
openstack-config --set /etc/nova/nova.conf DEFAULT multi_host True
openstack-config --set /etc/nova/nova.conf DEFAULT send_arp_for_ha True
openstack-config --set /etc/nova/nova.conf DEFAULT share_dhcp_address True
openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release True
openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge br0
openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface $FLAT_INTERFACE
openstack-config --set /etc/nova/nova.conf DEFAULT public_interface $PUB_INTERFACE

echo "Start the services and configure them to start when the system boots"
service openstack-nova-network start
service openstack-nova-metadata-api start
chkconfig openstack-nova-network on
chkconfig openstack-nova-metadata-api on

echo "Now restart networking"
service network restart

echo "Compute node configuration completed"
echo "Now you can run config-controller.sh -post on the controller node"
echo "To complete the OpenStack configuration"

Categories: DBA Blogs

Oracle Trivia Quiz

Iggy Fernandez - Wed, 2014-10-29 07:36
All the answers can be found in the November 2014 issue of the NoCOUG Journal. I am the editor of the NoCOUG Journal. What’s NoCOUG, you ask? Only the oldest and most active Oracle users group in the world. If you live in the San Francisco bay area and have never ever attended a NoCOUG […]
Categories: DBA Blogs

Results of the NoCOUG SQL Mini-Challenge

Iggy Fernandez - Tue, 2014-10-28 15:55
As published in the November 2014 issue of the NoCOUG Journal The inventor of the relational model, Dr. Edgar Codd, was of the opinion that “[r]equesting data by its properties is far more natural than devising a particular algorithm or sequence of operations for its retrieval. Thus, a calculus-oriented language provides a good target language […]
Categories: DBA Blogs

OGG-01742 when trying to start extract process

DBASolved - Tue, 2014-10-28 13:57

Every once in a while I do something in my test environments to test something else, then I go back to test core functions of the product; in this case I was testing a feature of Oracle GoldenGate 12c.  Earlier in the day, I had set the $ORACLE_HOME environment variable to reference the Oracle GoldenGate home.  Instead of closing my session and restarting, I just started GGSCI and the associated manager.  To my surprise, the extracts that I had configured wouldn’t start.  GGSCI was issuing an OGG-01742 error about the “child process is no longer alive” (Image 1).

Image 1:

As I was trying to figure out what was going on, I checked the usual files (ggserr.log and report files) for any associated errors.  I didn’t find anything that was out of place or led me to believe there was a problem with Oracle GoldenGate.  The next thing I needed to identify was if the environment was configured correctly.  In doing this, I first checked the session environment variables (Image 2) related to the Oracle database.

Image 2:

As you can see, I had set the ORACLE_HOME equal to my OGG_HOME.  After seeing this, I remembered that I had set it this way for a specific test.  Since the ORACLE_HOME was set to OGG_HOME, the Oracle GoldenGate Extract (capture) process didn’t know where to get the needed libraries for the database.  

The Fix:

To fix this issue, I just needed to reset my environment to reference the ORACLE_HOME and ORACLE_SID correctly.  In a Linux environment, I like to use “. oraenv” to do this (Image 3).

Image 3:
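A terminal sketch of that reset (the SID and paths here are illustrative):

$ . oraenv
ORACLE_SID = [oragg] ? orcl
The Oracle base has been set to /u01/app/oracle
$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0/dbhome_1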

Now that I have reset my Oracle environment to point to the database, I should be able to start my extracts and not get any error messages related to OGG-01742 (Image 4).

Note: It appears that if the manager process has AUTOSTART and/or AUTORESTART set, the manager process needs to be restarted before the OGG-01742 message will go away.
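In that case, the restart sequence from GGSCI would look something like this (the extract group name EXT1 is hypothetical):

GGSCI> STOP MGR!
GGSCI> START MGR
GGSCI> START EXTRACT EXT1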

Image 4:

With my extracts started now, I’ve got all my Oracle GoldenGate processes back up and running (Image 5).

Image 5:

In a nutshell, if you get an OGG-01742 error while trying to start or restart an extract (capture) process, just reset your Oracle environment parameters!

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Use OPatch to check Oracle GoldenGate version

DBASolved - Tue, 2014-10-28 11:08

Recently I was strolling through the OTN message boards and came across a question about identifying the version of Oracle GoldenGate using OPatch.  This was the second time I had come across this question; with that, I decided to take a look and see if Oracle GoldenGate information could be retrieved using OPatch.

Initially I thought that identifying the Oracle GoldenGate version could only be done by logging into GGSCI and reviewing the header information.  To do this, just set up the Oracle environment using “. oraenv”.

Note: “. oraenv” will use the /etc/oratab file to set the ORACLE_HOME and ORACLE_SID parameters and ensure that Oracle GoldenGate has access to the library files needed.

Once the environment is set, GGSCI can be used to start the interface.

[oracle@db12cgg ogg]$ . oraenv
ORACLE_SID = [oragg] ?
The Oracle base has been set to /u01/app/oracle
[oracle@db12cgg ogg]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

GGSCI (db12cgg.acme.com) 1>

Notice in the code above that the version of Oracle GoldenGate being run is 12.1.2.1.0 for Linux x64.

How can this be done through OPatch?  The same information can be gathered using the opatch utility.  Ideally, you will want to use opatch from the $GG_HOME/OPatch directory.

Note: $ORACLE_HOME needs to be set to $OGG_HOME before the correct opatch inventory will be listed.  If $ORACLE_HOME is set for the database, opatch will return information about the database, not Oracle GoldenGate.
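In other words, point the environment at the OGG home before invoking opatch; a sketch using the paths from the example below:

$ export ORACLE_HOME=/u01/app/oracle/product/12.1.2/ogg
$ cd $ORACLE_HOME/OPatch
$ ./opatch lsinventory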

After making sure that the $ORACLE_HOME directory is pointed to the correct $GG_HOME,  the inventory for Oracle GoldenGate can be retrieved using “./opatch lsinventory”.

[oracle@db12cgg ogg]$ pwd
/u01/app/oracle/product/12.1.2/ogg
[oracle@db12cgg ogg]$ cd OPatch
[oracle@db12cgg OPatch]$ ./opatch lsinventory
Invoking OPatch 11.2.0.1.7

Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation.  All rights reserved.
Oracle Home      : /u01/app/oracle/product/12.1.2/ogg
Central Inventory : /u01/app/oraInventory
  from          : /etc/oraInst.loc
OPatch version    : 11.2.0.1.7
OUI version      : 11.2.0.3.0
Log file location : /u01/app/oracle/product/12.1.2/ogg/cfgtoollogs/opatch/opatch2014-10-28_11-18-49AM.log
Lsinventory Output file location : /u01/app/oracle/product/12.1.2/ogg/cfgtoollogs/opatch/lsinv/lsinventory2014-10-28_11-18-49AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle GoldenGate Core                                              12.1.2.1.0

There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

--------------------------------------------------------------------------------

OPatch succeeded.

As you can tell, I was able to find the same information using OPatch without having to go to the GGSCI utility.

Note: I have not had a chance to check this against Oracle GoldenGate 11g and earlier. This may be something specific to Oracle GoldenGate 12c.  Will verify at a later time.

Enjoy!

about.me: http://about.me/dbasolved

 


Filed under: Golden Gate
Categories: DBA Blogs

How To Change The Priority Of Oracle Background Processes

Before you get in a huff, it can be done! You can change an Oracle Database background process priority through an instance parameter! I'm not saying it's a good idea, but it can be done.
In this post I explore how to make the change, just how far you can take it, and when you may want to consider changing an Oracle background process priority.
To get your adrenaline going, check out the instance parameter _high_priority_processes on one of your production Oracle systems with a version of 11 or greater. Here is an example using my OSM tool, ipx.sql, on my Oracle Database version 12.1.0.2.0.
SQL> @ipx _highest_priority_processes
Database: prod40 27-OCT-14 02:22pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_highest_priority_processes = VKTM Highest Priority TRUE
Process Name Mask
Then at the Linux prompt, I did:
$ ps -eo pid,class,pri,nice,time,args | grep prod40
2879 TS 19 0 00:00:00 ora_pmon_prod40
2881 TS 19 0 00:00:01 ora_psp0_prod40
2883 RR 41 - 00:02:31 ora_vktm_prod40
2889 TS 19 0 00:00:01 ora_mman_prod40
2903 TS 19 0 00:00:00 ora_lgwr_prod40
2905 TS 19 0 00:00:01 ora_ckpt_prod40
2907 TS 19 0 00:00:00 ora_lg00_prod40
2911 TS 19 0 00:00:00 ora_lg01_prod40
...
Notice the "pri" for priority of the ora_vktm_prod40 process? It is set to 41 while all the rest of the Oracle background processes are set to the default of 19. Very cool, eh?

Surprised By What I Found
Surprised? Yes, surprised because changing Oracle process priority is a Pandora's box. Just imagine if an Oracle server (i.e., foreground) process has its priority lowered just a little and then attempts to acquire a latch or a mutex. If it doesn't get the latch quickly, it might never get it!

From a user experience perspective, sometimes performance is really quick and other times the application just hangs.

This actually happened to a customer of mine years ago when the OS started reducing a process's priority after it consumed a certain amount of CPU. I learned that when it comes to Oracle processes, they are programmed to expect an even process priority playing field. If you try to "game" the situation, do so at your own risk... not Oracle's.

Then why did Oracle Corporation allow background process priority to be changed? And why did Oracle Corporation actually change a background process's priority?!

Doing A Little Exploration
It turns out there are a number of "priority" related underscore instance parameters! On my 11.2.0.1.0 system there are 6 "priority" parameters. On my 12.1.0.1.0 system there are 8 "priority" parameters. On my 12.1.0.2.0 system there are 13 "priority" parameters! So clearly Oracle is making changes! In all cases, the parameter I'm focusing on, "_high_priority_processes", exists.

In this posting, I'm going to focus on my Oracle Database 12c version 12.1.0.2.0 system. While you may see something different in your environment, the theme will be the same.

While I'll be blogging about all four of the below parameters, in this posting my focus will be on the _high_priority_processes parameter. Below are the defaults on my system:
_high_priority_processes        LMS*
_highest_priority_processes VKTM
_os_sched_high_priority 1
_os_sched_highest_priority 1
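For reference, a hedged sketch of how such a change would be made: set the underscore parameter in the spfile and recycle the instance. Underscore parameters are unsupported, so don't try this on a system you care about without Oracle Support's blessing.

SQL> alter system set "_high_priority_processes"='LMS*|LGWR' scope=spfile;
SQL> shutdown immediate
SQL> startup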

Messing With The LGWR Background Processes
I'm not testing this on a RAC system, so I don't have an LMS background process. When I saw the "LMS*" I immediately thought, "regular expression." Hmmm... I wonder if I can change the LGWR background process. So I made the instance parameter change and recycled the instance. Below shows the instance parameter change:
SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:36pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = LMS*|LGWR High Priority FALSE
Process Name Mask

Below is an operating system perspective using the ps command:

ps -eo pid,class,pri,nice,time,args | grep prod40
...
5521 RR 41 - 00:00:00 ora_vktm_prod40
5539 TS 19 0 00:00:00 ora_dbw0_prod40
5541 RR 41 - 00:00:00 ora_lgwr_prod40
5545 TS 19 0 00:00:00 ora_ckpt_prod40
5547 TS 19 0 00:00:00 ora_lg00_prod40
5551 TS 19 0 00:00:00 ora_lg01_prod40
...

How Far Can I Take This?
At this point in my journey, my mind was ablaze! The log file sync wait event can be really difficult to deal with, especially when there is a CPU bottleneck. Hmmm... Perhaps I can increase the priority of all the log writer background processes?

So I made the instance parameter change and recycled the instance. Below shows the instance parameter change:
SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:44pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = LMS*|LG* High Priority FALSE
Process Name Mask

Below is an operating system perspective using the ps command:

ps -eo pid,class,pri,nice,time,args | grep prod40
...
5974 TS 19 0 00:00:00 ora_psp0_prod40
5976 RR 41 - 00:00:00 ora_vktm_prod40
5994 TS 19 0 00:00:00 ora_dbw0_prod40
5996 RR 41 - 00:00:00 ora_lgwr_prod40
6000 TS 19 0 00:00:00 ora_ckpt_prod40
6002 RR 41 - 00:00:00 ora_lg00_prod40
6008 RR 41 - 00:00:00 ora_lg01_prod40
6014 TS 19 0 00:00:00 ora_lreg_prod40
...

So now all the log writer background processes have a high priority. My hope would be that if there is an OS CPU bottleneck and the log writer background processes wanted more CPU, I now have the power to give that to them! Another tool in my performance tuning arsenal!

Security Hole?
At this point, my exuberance began to turn into paranoia. I thought, "Perhaps I can increase the priority of an Oracle server process or perhaps any process." If so, that would be a major Oracle Database security hole.

With fingers trembling, I changed the instance parameters to match an Oracle server process and recycled the instance. Below shows the instance parameter change:

SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:52pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = High Priority FALSE
LMS*|LG*|oracleprod40 Process Name Mask

Below is an operating system perspective using the ps command:

$ ps -eo pid,class,pri,nice,time,args | grep prod40
...
6360 TS 19 0 00:00:00 ora_psp0_prod40
6362 RR 41 - 00:00:00 ora_vktm_prod40
6366 TS 19 0 00:00:00 ora_gen0_prod40
6382 RR 41 - 00:00:00 ora_lgwr_prod40
6386 TS 19 0 00:00:00 ora_ckpt_prod40
6388 RR 41 - 00:00:00 ora_lg00_prod40
6394 RR 41 - 00:00:00 ora_lg01_prod40
6398 TS 19 0 00:00:00 ora_reco_prod40
...
6644 TS 19 0 00:00:00 oracleprod40...
...

OK, that didn't work so how about this?

SQL> @ipx _high_priority_processes
Database: prod40 27-OCT-14 02:55pm
Report: ipx.sql OSM by OraPub, Inc. Page 1
Display ALL Instance Parameters

Instance Parameter and Value Description Dflt?
-------------------------------------------------- -------------------- -----
_high_priority_processes = High Priority FALSE
LMS*|LG*|*oracle* Process Name Mask

Let's see what happened at the OS.

$ ps -eo pid,class,pri,nice,time,args | grep prod40
...
6701 RR 41 - 00:00:00 ora_vktm_prod40
6705 RR 41 - 00:00:00 ora_gen0_prod40
6709 RR 41 - 00:00:00 ora_mman_prod40
6717 RR 41 - 00:00:00 ora_diag_prod40
6721 RR 41 - 00:00:00 ora_dbrm_prod40
6725 RR 41 - 00:00:00 ora_vkrm_prod40
6729 RR 41 - 00:00:00 ora_dia0_prod40
6733 RR 41 - 00:00:00 ora_dbw0_prod40
...
6927 RR 41 - 00:00:00 ora_p00m_prod40
6931 RR 41 - 00:00:00 ora_p00n_prod40
7122 TS 19 0 00:00:00 oracleprod40 ...
7124 RR 41 - 00:00:00 ora_qm02_prod40
7128 RR 41 - 00:00:00 ora_qm03_prod40

Oh Oh... That's not good! Now EVERY Oracle background process has a higher priority and my Oracle server process does not.

So my "*" wildcard caused all the Oracle processes to be included. If all the processes have a high priority, then the log writer processes have no advantage over the others. And to make matters even worse, my goal of increasing the server process priority did not occur.

However, this is actually very good news because it appears this is not an Oracle Database security hole! To me, it looks like the priority parameter is applied during instance startup for just the background processes. Since my server process was started after the instance was started, and was for sure not included in the list of background processes, its priority was not affected. Good news for security, not such good news for a performance optimizing fanatic such as myself.

Should I Ever Increase A Background Process Priority?
Now that we know how to increase an Oracle Database background process priority, when would we ever want to do this? The short answer is probably never. But the long answer is the classic, "it depends."

Let me give you an example. Suppose there is an OS CPU bottleneck and the log writer background processes are consuming lots of CPU while handling all the associated memory management when server processes issue commits. In this situation, making it easier for the log writer processes to get CPU cycles may improve performance. But don't even think about doing this unless there is a CPU bottleneck. And even then, be very, very careful.

In my next blog posting, I'll detail an experiment where I changed the log writer background processes' priority.

Thanks for reading!

Craig.



Categories: DBA Blogs

MariaDB 10.0 Multi-source Replication at Percona Live UK 2014

Pythian Group - Mon, 2014-10-27 12:12

Percona Live UK is upon us and I have the privilege to present a tutorial on setting up multi-source replication in MariaDB 10.0 on Nov 3, 2014.

If you’re joining me at PLUK14, we will go over setting up two different topologies that incorporate the features in MariaDB. The first is a mirrored topology:

Replication Topologies – Mirrored

This basically makes use of an existing DR environment by setting it up so you can write to either master. Please be advised, this is normally not recommended due to the complexity of making your application resolve conflicts and handle data sync issues that may arise from writing to multiple masters.

The second topology is a basic fan-in topology:

Replication Topologies – Fan-in

This use-case is more common, especially for unrelated datasets that can be gathered into a single machine for reporting purposes or as part of a backup strategy. It was previously available in MySQL only through external tools such as Tungsten Replicator.
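As a taste of what we will cover, here is a minimal fan-in sketch using MariaDB 10.0’s named master connections (the host names and credentials are placeholders):

CHANGE MASTER 'master1' TO MASTER_HOST='host1', MASTER_USER='repl',
  MASTER_PASSWORD='replpass', MASTER_USE_GTID=slave_pos;
CHANGE MASTER 'master2' TO MASTER_HOST='host2', MASTER_USER='repl',
  MASTER_PASSWORD='replpass', MASTER_USE_GTID=slave_pos;
START ALL SLAVES;
SHOW ALL SLAVES STATUS\G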

As promised in the description of the tutorial, I am providing a Vagrantfile for the tutorial. This can be downloaded/cloned from my PLUK14 repository.

The vagrant environment requires at least Vagrant 1.5 to make use of Vagrant Cloud.

I hope to see you next week!

Categories: DBA Blogs

The Power of the Oracle Database Proxy Authenticated Connections

Pythian Group - Mon, 2014-10-27 08:46

We recently received this inquiry from a client:

“Can an Oracle database account support two passwords at once so we can roll out updated credentials to application servers gradually rather than having to change them all at the same time? Then once all of the application servers have been configured to use the new/second password we can change or remove the first one?”

The short answer is no. Like most computer technologies, an Oracle database user has only one password that is valid at any given time. However, a very powerful and under-appreciated feature of the Oracle database could be used in this case: it is called proxy authentication.

 

How proxy authentication works

Introduced with Oracle Database 10g, the basic premise of proxy authentication is that a user with the required permission can connect to the Oracle database using their own credentials, but proxy into another user in the database.  To put it more plainly: connect as USER_A but using the password of USER_B!

The proxy permission is granted through the “CONNECT THROUGH” privilege.  Interestingly, it is granted through an ALTER USER command, as it is really an “authorization” and a property of the user, not truly a privilege like the traditional ones we’re used to:

SQL> connect / as sysdba
Connected.
SQL> alter user USER_A grant connect through USER_B;

User altered.

SQL>

 

Now USER_B, who may not know the password for USER_A, can connect as USER_A by specifying the proxy account in square brackets in the connection string:

SQL> connect USER_B[USER_A]/passw0rd
Connected.
SQL> show user
USER is "USER_A"
SQL>

 

The password specified was the one for USER_B, not USER_A.  Hence the credentials for USER_B were used but the end result is that the session is connected as USER_A!

Specifically when a proxy authenticated connection is made the USERENV namespace parameters are updated as follows:

  • The “SESSION_USER” becomes USER_A
  • The “SESSION_SCHEMA” also becomes USER_A
  • The “PROXY_USER” remains USER_B, who initiated the connection and whose credentials were used

Since sys_context(‘USERENV’,’PROXY_USER’) remains unchanged, the connection is properly audited and information on who made the initial connection can still be recorded in audit records.  However, for all other purposes, USER_B has effectively connected to the database as USER_A without having to know USER_A’s password.

So, back to the original question: a possible approach would be to create a second account, USER_B, that has permission to proxy into the application user account APP_USER.  They could then gradually roll out the credential change (using the new USER_B credentials to proxy into APP_USER) to all of their app servers.  Once all app servers have been updated, it would be safe to change the password on the base application account APP_USER.
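A minimal sketch of that rollout, with USER_B and APP_USER standing in for the client’s accounts:

SQL> create user USER_B identified by new_passw0rd;
SQL> grant create session to USER_B;
SQL> alter user APP_USER grant connect through USER_B;

Each app server is then updated, one at a time, to connect as USER_B[APP_USER]/new_passw0rd. Once they have all been migrated, the original APP_USER password can be changed safely.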

 

A similar feature is the ability to change the current session’s schema. For example, as USER_B, issuing:

alter session set current_schema = USER_A;

 

This is a very quick and simple approach, but isn’t quite the same. Doing this only changes the “CURRENT_SCHEMA”, which is the currently active default schema. Hence any queries issued without specifying the schema name will default to “CURRENT_SCHEMA”. But there are many cases when actually connecting as another user is required. For example, if the DBA needs to drop and re-create a database link, then the “current_schema” approach will not suffice.  But the proxy authenticated connection alternative will work perfectly.
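For example, a quick sketch of the database link case (the link name and connect details are made up):

SQL> connect USER_B[USER_A]/passw0rd
Connected.
SQL> drop database link remote_db;
SQL> create database link remote_db connect to remote_user identified by remote_pass using 'REMOTE_TNS';

The link is dropped and re-created in USER_A’s schema, something a current_schema switch cannot do.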

Another case where the “current_schema” approach may be an issue is if the application is user aware.  What I mean by this is that possibly the application has some logic such as “if user = USER_A then do stuff“.  If you connect as USER_B and simply change the current schema, then the boolean logic of this condition will evaluate to FALSE.  However, if you use a proxy authenticated connection as user USER_A, the condition will evaluate to TRUE.
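A tiny sketch of that kind of user-aware logic (hypothetical code, reusing USER_A from above):

SQL> connect USER_B[USER_A]/passw0rd
Connected.
SQL> begin
  2    if sys_context('USERENV','SESSION_USER') = 'USER_A' then
  3      dbms_output.put_line('running USER_A-specific logic');
  4    end if;
  5  end;
  6  /

Under the proxy connection, SESSION_USER is USER_A, so the condition evaluates to TRUE, exactly as if USER_A had logged in directly.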

Previously, if the DBA needed to connect to the database as a specific user (maybe to re-create a DB link, for example), they might employ the old trick of temporarily changing the user’s password, quickly connecting, and then quickly changing it back using the extracted/saved password hash.  However, there are numerous serious problems with this approach:

  1. The schema may be locked
  2. The password may be controlled by a PROFILE that may also need to be adjusted.
  3. Account intrusion detection tools may detect the connection.
  4. The connection may not be properly audited via Oracle or external auditing tools.
  5. The application may unsuccessfully try to connect while the password is temporarily changed causing an application failure!

Hence that approach should never be used. The proxy authenticated connection alternative doesn’t have any of those issues and is perfectly safe.

 

A simple example

Putting it all together into a small example to show how the userenv properties are affected:

SQL> alter user USER_A grant connect through USER_B;

User altered.

SQL> connect USER_B[USER_A]/passw0rd
Connected.
SQL> alter session set current_schema = SCOTT;

Session altered.

SQL> select sys_context('USERENV','SESSION_USER') as session_user,
  2  sys_context('USERENV','SESSION_SCHEMA') as session_schema,
  3  sys_context('USERENV','CURRENT_SCHEMA') as current_schema,
  4  sys_context('USERENV','PROXY_USER') as proxy_id,
  5  user
  6  from dual;

SESSION_USER   SESSION_SCHEMA CURRENT_SCHEMA PROXY_ID       USER
-------------- -------------- -------------- -------------- ------------
USER_A         SCOTT          SCOTT          USER_B         USER_A

SQL>

 

As can be seen above, for all intents and purposes the connection has been made to USER_A, but using the credentials of USER_B.  USER_A’s password did not need to be known, nor was the USER_A account affected or adjusted in any way.

FAQs

What if USER_A’s account is locked or its password expired?

The answer is that the connection will still report the same error as it would have if a direct connection to USER_A was made:

SQL> connect / as sysdba
Connected.
SQL> alter user USER_A account lock;

User altered.

SQL> connect USER_B[USER_A]/passw0rd
ERROR:
ORA-28000: the account is locked


Warning: You are no longer connected to ORACLE.
SQL>

Can this be used with other tools such as Data Pump?

(Importing as the actual user instead of a DBA user was necessary with Oracle 10g under specific circumstances, such as importing JOBs and REFRESH GROUPS.)  The answer is yes, it works with Data Pump and other similar tools:

$ impdp dumpfile=temp.dmp nologfile=y include=JOB

Import: Release 11.2.0.4.0 - Production on Wed Oct 15 19:10:13 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: USER_B[USER_A]/passw0rd

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "USER_A"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "USER_A"."SYS_IMPORT_FULL_01":  USER_B[USER_A]/******** dumpfile=temp.dmp nologfile=y include=JOB
Processing object type SCHEMA_EXPORT/JOB
Job "USER_A"."SYS_IMPORT_FULL_01" successfully completed at Wed Oct 15 19:10:22 2014 elapsed 0 00:00:01

$

Notice that the Data Pump master table is created in the USER_A schema even though we connected using the USER_B credentials.

Is proxy authentication supported by JDBC/JDBC thin driver?

Yes, it works through almost any OCI connection including JDBC connections.

What about Oracle Wallets?

The answer again is yes, they can support it too! See below for an example using an Oracle Wallet:

$ mkstore -wrl "/u01/app/oracle/wallet" -createCredential ORCL USER_A passw0rd
Oracle Secret Store Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
Create credential oracle.security.client.connect_string1
$ mkstore -wrl "/u01/app/oracle/wallet" -listCredential
Oracle Secret Store Tool : Version 11.2.0.4.0 - Production
Copyright (c) 2004, 2013, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
1: ORCL USER_A
$

$ sqlplus /@ORCL

SQL*Plus: Release 11.2.0.4.0 Production on Wed Oct 15 13:45:04 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show user
USER is "USER_A"
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

$ sqlplus [app_user]/@ORCL

SQL*Plus: Release 11.2.0.4.0 Production on Wed Oct 15 13:45:14 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show user
USER is "APP_USER"
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
$

 

Reporting and Revoking

Finally, how can we report on the proxy authentication authorizations we’ve granted, in case we need to clean them up (revoke them)?  Or perhaps we just need to report on or audit what’s out there?  Fortunately, it’s as simple as querying a catalog view to see what’s been set, and we can remove/revoke through another simple ALTER USER command:

SQL> select * from PROXY_USERS;

PROXY        CLIENT       AUT FLAGS
------------ ------------ --- -----------------------------------
USER_B       USER_A       NO  PROXY MAY ACTIVATE ALL CLIENT ROLES

SQL> alter user USER_A revoke connect through USER_B;

User altered.

SQL> select * from PROXY_USERS;

no rows selected

SQL>

 

Conclusion

Since the introduction of the GRANT ANY OBJECT PRIVILEGE system privilege with Oracle9i, the number of times that the DBA needs to actually connect as another user has been reduced.  However, there still are some distinct situations, such as those mentioned in the examples above, when connecting as another user may be absolutely necessary.

Thanks to the proxy authenticated connection capabilities introduced with Oracle Database 10g, connecting as another user when you don’t know the other account’s password has become a breeze.  And even if you do know the password, connecting through proxy authentication can still add value with the additional audit information.

Have any other situations where connecting as another user is absolutely necessary? Share them in the comments section below.

 

References

http://docs.oracle.com/cd/E25054_01/network.1111/e16543/authentication.htm
http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions184.htm

Categories: DBA Blogs

Log Buffer #394, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2014-10-27 08:22

This week’s log buffer edition collects some of the insightful blog posts from Oracle, SQL Server, and MySQL.

Oracle:

Oracle StorageTek T10000D Tape Drive achieves FIPS 140-2 Validation.

Oracle is a Leader in IDC MarketScape for Global Trade Management.

The Benefits of Integrating a Google Search Appliance with an Oracle WebCenter or Liferay Portal.

Maintenance Windows is too small? Autotask Jobs fail.

SOA Suite 12c: Querying LDAP directories using the LDAP Adapter.

SQL Server:

Regaining access to SQL server after changing the domain.

NuGet has transformed the ease of getting and installing the latest version of .NET packages, tools and frameworks.

The Mindset of the Enterprise DBA: Delegating Work.

Stairway to SQL PowerShell Level 8: SQL Server PowerShell Provider.

Finding a table name from a page ID.

MySQL:

A few weeks ago, Gharieb received an interesting Galera Cluster support case from one of his customers: the application was not working well and they faced a lot of trouble in their Galera Cluster setup.

Resources for Database Clusters: 9 DevOps Tips, ClusterControl 1.2.8 Release, HAProxy Webinar Replay & More.

Refactoring replication topology with Pseudo GTID.

Improvements to STRICT MODE in MySQL.

Monitoring progress and temporal memory usage of Online DDL in InnoDB.

Categories: DBA Blogs

OCP 12C – Oracle Data Redaction

DBA Scripts and Articles - Mon, 2014-10-27 07:00

What is Oracle Data Redaction? Oracle Data Redaction is meant to mask (redact) sensitive data returned from application queries. Oracle Data Redaction doesn’t change the data on disk; the sensitive data is redacted on the fly before it is returned to the application. You can redact column data by using one of the following methods: Full […]
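
As a hedged illustration of the full redaction method (the schema, table, and column names below are hypothetical; DBMS_REDACT.ADD_POLICY is the documented entry point):

BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'APP',            -- hypothetical schema
    object_name   => 'CUSTOMERS',      -- hypothetical table
    policy_name   => 'redact_ssn',
    column_name   => 'SSN',
    function_type => DBMS_REDACT.FULL, -- full redaction: strings return blank, numbers return 0
    expression    => '1=1');           -- policy applies to every query
END;
/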

The post OCP 12C – Oracle Data Redaction appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Getting Started with Impala Interactive SQL for Apache Hadoop by John Russell; O'Reilly Media

Surachart Opun - Sat, 2014-10-25 11:52
Impala is an open source query engine that runs on Apache Hadoop. With Impala, you can query data, whether stored in HDFS or Apache HBase – including SELECT, JOIN, and aggregate functions – in real time. If you are looking for a book to get started with it, try Getting Started with Impala: Interactive SQL for Apache Hadoop by John Russell (@max_webster). It helps readers write, tune, and port SQL queries and other statements for a Big Data environment using Impala. The SQL examples in this book start from a simple base for easy comprehension, then build toward best practices that demonstrate high performance and scalability. Readers can download a QuickStart VM, install it, and follow along with the examples in the book.
The book doesn’t cover installing Impala or troubleshooting installation and configuration issues. It has five chapters and is not long, but it is enough to guide you through Impala’s interactive SQL, with good examples. Chapter 5, Tutorials and Deep Dives, is the highlight of the book, and its examples are very useful.
Free Sampler.

This book helps readers:
  • Learn how Impala integrates with a wide range of Hadoop components
  • Attain high performance and scalability for huge data sets on production clusters
  • Explore common developer tasks, such as porting code to Impala and optimizing performance
  • Use tutorials for working with billion-row tables, date- and time-based values, and other techniques
  • Learn how to transition from rigid schemas to a flexible model that evolves as needs change
  • Take a deep dive into joins and the roles of statistics
[test01:21000] > select "Surachart Opun" Name,  NOW() ;
Query: select "Surachart Opun" Name,  NOW()
+----------------+-------------------------------+
| name           | now()                         |
+----------------+-------------------------------+
| Surachart Opun | 2014-10-25 23:34:03.217635000 |
+----------------+-------------------------------+
Returned 1 row(s) in 0.14s

Book: Getting Started with Impala: Interactive SQL for Apache Hadoop
Author: John Russell (@max_webster)
Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Recap of yesterday’s Arizona Oracle User Group (AZORA) meeting

Bobby Durrett's DBA Blog - Fri, 2014-10-24 11:15

Yesterday was the first meeting of the newly restarted Arizona Oracle User Group, AZORA.  It was a good first meeting to kick off what I hope will turn into an active club.

We met in the Tempe Public Library in a large downstairs room with a projector screen and plenty of seating with tables, so it was easy to take notes.  We also had snacks.  I got a Paradise Bakery chocolate chip cookie and a bottled water, which put me in a good mood for the afternoon meeting.  There were also some giveaways; I picked up an Oracle pen and notebook to take notes with during the talks.  They also gave away three or four Rich Niemiec books and some gift cards as prizes, but, as usual, I didn’t win anything.

The first talk was about “Big Data”.  I’m skeptical about any buzzword, including big data, but I enjoyed hearing from a technical person who was actually doing this type of work.  I would rather hear an honest perspective from someone in the midst of the battle than get a rosy picture from someone who is trying to sell me something.  Interestingly, the talk was about open source big data software and not about any Oracle product, so it gave me as an Oracle database specialist some insight into something completely outside my experience.

The second talk was about another buzzword – “The Cloud”.  The second talk was as helpful as the first because the speaker had exposure to people from a variety of companies that were actually doing cloud work.  This talk was more directly related to my Oracle database work because you can have Oracle databases and applications based on Oracle databases deployed in the cloud.

It was a good first meeting and I hope to attend and help support future meetings.  Hopefully we can spread the news about the club and get even more people involved and attending so it will be even more useful to all of us.  I appreciate those who put in the effort to kick off this first meeting.

– Bobby





Categories: DBA Blogs

OCP 12C – Resource Manager and Performance Enhancements

DBA Scripts and Articles - Fri, 2014-10-24 07:16

Use Resource Manager for a CDB and a PDB Managing Resources between PDBs The Resource Manager uses shares and a utilization limit to manage resources allocated to PDBs. The more shares you allocate to a PDB, the more resources it will receive. Shares are allocated through DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE. One directive can only concern one PDB and you can’t […]
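
For illustration, a minimal sketch of a directive giving one PDB three shares and a 50% CPU cap (the plan and PDB names are hypothetical, and the CDB plan is assumed to already exist):

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'my_cdb_plan',  -- hypothetical plan, created earlier with CREATE_CDB_PLAN
    pluggable_database => 'PDB1',         -- hypothetical PDB
    shares             => 3,              -- relative weight versus the other PDBs
    utilization_limit  => 50);            -- never more than 50% of CPU
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/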

The post OCP 12C – Resource Manager and Performance Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

OCP 12C – SQL Enhancements

DBA Scripts and Articles - Thu, 2014-10-23 08:20

Extended Character Data Type Columns In this release Oracle changed the maximum size of three data types. In Oracle 12c, if you set a VARCHAR2 to 4000 bytes or less it is stored inline; if you set it to more than 4000 bytes it is transformed into an extended character data type and stored out […]
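
For reference, a hedged sketch of enabling extended data types in a non-CDB (this is a one-way change and the steps differ slightly for a CDB/PDB; the table name is hypothetical):

SQL> shutdown immediate
SQL> startup upgrade
SQL> alter system set max_string_size=extended;
SQL> @?/rdbms/admin/utl32k.sql
SQL> shutdown immediate
SQL> startup
SQL> create table t_demo (long_col varchar2(32767));  -- beyond 4000 bytes, stored out of line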

The post OCP 12C – SQL Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 6

Pythian Group - Thu, 2014-10-23 07:35

Today’s blog post is part six of seven in a series dedicated to Deploying a Private Cloud at Home, where I will be demonstrating how to configure the controller node with legacy networking and the OpenStack dashboard for a web GUI. Feel free to check out part five where we configured the compute node with OpenStack services.

  1. First load the admin variables admin-openrc.sh
    source /root/admin-openrc.sh
  2. Enable legacy networking
    openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
    openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
  3. Restart the Compute services
    service openstack-nova-api restart
    service openstack-nova-scheduler restart
    service openstack-nova-conductor restart
  4. Create the IP pool which will be assigned to the instances we launch later. My network is 192.168.1.0/24. I took a subpool of that range and am using it to assign IPs to the VMs. As the VMs will be on my shared network, I want their IPs in the same range as the other systems on the network.
    Here I am using the subnet 192.168.1.16/28
  5. Create a network
    nova network-create vmnet --bridge br0 --multi-host T --fixed-range-v4 192.168.1.16/28
  6. Verify networking by listing the network
    nova net-list
  7. Install the dashboard. The dashboard gives you a web UI to manage OpenStack instances and services. As we will be using the default configuration, I won’t go into detail on it here.
    yum install -y mod_wsgi openstack-dashboard
  8. Update ALLOWED_HOSTS in local_settings to include the addresses you wish to access the dashboard from. I am running this on my intranet, so I allowed every host on my network, but you can specify exactly which hosts should have access.
    ALLOWED_HOSTS = ['*']
  9. Start and enable Apache web server
    service httpd start
    chkconfig httpd on
  10. You can now access the dashboard at http://controller/dashboard
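
Before moving on, a quick sanity check (a sketch, assuming the admin credentials from step 1 are still loaded) that the Compute services and the new network registered correctly:

    nova service-list
    nova net-list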

 

This completes the configuration of the OpenStack private cloud. We can use the same guide for the Rackspace private cloud, as it too is based on OpenStack Icehouse, but that is for another time.

Now that we have a working IaaS cloud, we can configure any SaaS on top of it, but that will require another series altogether.

Stay tuned for part seven, our final post in the series Deploying Private Cloud at Home, where I will be sharing scripts that will automate the installation and configuration of controller and compute nodes.

Categories: DBA Blogs

Reminder: first Arizona Oracle User Group meeting tomorrow

Bobby Durrett's DBA Blog - Wed, 2014-10-22 14:02

Fellow Phoenicians (citizens of the Phoenix, Arizona area):

This is a reminder that tomorrow is the first meeting of the newly reborn (risen from the ashes) Arizona Oracle User Group.  Here are the meeting details: url

I hope to meet some of my fellow Phoenix area DBAs tomorrow afternoon.

– Bobby

 


Categories: DBA Blogs

OCP 12C – DataPump, SQL*Loader, External Tables Enhancements

DBA Scripts and Articles - Wed, 2014-10-22 13:57

Oracle DataPump Enhancements Full Transportable Export and Import of Data In Oracle 12c you now have the ability to create full transportable exports and imports. A full transportable export contains all objects and data needed to create a copy of the database. To create a fully transportable export of your database you need to specify […]
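
A hedged sketch of what such an export invocation can look like (user tablespaces are assumed to have been made read-only first; the directory and file names are hypothetical):

$ expdp system/password full=y transportable=always \
    directory=dp_dir dumpfile=full_tts.dmp logfile=full_tts.log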

The post OCP 12C – DataPump, SQL*Loader, External Tables Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

MySQL: Troubleshooting an Instance for Beginners

Pythian Group - Wed, 2014-10-22 09:17

As you may know, my new position involves the MySQL world, so I’m in the midst of picking up the language and ways of this DBMS. My teammate Alkin Tezuysal (@ask_dba on Twitter) has a very cool break/fix lab which he will be running at Percona Live London 2014, so if you are going, be sure not to miss it.

The first thing I tried was to bring up the service, but to my surprise, the mysql OS user didn’t exist, so the first thing I did was create it.

Note: Whenever you see “…”, the output has been trimmed for brevity.

[user-lab@ip-10-10-10-1 ~]$ service mysqld start
touch: cannot touch ‘/var/log/mysqld.log’: Permission denied
chown: invalid user: ‘mysql:mysql’
chmod: changing permissions of ‘/var/log/mysqld.log’: Operation not permitted
mkdir: cannot create directory ‘/var/lib/msql’: Permission denied
[user-lab@ip-10-10-10-1 ~]$ id mysql
id: mysql: no such user
[user-lab@ip-10-10-10-1 ~]$ sudo useradd mysql

Now that the user exists, I try to bring the service up, but we are back at square one: a configuration variable in the .cnf file is incorrect. And there is another wrinkle, as there is more than one .cnf file.

[user-lab@ip-10-10-10-1 ~]$ sudo su -
Last login: Thu Jul 31 11:37:21 UTC 2014 on pts/0
Last failed login: Tue Oct 14 05:45:47 UTC 2014 from 60.172.228.40 on ssh:notty
There were 1269 failed login attempts since the last successful login.
[root@ip-10-10-10-1 ~]# service mysqld start
Initializing MySQL database: Installing MySQL system tables...
141014 17:05:46 [ERROR] /usr/libexec/mysqld: unknown variable 'tmpd1r=/var/tmp'
141014 17:05:46 [ERROR] Aborting

141014 17:05:46 [Note] /usr/libexec/mysqld: Shutdown complete

Installation of system tables failed! Examine the logs in
/var/lib/msql for more information.

...

 [FAILED]

In the Oracle world, this is easier to troubleshoot. Here in the MySQL world, the best way to see which .cnf files are actually being read is with an strace command.

[root@ip-10-10-10-1 ~]# strace -e trace=open,stat /usr/libexec/mysqld
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libaio.so.1", O_RDONLY|O_CLOEXEC) = 3
...
stat("/etc/my.cnf", {st_mode=S_IFREG|0644, st_size=255, ...}) = 0
open("/etc/my.cnf", O_RDONLY)           = 3
stat("/etc/mysql/my.cnf", 0x7fffe4b38120) = -1 ENOENT (No such file or directory)
stat("/usr/etc/my.cnf", {st_mode=S_IFREG|0644, st_size=25, ...}) = 0
open("/usr/etc/my.cnf", O_RDONLY)       = 3
stat("/root/.my.cnf", {st_mode=S_IFREG|0644, st_size=33, ...}) = 0
open("/root/.my.cnf", O_RDONLY)         = 3
...
141014 17:12:05 [ERROR] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root!
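
As an aside, if strace isn’t handy, mysqld itself can report the order in which it reads option files. A quick check (the exact paths depend on how your build was compiled) looks something like this:

[root@ip-10-10-10-1 ~]# /usr/libexec/mysqld --verbose --help 2>/dev/null | grep -A1 'Default options'
Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf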

Now I can see that /usr/etc/my.cnf is the one with the misspelled variable, so we correct it.

[root@ip-10-10-10-1 ~]# cat /usr/etc/my.cnf
[mysqld]
tmpd1r=/var/tmp
[root@ip-10-10-10-1 ~]# sed -i -e 's/tmpd1r/tmpdir/' /usr/etc/my.cnf
[root@ip-10-10-10-1 ~]# cat /usr/etc/my.cnf
[mysqld]
tmpdir=/var/tmp

Another try, and again the same result, but even worse this time, as there is no output on the console. After digging around, I found that the place to look is /var/log/mysqld.log, and the problem was that some of the system table files belonged to the root user instead of the mysql user.

[root@ip-10-10-10-1 ~]# service mysqld start
MySQL Daemon failed to start.
Starting mysqld:                                           [FAILED]
[root@ip-10-10-10-1 ~]# cat /var/log/mysqld.log
141014 17:16:33 mysqld_safe Starting mysqld daemon with databases from /var/lib/msql
141014 17:16:33 [Note] Plugin 'FEDERATED' is disabled.
/usr/libexec/mysqld: Table 'mysql.plugin' doesn't exist
141014 17:16:33 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
141014 17:16:33 InnoDB: The InnoDB memory heap is disabled
141014 17:16:33 InnoDB: Mutexes and rw_locks use GCC atomic builtins
141014 17:16:33 InnoDB: Compressed tables use zlib 1.2.7
141014 17:16:33 InnoDB: Using Linux native AIO
/usr/libexec/mysqld: Can't create/write to file '/var/tmp/ib1rikjr' (Errcode: 13)
141014 17:16:33  InnoDB: Error: unable to create temporary file; errno: 13
141014 17:16:33 [ERROR] Plugin 'InnoDB' init function returned error.
141014 17:16:33 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
141014 17:16:33 [ERROR] Unknown/unsupported storage engine: InnoDB
141014 17:16:33 [ERROR] Aborting

141014 17:16:33 [Note] /usr/libexec/mysqld: Shutdown complete
[root@ip-10-10-10-1 ~]# perror 13
Error code 13: Permission denied
[root@ip-10-10-10-1 ~]# ls -l /var/lib/mysql/mysql/plugin.*
-rw-rw---- 1 root root 8586 Mar 13  2014 /var/lib/mysql/mysql/plugin.frm
-rw-rw---- 1 root root    0 Mar 13  2014 /var/lib/mysql/mysql/plugin.MYD
-rw-rw---- 1 root root 1024 Mar 13  2014 /var/lib/mysql/mysql/plugin.MYI
[root@ip-10-10-10-1 ~]# chown -R mysql:mysql /var/lib/mysql/mysql/

So I think, yay, I’m set and it will come up! I give it one more shot and, you guessed it, the same result with a different error :( This time around the problem was that the configured InnoDB buffer pool was far larger than the memory available on the machine, so we shrink it.

[root@ip-10-10-10-1 ~]# service mysqld start
MySQL Daemon failed to start.
Starting mysqld:                                           [FAILED]
141014 17:36:15 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
141014 17:36:15 [Note] Plugin 'FEDERATED' is disabled.
141014 17:36:15 InnoDB: The InnoDB memory heap is disabled
141014 17:36:15 InnoDB: Mutexes and rw_locks use GCC atomic builtins
141014 17:36:15 InnoDB: Compressed tables use zlib 1.2.7
141014 17:36:15 InnoDB: Using Linux native AIO
141014 17:36:15 InnoDB: Initializing buffer pool, size = 100.0G
InnoDB: mmap(109890764800 bytes) failed; errno 12
141014 17:36:15 InnoDB: Completed initialization of buffer pool
141014 17:36:15 InnoDB: Fatal error: cannot allocate memory for the buffer pool
141014 17:36:15 [ERROR] Plugin 'InnoDB' init function returned error.
141014 17:36:15 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
141014 17:36:15 [ERROR] Unknown/unsupported storage engine: InnoDB
141014 17:36:15 [ERROR] Aborting
[root@ip-10-10-10-1 ~]# grep 100 /etc/my.cnf
innodb_buffer_pool_size=100G
[root@ip-10-10-10-1 ~]# sed -i -e 's/100G/256M/' /etc/my.cnf
[root@ip-10-10-10-1 ~]# grep innodb_buffer_pool_size /etc/my.cnf
innodb_buffer_pool_size=256M

By now, I’m not even expecting this instance to come up, and I am correct: it seems the InnoDB data files have incorrect ownership.

[root@ip-10-10-10-1 ~]# service mysqld start
MySQL Daemon failed to start.
Starting mysqld:                                           [FAILED]
root@ip-10-10-10-1 ~]# cat /var/log/mysqld.log
...
141014 17:37:15 InnoDB: Initializing buffer pool, size = 256.0M
141014 17:37:15 InnoDB: Completed initialization of buffer pool
141014 17:37:15  InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name ./ibdata1
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
141014 17:37:15 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
[root@ip-10-10-10-1 ~]# ls -l /var/lib/mysql/ibdata1
-rw-rw---- 1 27 27 18874368 Mar 13  2014 /var/lib/mysql/ibdata1
[root@ip-10-10-10-1 ~]# ls -l /var/lib/mysql
total 83980
-rw-rw---- 1    27    27 18874368 Mar 13  2014 ibdata1
-rw-rw---- 1    27    27 33554432 Mar 13  2014 ib_logfile0
-rw-rw---- 1    27    27 33554432 Mar 13  2014 ib_logfile1
drwx------ 2 mysql mysql     4096 Mar 13  2014 mysql
drwx------ 2 root  root      4096 Mar 13  2014 performance_schema
drwx------ 2 root  root      4096 Mar 13  2014 test
[root@ip-10-10-10-1 ~]# chown -R mysql:mysql /var/lib/mysql

At this point I wasn’t expecting the service to come up, but to my surprise, it did!

[root@ip-10-10-10-1 ~]# service mysqld start
Starting mysqld:                                           [  OK  ]

Now all I wanted to do was connect and start working, but again, there was another error! I saw that it was related to the socket file mysql.sock: the client was looking for it in /tmp while the server had created it in /var/lib/mysql, so I changed the [client] setting in the .cnf file to the correct value.

[root@ip-10-10-10-1 ~]# mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
[root@ip-10-10-10-1 ~]# service mysql status
mysql: unrecognized service
[root@ip-10-10-10-1 ~]# service mysqld status
mysqld (pid  5666) is running...
[root@ip-10-10-10-1 ~]# ls -l /tmp/mysql.sock
ls: cannot access /tmp/mysql.sock: No such file or directory
[root@ip-10-10-10-1 ~]# grep socket /var/log/mysqld.log | tail -n 1
Version: '5.5.36'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)
[root@ip-10-10-10-1 ~]# lsof -n | grep mysqld | grep unix
mysqld    5666    mysql   12u     unix 0xffff880066fbea40       0t0     981919 /var/lib/mysql/mysql.sock
[root@ip-10-10-10-1 ~]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql

innodb_data_file_path=ibdata1:18M
innodb_buffer_pool_size=256M
innodb_log_file_size=32M
sort_buffer_size=60M

[client]
socket=/tmp/mysql.sock
[root@ip-10-10-10-1 ~]# vi /etc/my.cnf
[root@ip-10-10-10-1 ~]# service mysqld restart
Stopping mysqld:                                           [  OK  ]
Starting mysqld:                                           [  OK  ]
[root@ip-10-10-10-1 ~]# mysql -p
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.5.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
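
As an aside, instead of editing the .cnf file, you could also have connected right away by pointing the client at the server’s actual socket, e.g. (a sketch using the socket path from the lsof output above):

[root@ip-10-10-10-1 ~]# mysql --socket=/var/lib/mysql/mysql.sock -p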

Conclusion

As you can see, there are different ways to troubleshoot the startup of a MySQL instance. I hope this helps you out on your journey as you start using this DBMS, and if you know of another way, let me know in the comments section below.

Please note that this blog post was originally published on my personal blog.

Categories: DBA Blogs

OCP 12C – Partitioning Enhancements

DBA Scripts and Articles - Wed, 2014-10-22 09:13

Online Partition operations Table partitions and subpartitions can now be moved online. Compression options can also be added during an online partition move. Reference Partitioning Enhancements Truncate or Exchange Partition with Cascade option With Oracle 12c, it is now possible to use the CASCADE option to cascade operations to a child-referenced table when […]
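
In place of the stripped-out code samples, a hedged illustration of the online move and the cascading truncate (the table and partition names are hypothetical):

ALTER TABLE sales MOVE PARTITION p_2014_q1 ONLINE UPDATE INDEXES;              -- online move, indexes maintained
ALTER TABLE sales MOVE PARTITION p_2014_q1 ROW STORE COMPRESS ADVANCED ONLINE; -- add compression during the move
ALTER TABLE orders TRUNCATE PARTITION p_old CASCADE;                           -- cascades to the child-referenced table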

The post OCP 12C – Partitioning Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs