
Feed aggregator

Going to Oracle Open World? PeopleSoft Your Primary Interest?

PeopleSoft Technology Blog - Thu, 2014-08-28 15:33
We look forward to Oracle Open World every year for a number of reasons.  Chief among them is the opportunity to interact with customers and partners in person.  We also relish the opportunity to show you the latest PeopleSoft applications and tools--the stuff we've been working on over the past year.  If you are attending the conference and building your schedule, there is a handy document on-line that provides information on most or all of the PeopleSoft-focused activities at the conference, including sessions/presentations, meet the experts, demos, exhibition schedules, SIG meetings, user group gatherings and receptions, and more.  It's going to be a great week!  Hope to see you there.

Cal State Online: Public records shed light on what happened

Michael Feldstein - Thu, 2014-08-28 14:35

Last month I shared the system announcement that the Cal State Online (CSO) initiative is finished. Despite the phrasing of “re-visioning” and the retention of the name, the concept of a standalone unit to deliver and market online programs for the system is gone. Based on documents obtained by e-Literate through a public records request:[1]

  • The original concept of “a standardized, centralized, comprehensive business, marketing and outreach support structure for all aspects of online program delivery for the Cal State University System” was defined in summer 2011, formally launched in Spring 2013, and ultimately abandoned in Fall 2013;
  • CSO was only able to enroll 130 full-time equivalent students (FTES) in CY2013 despite starting from pre-existing campus-based online programs and despite minimum thresholds of 1,670 FTES in the Pearson contract;
  • CSO was able to sign up only five undergraduate degree-completion programs and two master’s programs offered at four of the 23 Cal State campuses;
  • Faculty groups overtly supported investments in online education but did not feel included in the key decision processes;
  • Pearson’s contract as a full-service Online Service Provider was in place for less than one year before contract renegotiations began, ultimately leading to LMS services only; and
  • The ultimate trigger to abandon the original model was the $10 million state funding for online education to address bottleneck courses.

That last one might seem counter-intuitive without the understanding that CSO did not even attempt to support matriculated Cal State students in state-funded programs.

Terminology note: CSO measured course enrollments as “one student registered in one online course”, such that one student taking two courses would equal two course enrollments, etc. Internally CSO calculated 10 course enrollments = 1 FTES.
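By that conversion, the 130 FTES cited above for CY2013 works out to roughly 1,300 course enrollments (130 × 10).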

Below is a narrative of the key milestones and decisions as described by the public documents. I’ll share more of my thoughts in a future post.

2011

Based on foundational work done in 2010 by the Technology Steering Committee (TSC), a group of nine campus presidents along with six Chancellor’s Office staff, a contract was awarded to a consultant (Richard Katz and Associates) to produce five reports on online learning (link will download zip file) and the Cal State University system’s work to date. The TSC then produced an overview document for what would become CSO in June 2011, including 10 guiding principles and the first schedule estimate. An October 2011 update document further clarified the plans. Some key decisions made in 2011 included forming a separate 501(c)3 organization owned by Cal State University and funding the creation of CSO through a contribution of $50,000 from each of the 23 CSU campuses.

Two key decisions from this period are worth highlighting, as they explain much of the trajectory of CSO in retrospect. The first one defined the need for an Online Service Provider (ultimately chosen as Pearson).

A business partner for CSU Online might be needed in order to provide the necessary student support services, including, for example, advising, financial aid, career services, and tutoring. In addition, a business partner could provide the 24/7/365 help desk support absolutely critical for CSU Online. Market research and marketing of programs are other potential areas for the contributions of a business partner. Instructional design support for faculty is another potential area, as is technological support for the effort.

The second decision defined a strategy in terms of which types of online programs to add in which order.

Following from the bedrock of our Principles, the TSC supported a tactical entrance into CSU Online by focusing on those areas in which CSU campuses are already strong and proficient. We believe that it is imperative to start from a position of program strength rather than to straggle into the market in areas as yet not fully defined or ready for implementation. Accordingly, the TSC recommends that CSU Online address six areas, with two ready for immediate roll out.

  1. The 60 or so Masters level programs that exist throughout the CSU should comprise our initial effort with an eye toward serving the extensive mid-career professional and unemployed adults who are in need of this level of education to advance their careers.
  2. Our second focus should entail the presentation of two or three degree completion programs in an effort to enhance workforce development.

An important note on both of these areas is that they are both self-support, offered through continuing or extended education groups and not eligible for state funding. These self-support programs do not have the same constraints on setting tuition and tend to set it significantly higher than state-support mainline programs.

The overview also estimated the timeline to include an RFP for a commercial partner (OSP) to be released in Fall 2011.

By late 2011 there were already signs of faculty discontent over the limited inclusion of faculty in CSO decision-making and over the planned use of a commercial partner. The Cal State Dominguez Hills faculty senate resolved in November:

Growing faculty concerns about the minimal faculty input in the development of the Online Initiative, as well as the direction the Initiative may be taking have led three Academic Senates (CSUSB, CSU Stanislaus, and Sonoma State) to pass resolutions calling for the suspension of the Initiative until basic issues are addressed and approved by campus senates. In addition a “CSU Online Faculty Task Force,” consisting of over 80 faculty across the CSU, has been actively questioning features of the Initiative and has written an open letter to Chancellor Reed expressing opposition to outsourcing to for‐profit online providers or attempts to circumvent collective bargaining.

The task force open letter can be found here.

2012

The RFP was actually released in April 2012. To my reading, the document was disorganized and lacked enough structure to let bidders know what to expect or what was needed. On schedule and enrollments, the RFP advised the following:

1.5 Cal State Online expects to officially launch in January 2013, with as many as ten degree programs. For the late fall 2012 term (beginning in late October 2012) Cal State Online anticipates offering two to three courses in several programs in a live beta test term.

1.6 ENROLLMENT PROJECTIONS Vendors should base proposals on 1,000 three unit course enrollments in year one and 3,000 three unit course enrollments in year two.

The RFP evaluation process was described in the first CSO Advisory Board meeting notes from June 2012, showing that the final choice came down to Pearson and Academic Partnerships. Pearson was selected as the partner, and their contract[2] has an unexplained change in enrollments.

The spending amounts detailed below (which may also be increased as appropriate, in Pearson’s discretion) are dependent on Cal State Online meeting the defined Enrollment thresholds for the prior calendar year. If Cal State Online does not meet such thresholds, the spending amounts for the then-current calendar year will be adjusted to reflect the actual number of enrollments achieved during the previous calendar year.

Pearson Thresholds

I do not know how the numbers went from an estimate of 1,000 course enrollments for 2013 in the RFP to a minimum of 16,701 course enrollments for 2013 in the contract. In retrospect, this huge increase can be described as wishful thinking, perhaps with the goal of making the financial case work for both CSO and Pearson.
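(By CSO’s own 10-to-1 conversion, the contract’s 16,701 course enrollments for 2013 is the same target as the 1,670 FTES minimum threshold cited at the top of this post: 16,701 ÷ 10 ≈ 1,670.)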

The Advisory Board also decided in the June 2012 meeting to set standardized tuition for CSO at $500 per unit (compared with approximately $270 per unit for a traditional campus student taking 12 units per semester).

By October CSO had identified the specific campus programs interested in participating, documented in the Launch Programs Report. The first page called out two of the first programs bringing in 200 students and 20 students – in other words, CSO migrated several hundred students to get started.

[Image: Launch Programs Report, October 2012, page 1 of 3]

2013: Winter and Spring

In the Spring 2013 term, CSO kicked off with the Launch Programs described in the February 2013 Advisory Board meeting minutes.

Launch Programs: 6 Programs from 3 Campuses

  • CSU Fullerton launched 3 courses in their online Business BA program January 14th 2013; marketing and recruiting of next group of students in progress. 35 + 18 Existing Students.
  • CSU Dominguez Hills will launch their BA MBA and PA MPA online programs in spring 2013; marketing and recruiting students is in progress. BA Applied Studies will launch in summer 2013; first CSU reconnect program.
  • CSU Monterey Bay will launch two new masters programs, Technology and MS in IT Management in spring 2013 and MS in Instructional Science and Technology will launch in summer 2013. Marketing to begin ASAP.

The notes also call out a financial model (document not shared with Advisory Board but notes taken) with three scenarios.

Three scenarios:

  • Scenerio [sic] 1: Baseline Growth Modeling where projected enrollments grom [sic] from 188 to 7500; programs grom from 3 to 25; revenues from to over $11 million and additional investment required $2.2 million. Break even in FY 12/14.
  • Scenario 2: Break Even in fiscal year 2012/14 Modeling where enrollments from from 188 to 15,750, programs grom from 3 to 30, revenues grom to over 23 million and additional investment required is $1 million.
  • Scenario 3: Best/Strong Growth where enrollments grow from 254 to 36,250, programs grow from 3 to 50, revenues grow to over $54 million and additional investment required is $1 million.

The budget planning seems to fall on fiscal years (Jul 1 – Jun 30), whereas all other CSO planning was based on calendar years. Note that the best case scenario included an additional $1 million in CSU investment, and the baseline scenario estimated 7,500 course enrollments from Fall 13 thru Spring 14. Based on an email exchange with CSU Public Affairs, Fall 13 saw almost 1,200 course enrollments, which would have required a six-fold increase in Spring 14 just to make the baseline scenario.

Update: Also in February, CSO executive director Ruth Claire Black testified at the Little Hoover Commission (an independent state oversight board in California), describing the CSO initiative as part of a discussion on state needs in higher education.

By the April Advisory Board meeting, CSO was seeing some positive interest from campuses, although the numbers were fairly modest compared to previous CSO estimations.

April Launch Report

  • Fullerton business degree completion program is making good progress; 83 applications pending, 17 admitted for fall. Heavily oversubscribed for Fullerton. Good review from stundents on coaching. 50% of inquiries are for Fullerton program.
  • Dominguez Hills BS Applied Studies program starts May 4. Large cohort of existing students. 13 students admitted for summer; fall 17 students admitted.
  • The next undergraduate program will be the Northridge Reconnect program. In the next 30 days website will be updated to reflect Reconnect.
  • Fresno MBA 60 inquiries; 1 applicant and 1 admission
  • Other 4 grad programs slow build; redirect marketing resources towards masters programs
  • Fresno Homeland Security Certificate website and Humboldt Golden Four are up on website. We are seeing equal demand across the courses (3 GE courses)
  • Interest list has grown significantly; campuses who are not currently participating Cal State Online is full for fall. If existing Cal State Online campus may have capacity. Sociology at Fullerton. Dominguez Hills QA for fall start. Taking advantage of launch financial model.

The notes showed the group watching new activity from the California state legislature regarding online education, including the infamous SB 520.[3] This raised the question of what Cal State Online’s role should be with this new emphasis. [emphasis added below]

Can Cal State Online fulfill the role of putting all online? Where should we focus? State side or Cal State Online. Chancellor wants this to happen. Ruth and Marge are working on a plan. Need to be cautious to not cause confusion to students and not diminish Cal State Online.

Requirement of bill is that courses must be articulated statewide. Makes sense for Cal State Online to take ownership.

In May the CSU faculty senate passed a resolution calling on Cal State Online to promote all online programs and not just the six run through CSO.

RESOLVED: That all online degree programs offered by CSU campuses be given the same degree of prominence on the Calstateonline.com and Calstateonline.net websites as the online degree programs offered through Cal State Online; and be it further

RESOLVED: That there should be no charge for listing state­support online degree programs on the Calstateonline.com and Calstateonline.net websites;

By the June Advisory Board meeting, there was some progress for Fall enrollments, and there was concern that the state legislature did not understand the bottleneck problem.

Legislature thinks that if students knew about online courses our bottleneck problem would be solved. State is not funding FTES. Enrolling students online will need state subsidy. There is a belief that we can educate students online cheaply. There is a disconnect in Sacramento. Enrollment caps are more the issue, not bottlenecks.

There was also an enrollment presentation for the June meeting (a PDF embedded in the original post).

2013: Summer and Fall

Despite planned meetings every two months, the CSO Advisory Board did not meet again until October, and in this interim the decision was made to abandon the original concept and to change the Pearson contract. Advisory Board members were not pleased with the process.

In early summer Pearson requested changes in the CSU/Pearson contract; wanted to increase CSU costs for services. The quality of the marketing provided by Pearson was not adequate. There were multiple meetings between Pearson and Cal State Online to resolve concerns resulting in changes to the contract.

The new marketing firm for Cal State Online is DENT; replaces Pearson; started in July 2013. So far there is a high level of satisfaction

A communication was distributed to the Advisory Board and CSU system stakeholders on October 17th regarding the Pearson/Cal State Online contract changes. The communication can be found on the Cal State Online CSYOU site [ed. no longer available].

Discussion/Comments: 

  • Members of the Advisory Board stated that there was little to no communication to them about the changes taking place. The last board meeting was a telelconference call in June and the August in-person meeting was cancelled.
    • There was a need to keep only a small number of people involved during the complicated negotiation process

The CSO entity was never formed as a 501(c)3 organization, and with the summer changes CSO would now report to Academic Affairs. The meeting notes further describe the changes.

The current Cal State Online business model will be in place until the end of 2013 and will then change. The Advisory Board will help identify opportunities and provide direction. It is anticipated that this will result in some changes in current program participation but hope that the current campuses will continue. Since campuses now have the option to use the LMS platform of their choice some campuses may elect to change to their own platform. [snip]

The Governor contributed $10 million to increase online education within the CSU. AB 386 Levine. Public postsecondary education: cross-enrollment: online education at the California State University was approved by the Governor on September 26, 2013 [emphasis added].

  • With the changes in the Pearson relationship and the passing of AB 386 we are now taking a much broader view of Cal State Online; will be used as a store front for CSU online courses. All online courses and programs in system will have Cal State Online as the store front.

The CSU faculty senate unanimously passed another resolution related to CSO in November. The resolution applauded the movement of CSO to report to Academic Affairs and the allowance for campus selection of LMS, but the real focus was the lack of faculty input in the decision-making.

RESOLVED: That the Academic Senate of the California State University (ASCSU) express its dismay that recent changes to Cal State Online were announced to system constituencies without review or input from the Cal State Online Advisory Board; and be it further [snip]

RESOLVED: That the ASCSU contend that the dissolution of the Cal State Online Board should not occur until a plan for a new governance structure that includes faculty is established, and be it further

RESOLVED: That the ASCSU recommend the establishment of a newly configured Cal State Online system­ wide advisory committee to include at least 5 faculty members, and the creation of a charge, in a partnership between the ASCSU and the Academic Affairs division of the Chancellor’s Office;

This issue – involvement in decision-making – was continued at the final Advisory Board meeting just three days after the senate resolution.

Ephraim Smith (VP Academic Affairs): The Cal State Online Board was originally created for a 501c3 organization but there was a change in direction and did not pursue 501c3; board then acted as advisory. Now that Cal State Online hase moved to Academic Affairs the question is how should it interact with constituencies; work through existing committees? Need to discuss.

There are three full pages of notes on the resultant discussion, which ended in a plan to form a Commission that looks broadly at online education across the CSU.

2014

Although the decision on the major changes to Cal State Online was made in Fall 2013, the systemwide communication listed in my July post was not made until June 2014. The above description is mostly based on CSO documentation, but I plan to add a few of my own thoughts on the lessons learned from this short-lived online initiative in a future post.

  1. CSU officials did not respond to requests to be interviewed for this story. The offer is still open if someone would like to comment.
  2. The contract is no longer available in public, so I will only share one excerpt here.
  3. Disclosure: Michael and I wrote a white paper for 20 Million Minds Foundation calling out how Cal State Online did not attempt to address relieving bottleneck courses for matriculated students, which was the purported goal of much of the state legislative debate.

The post Cal State Online: Public records shed light on what happened appeared first on e-Literate.

Oracle EBS Techno Functional Support: Additional Services Series Pt. 4 [VIDEO]

Chris Foot - Thu, 2014-08-28 13:37

Transcript

Welcome back to our Additional Services series. Today we’re highlighting our Oracle EBS Techno Functional Support, a feature we offer to help customers make sure their Oracle applications are running properly.

At RDX we offer full Oracle EBS support from a team of experts, ensuring your mission-critical environments are available 24×7. Our team helps you customize your applications to meet business needs, and even provides advice about the best features to use so you can take advantage of advanced functionality. When problems do occur, RDX assigns experts to work Severity 1 issues around the clock.

Our dedicated EBS experts have cross-functional experience and adhere to industry best practices. We’ll also assign project managers to ensure we are on time and on budget with projects.

For more information on the full breadth of our Oracle EBS techno functional support, follow the link below! We’ll see you next time.

The post Oracle EBS Techno Functional Support: Additional Services Series Pt. 4 [VIDEO] appeared first on Remote DBA Experts.

Tungsten Replicator: MariaDB Master-Master and Master-Slave Topologies

Pythian Group - Thu, 2014-08-28 12:45

A common concern in the MySQL community is how best to implement high availability for MySQL. There are various built-in mechanisms to accomplish this, such as Master/Master and Master/Slave replication using binary logs, as well as FOSS solutions such as Galera and Tungsten, just to name a few. Oftentimes, IT Managers and DBAs alike opt to avoid implementing a third-party solution because of the added administrative overhead, without fully evaluating the available options. In today’s blog post, I would like to describe the process for configuring a Master/Slave topology and switching to a Master/Master topology with Tungsten Replicator.

Tungsten Replicator is a well-known tool that has gained much acclaim in the area of MySQL enterprise database implementation; however, many teams tend to stay away from it to avoid over-complicating the replication topology. I have listed and described all of the steps required to configure a replication topology for 1 to N nodes (today’s how-to guide covers a 2-node implementation, but I will describe the additional steps that would be required to implement these topologies for N nodes).

The 2 nodes I will be using are vm128-142 and vm129-117; the first part of the document contains the steps that need to be performed on both nodes, and the latter describes the steps to be performed on either one of the two nodes. As soon as Tungsten Replicator has been installed on both nodes with the same configuration files, the switch is as simple as “one, two, three” – all it requires is running the script that configures the topology of your choice. The main topologies that are available are:

  • Master – Slave: Replication flowing from 1 .. N nodes using Tungsten Replicator
  • Master – Master: Bi-directional replication for 1 .. N nodes
  • Star Topology: A central node acts as a hub and all spokes are Master nodes
  • Fan-in Topology: A single slave node with replication from 1 .. N Master nodes

(Check out https://code.google.com/p/tungsten-replicator/wiki/TRCMultiMasterInstallation for further details)
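For reference, each of these topologies maps to a recipe script in the cookbook directory of the tarball. The master-slave and all-masters recipes are demonstrated at the end of this post; assuming the 2.2.1 tarball you download includes the corresponding recipes (mine did), the other two topologies are installed the same way:

cookbook/install_star    # hub-and-spoke: a central node relays between master spokes (assumes the star recipe is shipped)
cookbook/install_fan_in  # several masters replicating into a single slave (assumes the fan-in recipe is shipped)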

So, let’s continue with the actual steps required (please note I’m using the “root” account with SSH passwordless authentication for the purposes of this article; it is best to define another user on production systems). The parameters and values in red text require customization for your system / topology. The configuration files are all indented in the text and shown in royal blue:

### The following commands should be executed on all nodes (vm128-142 & vm129-117 in this how-to)

su - root
cd /root # or alternatively to a place like /opt/ or /usr/local/
vi /etc/yum.repos.d/MariaDB.repo

 # MariaDB 5.5 CentOS repository list - created 2014-08-25 16:59 UTC
 # http://mariadb.org/mariadb/repositories/
 [mariadb]
 name = MariaDB
 baseurl = http://yum.mariadb.org/5.5/centos6-amd64
 gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
 gpgcheck=1

vi /etc/security/limits.conf

 # add the following line
 * - nofile 65535

yum update

yum install wget MariaDB-server MariaDB-client ruby openssh-server rsync 
yum install java-1.7.0-openjdk-1.7.0.65-2.5.1.2.el6_5.x86_64 
yum install http://www.percona.com/downloads/XtraBackup/LATEST/binary/redhat/6/x86_64/percona-xtrabackup-2.2.3-4982.el6.x86_64.rpm
ln -s /usr/bin/innobackupex /usr/bin/innobackupex-1.5.1

wget http://downloads.tungsten-replicator.org/download.php?file=tungsten-replicator-2.2.1-403.tar.gz
tar -xzvf download.php\?file\=tungsten-replicator-2.2.1-403.tar.gz
rm download.php\?file\=tungsten-replicator-2.2.1-403.tar.gz
cd tungsten-replicator-2.2.1-403/

vi cookbook/COMMON_NODES.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 export NODE1=vm128-142.dlab.pythian.com
 export NODE2=vm129-117.dlab.pythian.com
 #export NODE3=host3
 #export NODE4=host4
 #export NODE5=host5
 #export NODE6=host6
 #export NODE7=host7
 #export NODE8=host8

vi cookbook/USER_VALUES.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 # User defined values for the cluster to be installed.

 cookbook_dir=$(dirname $0 )

 # Where to install Tungsten Replicator
 export TUNGSTEN_BASE=/opt/tungsten-replicator/installs/cookbook

 # Directory containing the database binary logs
 export BINLOG_DIRECTORY=/var/lib/mysql

 # Path to the script that can start, stop, and restart a MySQL server
 export MYSQL_BOOT_SCRIPT=/etc/init.d/mysql

 # Path to the options file
 export MY_CNF=/etc/my.cnf

 # Database credentials
 export DATABASE_USER=tungsten
 export DATABASE_PASSWORD=tungsten
 export DATABASE_PORT=3306

 # Name of the service to install
 export TUNGSTEN_SERVICE=cookbook

 # Replicator ports
 export RMI_PORT=10000
 export THL_PORT=2112

 # If set, replicator starts after installation
 [ -z "$START_OPTION" ] && export START_OPTION=start

 ##############################################################################
 # Options used by the "direct slave " installer only
 # Modify only if you are using 'install_master_slave_direct.sh'
 ##############################################################################
 export DIRECT_MASTER_BINLOG_DIRECTORY=$BINLOG_DIRECTORY
 export DIRECT_SLAVE_BINLOG_DIRECTORY=$BINLOG_DIRECTORY
 export DIRECT_MASTER_MY_CNF=$MY_CNF
 export DIRECT_SLAVE_MY_CNF=$MY_CNF
 ##############################################################################

 ##############################################################################
 # Variables used when removing the cluster
 # Each variable defines an action during the cleanup
 ##############################################################################
 [ -z "$STOP_REPLICATORS" ] && export STOP_REPLICATORS=1
 [ -z "$REMOVE_TUNGSTEN_BASE" ] && export REMOVE_TUNGSTEN_BASE=1
 [ -z "$REMOVE_SERVICE_SCHEMA" ] && export REMOVE_SERVICE_SCHEMA=1
 [ -z "$REMOVE_TEST_SCHEMAS" ] && export REMOVE_TEST_SCHEMAS=1
 [ -z "$REMOVE_DATABASE_CONTENTS" ] && export REMOVE_DATABASE_CONTENTS=0
 [ -z "$CLEAN_NODE_DATABASE_SERVER" ] && export CLEAN_NODE_DATABASE_SERVER=1
 ##############################################################################


 #
 # Local values defined by the user.
 # If ./cookbook/USER_VALUES.local.sh exists,
 # it is loaded at this point

 if [ -f $cookbook_dir/USER_VALUES.local.sh ]
 then
 . $cookbook_dir/USER_VALUES.local.sh
 fi

service iptables stop 

 # or open ports listed below:
 # 3306 (MySQL database)
 # 2112 (Tungsten THL)
 # 10000 (Tungsten RMI)
 # 10001 (JMX management)

vi /etc/my.cnf.d/server.cnf

 # These groups are read by MariaDB server.
 # Use it for options that only the server (but not clients) should see
 #
 # See the examples of server my.cnf files in /usr/share/mysql/
 #

 # this is read by the standalone daemon and embedded servers
 [server]

 # this is only for the mysqld standalone daemon
 [mysqld]
 open_files_limit=65535
 innodb-file-per-table=1
 server-id=1 # make server-id unique per server
 log_bin
 innodb-flush-method=O_DIRECT
 max_allowed_packet=64M
 innodb-thread-concurrency=0
 default-storage-engine=innodb
 skip-name-resolve

 # this is only for embedded server
 [embedded]

 # This group is only read by MariaDB-5.5 servers.
 # If you use the same .cnf file for MariaDB of different versions,
 # use this group for options that older servers don't understand
 [mysqld-5.5]

 # These two groups are only read by MariaDB servers, not by MySQL.
 # If you use the same .cnf file for MySQL and MariaDB,
 # you can put MariaDB-only options here
 [mariadb]

 [mariadb-5.5]

service mysql start
mysql -uroot -p -e"CREATE USER 'tungsten'@'%' IDENTIFIED BY 'tungsten';"
mysql -uroot -p -e"GRANT ALL PRIVILEGES ON *.* TO 'tungsten'@'%' WITH GRANT OPTION;"
mysql -uroot -p -e"FLUSH PRIVILEGES;"

ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_rsa.pub | ssh vm129-117 'cat >> ~/.ssh/authorized_keys' # from vm128-142
cat ~/.ssh/id_rsa.pub | ssh vm128-142 'cat >> ~/.ssh/authorized_keys' # from vm129-117
chmod 600 ~/.ssh/authorized_keys

cookbook/validate_cluster # this is the command used to validate the configuration

vi cookbook/NODES_MASTER_SLAVE.sh

 #!/bin/bash
 # (C) Copyright 2012,2013 Continuent, Inc - Released under the New BSD License
 # Version 1.0.5 - 2013-04-03

 CURDIR=`dirname $0`
 if [ -f $CURDIR/COMMON_NODES.sh ]
 then
 . $CURDIR/COMMON_NODES.sh
 else
 export NODE1=
 export NODE2=
 export NODE3=
 export NODE4=
 export NODE5=
 export NODE6=
 export NODE7=
 export NODE8=
 fi

 export ALL_NODES=($NODE1 $NODE2 $NODE3 $NODE4 $NODE5 $NODE6 $NODE7 $NODE8)
 # indicate which servers will be masters, and which ones will have a slave service
 # in case of all-masters topologies, these two arrays will be the same as $ALL_NODES
 # These values are used for automated testing

 #for master/slave replication
 export MASTERS=($NODE1)
 export SLAVES=($NODE2 $NODE3 $NODE4 $NODE5 $NODE6 $NODE7 $NODE8)

## The following commands should be performed on just one of the nodes
## In my case either vm128-142 OR 129-117

cookbook/install_master_slave # to install master / slave topology
cookbook/show_cluster # here we see master - slave replication running

 --------------------------------------------------------------------------------------
 Topology: 'MASTER_SLAVE'
 --------------------------------------------------------------------------------------
 # node vm128-142.dlab.pythian.com
 cookbook [master] seqno: 1 - latency: 0.514 - ONLINE
 # node vm129-117.dlab.pythian.com
 cookbook [slave] seqno: 1 - latency: 9.322 - ONLINE

cookbook/clear_cluster # run this to destroy the current Tungsten cluster 

cookbook/install_all_masters # to install master - master topology 
cookbook/show_cluster # and here we've switched over to master - master replication

 --------------------------------------------------------------------------------------
 Topology: 'ALL_MASTERS'
 --------------------------------------------------------------------------------------
 # node vm128-142.dlab.pythian.com
 alpha [master] seqno: 5 - latency: 0.162 - ONLINE
 bravo [slave] seqno: 5 - latency: 0.000 - ONLINE
 # node vm129-117.dlab.pythian.com
 alpha [slave] seqno: 5 - latency: 9.454 - ONLINE
 bravo [master] seqno: 5 - latency: 0.905 - ONLINE

Categories: DBA Blogs

PostgreSQL vs. MySQL: Part One

Chris Foot - Thu, 2014-08-28 11:50

PostgreSQL and MySQL are recognized as two of the world's most popular open source database architectures, but there are some key differences between the two.

Database administration professionals often favor both environments for their raw, customizable formats. For those who are unfamiliar with the term, open source means the code used to create these architectures is divulged to the public, allowing IT experts of every ilk to reconstruct the program to fit specific needs. While MySQL and PostgreSQL are similar in this respect, there are some key differences.

A quick history: PostgreSQL

Carla Schroder, a contributor to OpenLogic, acknowledged PostgreSQL as the older solution, having been developed at the University of California, Berkeley in 1985. Thousands of enthusiasts from around the world have participated in the development and support of this architecture. DigitalOcean labeled the solution an object-relational database management system capable of handling mission-critical applications and high-frequency transactions. Here are some other notable traits:

  • Fully compliant with atomicity, consistency, isolation and durability (ACID)
  • Uses Kerberos and OpenSSL for robust protection features
  • Point-in-time recovery enables users to implement warm standby servers for quick failover (a minimal configuration sketch follows this list)
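As a rough illustration of that last point (a sketch only; the file path, version and archive destination below are assumptions, not taken from the cited articles), point-in-time recovery builds on continuous WAL archiving, which is switched on with a handful of postgresql.conf settings:

# Append the archiving settings that PITR and warm standby rely on (PostgreSQL 9.x-era syntax assumed)
cat >> /etc/postgresql/9.3/main/postgresql.conf <<'EOF'
wal_level = hot_standby
archive_mode = on
archive_command = 'rsync -a %p standby:/var/lib/postgresql/wal_archive/%f'
EOF
# A warm standby replays the archived WAL (restore_command in recovery.conf) and can be promoted quickly on failover.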

A quick history: MySQL

As for MySQL, Schroder noted this particular system is about nine years younger than its predecessor – having been created by MySQL AB in 1994. It provides a solid foundation for Web developers, as it's part of a software bundle comprised of Linux, Apache HTTP Server, MySQL and PHP. MySQL was first blueprinted to be a reliable Web server backend because it used an expedited indexed sequential access method. Over the years, experts have revised MySQL to support a variety of other storage engines, such as the MEMORY architecture that provides temporary tables.
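To make the storage-engine point concrete, here is a small illustrative snippet (the schema, table name and credentials are invented for the example): the engine is chosen per table, so a scratch lookup table can live entirely in memory while the rest of the schema stays on InnoDB.

mysql -uroot -p -e "CREATE TABLE test.tmp_lookup (id INT PRIMARY KEY, name VARCHAR(50)) ENGINE=MEMORY;"
mysql -uroot -p -e "SHOW ENGINES;"   # lists every storage engine the server supports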

Although open source, MySQL isn't community-based, and some versions (all of which are now owned and distributed by Oracle) cost a small amount of capital.

Part Two will dig deeper into these two architectures, describing use cases, their respective capabilities and more.

The post PostgreSQL vs. MySQL: Part One appeared first on Remote DBA Experts.

adopreports utility in R12.2

Vikram Das - Thu, 2014-08-28 11:01
I discovered a utility in R12.2 when I was looking for the directory of adop:

which adop
$NE_BASE/EBSapps/appl/ad/bin/adop

cd $NE_BASE/EBSapps/appl/ad/bin/
$ ls
adop  adopreports

Curious, I executed adopreports:
$ adopreports

Enter the APPS username: apps
Enter the APPS Password:



    Online Patching Diagnostic Reports Main Menu
    --------------------------------------------

    1.  Run edition reports
    2.  Patch edition reports
    3.  Other generic reports
    4.  Exit

    Enter your choice [4]: 3




    Other Generic Reports Sub Menu
    ------------------------------

    1.  Editions summary
    2.  Editioned objects summary
    3.  Free space in important tablespaces
    4.  Status of critical AD_ZD objects
    5.  Actual objects in current edition
    6.  Objects dependencies
    7.  Objects dependency tree
    8.  Editioning views column mappings
    9.  Index details for a table
    10.  Inherited objects in the current edition
    11.  All log messages
    12.  Materialized view details
    13.  Database sessions by edition
    14.  Table details (Synonyms, EV, etc.)
    15.  Count and status of DDL execution by phase
    16.  Back to main menu

This is a great utility for R12.2
Categories: APPS Blogs

12c: How to Restore/Recover a Small Table in a Large Database

Pythian Group - Thu, 2014-08-28 09:35

As a DBA, you will receive requests from developers or users indicating that they deleted some data in a small table in a large database a few hours prior. They will probably want you to recover the data as soon as possible, and it will likely be a critical production database. Flashback will not be enabled, and the recycle bin will have been purged. Restoring a full database using RMAN might take you over 10 hours, and you will need a spare server with big storage. It looks like it's going to be a difficult and time-consuming task for you.

In Oracle Database 12c, there is a method available which allows us to recover the table more efficiently, and at a lower cost. The method is to create a second database (often called a stub database) using the backup of the first database. In this situation, we restore the SYSTEM, SYSAUX, and UNDO tablespaces and the individual tablespaces that contain the data that we want to restore. After the restore is complete, we take any tablespaces that we did not restore offline. We then apply the archived redo logs to the point in time that we want to restore the table to. Having restored the database to the appropriate point in time, we then use Oracle Data Pump to export the objects and import them back into the original database. Oracle Database 12c introduces new functionality in RMAN that supports point-in-time restore of individual database tables and individual table partitions, automating all of these steps.

Here is an example of when I tested this new feature:

1. The database TEST has 9 tablespaces and a schema called Howie. I created a table called TEST1 with 19,377 records, which is in the tablespace DATA_HOWIE.

SQL> select * from v$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME                                                        VERSION           STARTUP_T STATUS       PAR    THREAD# ARCHIVE LOG_SWITCH_WAIT LOGINS     SHU DATABASE_STATUS   INSTANCE_ROLE      ACTIVE_ST BLO  CON_ID INSTANCE_MO EDITION FAMILY
--------------- ---------------- ---------------------------------------------------------------- ----------------- --------- ------------ --- ---------- ------- --------------- ---------- --- ----------------- ------------------ --------- --- ---------- ----------- ------- --------------------------------------------------------------------------------
1 TEST             12cServer1                                                       12.1.0.1.0        17-AUG-14 OPEN         NO           1 STARTED                 ALLOWED    NO  ACTIVE            PRIMARY_INSTANCE   NORMALNO            0 REGULAR     EE

SQL> select tablespace_name from dba_tablespaces order by tablespace_name;

TABLESPACE_NAME
------------------------------
DATA_HOWIE
DATA_TB1
DATA_TB2
DATA_TB3
SYSAUX
SYSTEM
TEMP
UNDOTBS1
USERS

9 rows selected.

SQL> conn howie
Enter password:
Connected.
SQL> create table test1 as select * from dba_objects;

Table created.

SQL> select count(*) from test1;

COUNT(*)
----------
19377

SQL> select table_name,tablespace_name from user_tables where table_name='TEST1';

TABLE_NAME                                                                                                                       TABLESPACE_NAME
-------------------------------------------------------------------------------------------------------------------------------- ------------------------------
TEST1                                                                                                                            DATA_HOWIE

2. The database is in archivelog mode, and I took a full backup of the database.

[oracle@12cServer1 RMAN]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Sun Aug 17 20:16:17 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TEST (DBID=2146502230)

RMAN> run
{
allocate channel d1 type disk format '/u01/app/oracle/RMAN/rmn_%d_t%t_p%p';
backup
incremental level 0
tag backup_level0
filesperset 1
(database)
plus archivelog ;
release channel d1;
}2> 3> 4> 5> 6> 7> 8> 9> 10> 11>

3. The data in the table howie.test1 has been deleted.

SQL> select sysdate,current_scn from v$database;

SYSDATE             CURRENT_SCN
------------------- -----------
08/17/2014 21:01:15      435599

SQL> delete test1;

19377 rows deleted.

SQL> commit;

Commit complete.

4. I ran the following scripts to recover the data to an alternative table howie.test1_temp to the point in time “08/17/2014 21:01:15”:

[oracle@12cServer1 RMAN]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Sun Aug 17 21:01:35 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TEST (DBID=2146502230)

RMAN> recover table howie.test1
until time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')"
auxiliary destination '/u01/app/oracle/aux'
remap table howie.test1:test1_temp;2> 3> 4>

5. The scripts above will take care of everything, and you will see that the data has been restored to howie.test1_temp:

SQL> select count(*) from TEST1_TEMP;

COUNT(*)
----------
19377

SQL> select count(*) from TEST1;

COUNT(*)
----------
0

Let’s take a look at the log of RMAN recovery and find out how it works.

1. Creation of the auxiliary instance

Creating automatic instance, with SID='ktDA'

initialization parameters used for automatic instance:
db_name=TEST
db_unique_name=ktDA_pitr_TEST
compatible=12.1.0.0.0
db_block_size=8192
db_files=200
sga_target=1G
processes=80
diagnostic_dest=/u01/app/oracle
db_create_file_dest=/u01/app/oracle/aux
log_archive_dest_1='location=/u01/app/oracle/aux'
#No auxiliary parameter file used

2. Restore of the control file for the auxiliary instance

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';
}

3. A list of datafiles that will be restored, followed by their restore and recovery in the auxiliary instance

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX";
sql clone 'alter database open read only';
}

contents of Memory Script:
{
# set requested point in time
set until  time "to_date('08/17/2014 21:01:15','mm/dd/yyyy hh24:mi:ss')";
# online the datafiles restored or switched
sql clone "alter database datafile  8 online";
# recover and open resetlogs
recover clone database tablespace  "DATA_HOWIE", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}

4. Export of tables from the auxiliary instance via Oracle Data Pump

Performing export of tables...
EXPDP> Starting "SYS"."TSPITR_EXP_ktDA_BAkw":
EXPDP> Estimate in progress using BLOCKS method...
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
EXPDP> Total estimation using BLOCKS method: 3 MB
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
EXPDP> . . exported "HOWIE"."TEST1"                             1.922 MB   19377 rows
EXPDP> Master table "SYS"."TSPITR_EXP_ktDA_BAkw" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_ktDA_BAkw is:
EXPDP>   /u01/app/oracle/aux/tspitr_ktDA_70244.dmp
EXPDP> Job "SYS"."TSPITR_EXP_ktDA_BAkw" successfully completed at Sun Aug 17 21:03:53 2014 elapsed 0 00:00:14
Export completed

5. Import of tables, constraints, indexes, and other dependent objects into the target database from the Data Pump export file

contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
IMPDP> Master table "SYS"."TSPITR_IMP_ktDA_lube" successfully loaded/unloaded
IMPDP> Starting "SYS"."TSPITR_IMP_ktDA_lube":
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
IMPDP> . . imported "HOWIE"."TEST1_TEMP"                        1.922 MB   19377 rows
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
IMPDP> Job "SYS"."TSPITR_IMP_ktDA_lube" successfully completed at Sun Aug 17 21:04:19 2014 elapsed 0 00:00:19
Import completed

6. Clean-up of the auxiliary instance

Removing automatic instance
Automatic instance removed
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_temp_9z2yqst6_.tmp deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_3_9z2yrkqm_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_2_9z2yrj35_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/onlinelog/o1_mf_1_9z2yrh2r_.log deleted
auxiliary instance file /u01/app/oracle/aux/KTDA_PITR_TEST/datafile/o1_mf_data_how_9z2yrcnq_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_sysaux_9z2yptms_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_undotbs1_9z2yq9of_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/datafile/o1_mf_system_9z2yp0mk_.dbf deleted
auxiliary instance file /u01/app/oracle/aux/TEST/controlfile/o1_mf_9z2yos1l_.ctl deleted
auxiliary instance file tspitr_ktDA_70244.dmp deleted
Finished recover at 17-AUG-14
Categories: DBA Blogs

Numbers: Administrative Costs Soaring? Maybe not

Michael Feldstein - Thu, 2014-08-28 09:19

August 27, 2014

There’s just a mind-boggling amount of money per student that’s being spent on administration

Andrew Gillen, quoted in “New Analysis Shows Problematic Booming Higher Ed Administrators,” Huffington Post, August 26, 2014

 Administrative growth drives up costs at state-owned universities

Debra Erdley, TribLive, July 28, 2013

 Across U.S. higher education, nonclassroom costs have ballooned, administrative payrolls being a prime example.

Wall Street Journal as quoted by Phil Hill, e-Literate, January 2, 2013

 Administrative costs on college campuses are soaring.

J. Paul Robinson, quoted in “Bureaucrats Paid $250,000 Feed Outcry Over College Costs,” Bloomberg News, November 14, 2012

 Administrative Costs Mushrooming

George Leef, John William Pope Center for Higher Education Policy, September 15, 2010

 

Are these true, or generalizations that lack the rigor of research? What does the data say?

Since 2004, the National Center for Education Statistics (NCES) Integrated Postsecondary Education Data System (IPEDS) financial survey of colleges and universities has reported the costs of Institutional Support in a standard form. This broad category includes “general administrative services, central executive-level activities concerned with management, legal and fiscal operations, space management, employee personnel and records, … and information technology.” In business this is often called “administration.”

Data from NCES’s Digest of Education Statistics 2012 shows decreases in cost per student from 2003-2004 through 2010-2011, except for public 4-year colleges and universities, which increased expenses by 4.1%, as shown in Table 1.

Institutional Support per Student    2003-04    2010-11    Change
Public 4 year                         $2,212     $2,302      4.1%
Private 4 year                         4,611      3,887    -15.7%
Public 2 year                          1,045        875    -16.3%
Private 2 year                           783        401    -48.8%

Table 1 – Cost of “administration” per enrolled student

These data are expressed in July 2014 dollars, adjusted using the Consumer Price Index (CPI-U), so the results would be unaffected by inflation. The year 2003-2004 was selected for comparison because the data definitions and formats were the first consistent with 2010-11. Because private colleges and universities do not report operation of plant, that cost was omitted from the percentage computations for both. Headcount was used since administrative expenses are more closely related to enrollment of real students than to a mythical full-time equivalent (FTE).
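Concretely, each nominal figure is scaled by the ratio of the July 2014 index value to the index value for the year in which the money was spent:

 amount in July 2014 dollars = nominal amount × (CPI-U for July 2014 ÷ CPI-U for the original year)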

These data are shown graphically in Figure 1.


Figure 1 – Comparative Administrative Expenses 2003-2004 and 2010-2011

Data showing administration as a percent of institutional expenses omitting independent organizations, hospitals, and auxiliary enterprises, is shown in Figure 2.


Figure 2 – Administration Expenses as a Percent of Institutional Expenses

The percentages are nearly equal for the two years, even though administration expenses per student declined during this period except at public 4-year colleges and universities. This reduction, likely true also for the cost of instruction, was influenced by increased enrollment and institutional budgets that were typically less than or about the same as in 2003-2004.

The IPEDS revision introduced in the late ’70s and early ’80s was based on program budgeting. The mission of the college or university was considered to be a combination of instruction, research, and public service—sometimes called direct costs. The library and computing were consolidated into academic support on the belief that books would transition into electronic documents. Student services was another indirect category that includes admissions, the registrar, and activities that contribute to students’ emotional and physical well-being, intramural athletics, and student organizations. Intercollegiate athletics and student health services may be included “except when operated as self-supporting auxiliary enterprises.”

IPEDS tried to avoid counting financial aid among the institutional expenses of mission-based programs since, for example, it is a transfer payment from one student (tuition paid) to another (tuition discount).

NCES now makes the data from these surveys available using several different statistical tools (software).

The NCES data are very useful in analysis and in communicating with a public that seems to be receiving more opinions than facts.

This analysis is an example of verifying assertions that administration expenses are mushrooming, soaring, or ballooning.

Are administrative expenses soaring? The evidence is “no.” But that doesn’t make a sensational headline.

The post Numbers: Administrative Costs Soaring? Maybe not appeared first on e-Literate.

Building a MariaDB Galera Cluster with Docker

Pythian Group - Thu, 2014-08-28 08:13

There’s been a lot of talk about Docker for running processes in isolated userspace (or the cloud, for that matter) lately. Virtualization is a great way to compartmentalise applications and processes; however, the overhead of virtualization isn’t always worth it – in fact, without directly attached storage, IO degradation can seriously impact performance. The solution? Perhaps Docker, with its easy-to-use CLI as well as its lightweight implementation of cgroups and kernel namespaces.

Without further ado, I present a step-by-step guide on how to build a MariaDB 5.5 Galera Cluster on Ubuntu 14.04. The same guide can probably be applied to MariaDB versions 10+; however, I’ve stuck with 5.5 since the latest version of MariaDB Galera Cluster is still in beta.

So we start off by modifying the “ufw” firewall policy to accept forwarded packets, and then perform a “ufw” service restart for good measure:

root@workstation:~# vi /etc/default/ufw

DEFAULT_FORWARD_POLICY="ACCEPT"

root@workstation:~# service ufw restart
ufw stop/waiting
ufw start/running

I’m assuming you already have Docker installed – it is available as a package within the Ubuntu repositories and also in the Docker repositories (see http://docs.docker.com/installation/ubuntulinux/). You’ll also need to have LXC installed (“apt-get install lxc” should suffice) in order to attach to the Linux containers / Docker images.
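For completeness, on a stock Ubuntu 14.04 box the distro-packaged versions of both can be installed in one go (assuming the Ubuntu-packaged build is recent enough for your needs; the Docker repositories linked above carry newer builds):

apt-get -q -y install docker.io lxc   # Ubuntu 14.04 packages Docker under the name "docker.io"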

The next step is pulling the Docker / Ubuntu repository in order to customize an image for our purposes

root@workstation:~# docker pull ubuntu
Pulling repository ubuntu
c4ff7513909d: Pulling dependent layers 
3db9c44f4520: Download complete 
c5881f11ded9: Download complete 
c4ff7513909d: Download complete 
463ff6be4238: Download complete 
822a01ae9a15: Download complete 
75204fdb260b: Download complete 
511136ea3c5a: Download complete 
bac448df371d: Download complete 
dfaad36d8984: Download complete 
5796a7edb16b: Download complete 
1c9383292a8f: Download complete 
6cfa4d1f33fb: Download complete 
f127542f0b61: Download complete 
af82eb377801: Download complete 
93c381d2c255: Download complete 
3af9d794ad07: Download complete 
a5208e800234: Download complete 
9fccf650672f: Download complete 
fae16849ebe2: Download complete 
b7c6da90134e: Download complete 
1186c90e2e28: Download complete 
0f4aac48388f: Download complete 
47dd6d11a49f: Download complete 
f6a1afb93adb: Download complete 
209ea56fda6d: Download complete 
f33dbb8bc20e: Download complete 
92ac38e49c3e: Download complete 
9942dd43ff21: Download complete 
aa822e26d727: Download complete 
d92c3c92fa73: Download complete 
31db3b10873e: Download complete 
0ea0d582fd90: Download complete 
cc58e55aa5a5: Download complete

After the download is complete, we can check the Ubuntu images available for customization with the following command:

root@workstation:~# docker images
 REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
 ubuntu              14.04.1             c4ff7513909d        12 days ago         225.4 MB
 ubuntu              trusty              c4ff7513909d        12 days ago         225.4 MB
 ubuntu              14.04               c4ff7513909d        12 days ago         225.4 MB
 ubuntu              latest              c4ff7513909d        12 days ago         225.4 MB
 ubuntu              utopic              75204fdb260b        12 days ago         230.1 MB
 ubuntu              14.10               75204fdb260b        12 days ago         230.1 MB
 ubuntu              precise             822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.04               822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.04.5             822a01ae9a15        12 days ago         108.1 MB
 ubuntu              12.10               c5881f11ded9        9 weeks ago         172.2 MB
 ubuntu              quantal             c5881f11ded9        9 weeks ago         172.2 MB
 ubuntu              13.04               463ff6be4238        9 weeks ago         169.4 MB
 ubuntu              raring              463ff6be4238        9 weeks ago         169.4 MB
 ubuntu              13.10               195eb90b5349        9 weeks ago         184.7 MB
 ubuntu              saucy               195eb90b5349        9 weeks ago         184.7 MB
 ubuntu              lucid               3db9c44f4520        4 months ago        183 MB
 ubuntu              10.04               3db9c44f4520        4 months ago        183 MB

Now that we’ve downloaded our images, let’s create a custom Dockerfile for our customized MariaDB / Galera Docker image. I’ve added a brief description for each line of the file:

root@workstation:~# vi Dockerfile
 # # MariaDB Galera 5.5.39/Ubuntu 14.04 64bit
 FROM ubuntu:14.04
 MAINTAINER Pythian Nikolaos Vyzas <vyzas@pythian.com>

 RUN echo "deb http://archive.ubuntu.com/ubuntu trusty main universe" > /etc/apt/sources.list # add the universe repo
 RUN apt-get -q -y update # update apt
 RUN apt-get -q -y install software-properties-common # install software-properties-common for key management
 RUN apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db # add the key for Mariadb Ubuntu repos
 RUN add-apt-repository 'deb http://ftp.cc.uoc.gr/mirrors/mariadb/repo/5.5/ubuntu trusty main' # add the MariaDB repository for 5.5
 RUN apt-get -q -y update # update apt again
 RUN echo mariadb-galera-server-5.5 mysql-server/root_password password root | debconf-set-selections # configure the default root password during installation
 RUN echo mariadb-galera-server-5.5 mysql-server/root_password_again password root | debconf-set-selections # confirm the password (as in the usual installation)
 RUN LC_ALL=en_US.utf8 DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::='--force-confnew' -qqy install mariadb-galera-server galera mariadb-client # install the necessary packages
 ADD ./my.cnf /etc/mysql/my.cnf # upload the locally created my.cnf (obviously this can go into the default MariaDB path)
 RUN service mysql restart # startup the service - this will fail since the nodes haven't been configured on first boot
 EXPOSE 3306 4444 4567 4568 # open the ports required to connect to MySQL and for Galera SST / IST operations

We’ll also need our base configuration for MariaDB. I’ve included the base configuration variables for Galera – obviously there are more, however these are good enough for starting up the service:

root@workstation:~# vi my.cnf
 [mysqld]
 wsrep_provider=/usr/lib/galera/libgalera_smm.so
 wsrep_cluster_address=gcomm://
 wsrep_sst_method=rsync
 wsrep_cluster_name=galera_cluster
 binlog_format=ROW
 default_storage_engine=InnoDB
 innodb_autoinc_lock_mode=2
 innodb_locks_unsafe_for_binlog=1

So far so good: we have Docker installed and our Dockerfile as well as our “my.cnf” file ready to go. Now it’s time to build our Docker image, check that the image exists, and start up 3 separate Docker images, one for each of our Galera nodes:

root@workstation:~# docker build -t ubuntu_trusty/mariadb-galera .
root@workstation:~# docker images |grep mariadb-galera
 ubuntu_trusty/mariadb-galera   latest              afff3aaa9dfb        About a minute ago   412.5 MB
docker run --name mariadb1 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash
docker run --name mariadb2 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash
docker run --name mariadb3 -i -t -d ubuntu_trusty/mariadb-galera /bin/bash

We’ve started up our containers; now let’s verify that they are in fact up and retrieve the information we need to connect. We’ll need two pieces of information, the IP address and the container ID, which can be retrieved using a combination of the “docker ps” and “docker inspect” commands:

root@workstation:~# docker ps
 CONTAINER ID        IMAGE                                 COMMAND             CREATED             STATUS              PORTS                                    NAMES
 b51e74933ece        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb3
 03109c7018c0        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb2
 1db2a9a520f8        ubuntu_trusty/mariadb-galera:latest   /bin/bash           About an hour ago   Up About an hour    3306/tcp, 4444/tcp, 4567/tcp, 4568/tcp   mariadb1
root@workstation:~# docker ps |cut -d' ' -f1 |grep -v CONTAINER | xargs docker inspect |egrep '"ID"|IPAddress'
 "ID": "b51e74933ece2f3f457ec87c3a4e7b649149e9cff2a4705bef2a070f7adbafb0",
 "IPAddress": "172.17.0.3",
 "ID": "03109c7018c03ddd8448746437346f080a976a74c3fc3d15f0191799ba5aae74",
 "IPAddress": "172.17.0.4",
 "ID": "1db2a9a520f85d2cef6e5b387fa7912890ab69fc0918796c1fae9c1dd050078f",
 "IPAddress": "172.17.0.2",

Time to use lxc-attach to connect to our containers using the full container ID, add the mounts to “/etc/mtab” to keep them MariaDB friendly, and customize the “gcomm://” address as we would for a usual Galera configuration (the container ID is generated when the instance fires up, so make sure to use your own in the following commands):

root@workstation:~# lxc-attach --name b51e74933ece2f3f457ec87c3a4e7b649149e9cff2a4705bef2a070f7adbafb0
 root@b51e74933ece:~# cat /proc/mounts > /etc/mtab
 root@b51e74933ece:~# service mysql restart
 * Starting MariaDB database mysqld                            [ OK ]
 * Checking for corrupt, not cleanly closed and upgrade needing tables.

root@b51e74933ece:~# vi /etc/mysql/my.cnf
 #wsrep_cluster_address=gcomm://
 wsrep_cluster_address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4

root@b51e74933ece:~# exit
 exit

root@workstation:~# lxc-attach --name 03109c7018c03ddd8448746437346f080a976a74c3fc3d15f0191799ba5aae74
 root@03109c7018c0:~# cat /proc/mounts > /etc/mtab
 root@03109c7018c0:~# vi /etc/mysql/my.cnf
 #wsrep_cluster_address=gcomm://
 wsrep_cluster_address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4
 root@03109c7018c0:~# service mysql start
 * Starting MariaDB database server mysqld                            [ OK ]
 * Checking for corrupt, not cleanly closed and upgrade needing tables.
 root@03109c7018c0:~# mysql -uroot -proot
 Welcome to the MariaDB monitor.  Commands end with ; or \g.
 Your MariaDB connection id is 30
 Server version: 5.5.39-MariaDB-1~trusty-wsrep mariadb.org binary distribution, wsrep_25.10.r4014

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show status like 'wsrep_cluster%';
 +--------------------------+--------------------------------------+
 | Variable_name            | Value                                |
 +--------------------------+--------------------------------------+
 | wsrep_cluster_conf_id    | 2                                    |
 | wsrep_cluster_size       | 2                                    |
 | wsrep_cluster_state_uuid | 42bc375b-2bc0-11e4-851c-1a7627c0624c |
 | wsrep_cluster_status     | Primary                              |
 +--------------------------+--------------------------------------+
 4 rows in set (0.00 sec)

MariaDB [(none)]> exit
 Bye
 root@03109c7018c0:~# exit
 exit

root@workstation:~# lxc-attach --name 1db2a9a520f85d2cef6e5b387fa7912890ab69fc0918796c1fae9c1dd050078f
 root@1db2a9a520f8:~# cat /proc/mounts > /etc/mtab
 root@1db2a9a520f8:~# vi /etc/mysql/my.cnf
 root@1db2a9a520f8:~# service mysql start
 * Starting MariaDB database server mysqld                                                                                                                                                     [ OK ]
 root@1db2a9a520f8:~# mysql -uroot -proot
 Welcome to the MariaDB monitor.  Commands end with ; or \g.
 Your MariaDB connection id is 34
 Server version: 5.5.39-MariaDB-1~trusty-wsrep mariadb.org binary distribution, wsrep_25.10.r4014

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show status like 'wsrep_cluster%';
 +--------------------------+--------------------------------------+
 | Variable_name            | Value                                |
 +--------------------------+--------------------------------------+
 | wsrep_cluster_conf_id    | 3                                    |
 | wsrep_cluster_size       | 3                                    |
 | wsrep_cluster_state_uuid | 42bc375b-2bc0-11e4-851c-1a7627c0624c |
 | wsrep_cluster_status     | Primary                              |
 +--------------------------+--------------------------------------+
 4 rows in set (0.00 sec)

MariaDB [(none)]> exit
 Bye
 root@1db2a9a520f8:~# exit
 exit

Now be honest… Wasn’t that easier than creating multiple virtual machines and configuring the OS for each?

Enjoy your new MariaDB Galera Cluster and happy Dockering!

Categories: DBA Blogs

How to Configure an Azure Point-to-Site VPN – Part 3

Pythian Group - Thu, 2014-08-28 07:58

This blog post is the last of the series and demonstrates, step by step, how to configure a Point-to-Site VPN. In my first blog post, I demonstrated how to configure a virtual network and a dynamic routing gateway. This was followed by another post about how to deal with the certificate. Today we will learn how to configure the VPN client.

CONFIGURE THE VPN CLIENT
1. In the Management Portal, navigate to the virtual network page; in the “quick glance” section you have the links to download the VPN package.

Choose the one appropriate to your architecture (x86 or x64).

2. After the download completes successfully, copy the file to your servers and execute the setup.

3. Click Yes when it asks if you want to install the VPN client, and let it run.

4. After successful installation, it will be visible in your network connections.

5. In Windows Server 2012 you can click the network icon in the notification area (near the clock), and the right-side bar will show all the network connections. You can connect from there.
The other option is to right-click the connection in the “Network Connections” window (previous step) and click “Connect / Disconnect”.

6. A window will be shown, click Connect.

7. Now check the box next to “Do not show this message again for this Connection” and click “Continue”.

If everything is ok, the connection will succeed.

8. To confirm that you are connected, execute the command “ipconfig /all” in the command line, and you should see an entry for the VPN with an IP assigned.

9. After a while, you will also be able to see the connection in your vNet dashboard, which shows the data in/out for the vNet.

After this last part, you are done with the point-to-site VPN configuration. You can test the connectivity by executing the “ping” command and by using the “telnet” client to check whether a specific port is open and reachable.

A point-to-site VPN is recommended if you want to connect individual users or devices to your Azure infrastructure, for a few different reasons. If you need to connect all or part of your on-premises infrastructure, the way to go is to configure a Site-to-Site VPN. Stay tuned for a blog post on how that works.

Thank you for reading!

Categories: DBA Blogs

Monitoring the Filesystem for READONLY mounts using Metric Extension in OEM12c

Arun Bavera - Thu, 2014-08-28 07:29

Our client has faced, on several occasions, a mounted filesystem going into READONLY status.

We created this User Defined Metric, now called a Metric Extension, to monitor for the condition and send an alert.

#!/bin/sh
# Emit one line per mounted filesystem in the form SlNo|MountPoint|MountStatus,
# where MountStatus is the first two characters of the mount options in /etc/mtab
# ("rw" or "ro"); the Metric Extension can then alert when the value is "ro".
#echo "SlNo MountPoint MountStatus"
nl  /etc/mtab |/bin/awk '{print $1"|" $3"|"substr($5,1,2)}'

Credentials

Host Credentials: Uses Monitoring Credentials of Target.

You have to create a Named Credential set to test this, as in the example below, and then set the username and password for this set from Security -> Monitoring Credentials:

emcli create_credential_set -set_name=SOA_ORABPEL_STAGE -target_type=oracle_database -auth_target_type=oracle_database -supported_cred_types=DBCreds -monitoring -description='SOA ORABPEL DB Credentials'
Categories: Development

Missing Named Credentials in OEM 12c

Arun Bavera - Thu, 2014-08-28 06:56

We are seeing that the list sometimes doesn’t show all the named credentials.

It remains to be seen whether this resolves the issue, but it requires an OMS restart…

emctl set property -name oracle.sysman.emdrep.creds.region.maxcreds -value 500

Oracle Enterprise Manager Cloud Control 12c Release 3

Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.

SYSMAN password:

Property oracle.sysman.emdrep.creds.region.maxcreds has been set to value 500 for all Management Servers

OMS restart is required to reflect the new property value

Ref:

EM 12c: Missing Named Credentials in the Enterprise Manager 12c Cloud Control Jobs Drop Down List (Doc ID 1493690.1)

Categories: Development

PRECOMPUTE_SUBQUERY hint

XTended Oracle SQL - Wed, 2014-08-27 16:01

I’ve just found out that we can specify a query block for PRECOMPUTE_SUBQUERY: /*+ precompute_subquery(@sel$2) */
So we can use it now with SQL profiles, SPM baselines and patches.

SQL> select/*+ precompute_subquery(@sel$2) */ * from dual where dummy in (select chr(level) from dual connect by level<=100);

D
-
X

SQL> @last

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  c437vsqj7c4jy, child number 0
-------------------------------------
select/*+ precompute_subquery(@sel$2) */ * from dual where dummy in
(select chr(level) from dual connect by level<=100)

Plan hash value: 272002086

---------------------------------------------------------------------------
| Id  | Operation         | Name | E-Rows |E-Bytes| Cost (%CPU)| E-Time   |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |        |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| DUAL |      1 |     2 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1 / DUAL@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("DUMMY"='' OR "DUMMY"='' OR "DUMMY"='♥' OR "DUMMY"='♦'
              OR "DUMMY"='♣' OR "DUMMY"='♠' OR "DUMMY"='' OR "DUMMY"=' OR
              "DUMMY"=' ' OR "DUMMY"=' ' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"=' '
              OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"='►' OR "DUMMY"='◄' OR
              "DUMMY"='' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"=''
              OR "DUMMY"='' OR "DUMMY"='↑' OR "DUMMY"='↓' OR "DUMMY"='' OR
              "DUMMY"=' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"='' OR "DUMMY"=''
              OR "DUMMY"=' ' OR "DUMMY"='!' OR "DUMMY"='"' OR "DUMMY"='#' OR
              "DUMMY"='$' OR "DUMMY"='%' OR "DUMMY"='&' OR "DUMMY"='''' OR
              "DUMMY"='(' OR "DUMMY"=')' OR "DUMMY"='*' OR "DUMMY"='+' OR "DUMMY"=','
              OR "DUMMY"='-' OR "DUMMY"='.' OR "DUMMY"='/' OR "DUMMY"='0' OR
              "DUMMY"='1' OR "DUMMY"='2' OR "DUMMY"='3' OR "DUMMY"='4' OR "DUMMY"='5'
              OR "DUMMY"='6' OR "DUMMY"='7' OR "DUMMY"='8' OR "DUMMY"='9' OR
              "DUMMY"=':' OR "DUMMY"=';' OR "DUMMY"='<' OR "DUMMY"='=' OR "DUMMY"='>'
              OR "DUMMY"='?' OR "DUMMY"='@' OR "DUMMY"='A' OR "DUMMY"='B' OR
              "DUMMY"='C' OR "DUMMY"='D' OR "DUMMY"='E' OR "DUMMY"='F' OR "DUMMY"='G'
              OR "DUMMY"='H' OR "DUMMY"='I' OR "DUMMY"='J' OR "DUMMY"='K' OR
              "DUMMY"='L' OR "DUMMY"='M' OR "DUMMY"='N' OR "DUMMY"='O' OR "DUMMY"='P'
              OR "DUMMY"='Q' OR "DUMMY"='R' OR "DUMMY"='S' OR "DUMMY"='T' OR
              "DUMMY"='U' OR "DUMMY"='V' OR "DUMMY"='W' OR "DUMMY"='X' OR "DUMMY"='Y'
              OR "DUMMY"='Z' OR "DUMMY"='[' OR "DUMMY"='\' OR "DUMMY"=']' OR
              "DUMMY"='^' OR "DUMMY"='_' OR "DUMMY"='`' OR "DUMMY"='a' OR "DUMMY"='b'
              OR "DUMMY"='c' OR "DUMMY"='d'))

PS. I’m not sure, but as far as I remember, when I tested it on 10.2 it didn’t work with a specified query block.
And I have never seen it used that way.
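
By the way, to illustrate the point above about SQL patches: on 11.2/12.1 a SQL patch carrying this hint can be created with the (undocumented) dbms_sqldiag_internal.i_create_patch call. This is just a sketch, and the patch name is made up:

begin
  sys.dbms_sqldiag_internal.i_create_patch(
    sql_text  => 'select * from dual where dummy in '||
                 '(select chr(level) from dual connect by level<=100)',
    hint_text => 'precompute_subquery(@sel$2)',
    name      => 'precompute_subquery_patch'  -- hypothetical patch name
  );
end;
/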

Categories: Development

Partner Webcast – Oracle Internet of Things Platform: Java 8 connecting the world

The Internet of Things Revolution is gaining speed. There are more and more devices, data and connections, thus more and more complexity to handle. But in the first place it brings complete...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Subscription Notifier Version 4.0 Enables WebCenter Users to Create Custom Content Email Notifications

Fishbowl Solutions’ Subscription Notifier has been used by many of our customers for years to manage business content stored in Oracle WebCenter Content. Subscription Notifier automatically sends email notifications based on scheduled queries. Fishbowl released version 4.0 of the product last week, and it includes several significant updates.

Now, users of Subscription Notifier can:

  • Attach native or web-viewable files to notification emails
  • Send individual notification emails for each content item
  • Configure hourly notification schedules
  • Run subscription side effects without sending emails

In addition to the latest updates, the product also offers a host of other features that enable WebCenter users to keep track of their high-value content.

You begin by naming the subscription and specifying whether emails should be sent for items matching the query. The scheduler lets you specify exactly when you want email notifications to go out (note the hourly option, new with version 4.0).

 

SubNoti general settings

The email settings specify who you want to send emails to and how they should appear to recipients. The new “Attach Content” feature gives you the option of sending web-viewable or native files, which provides a way for recipients who don’t use Oracle WebCenter to still see important files. Using the query builder is very simple and determines what content items are included in the subscription. Advanced users also have the option to write more complex queries using SQL.

SubNoti email

The Current Subscription Notifications page gives a summary of all subscriptions. In Version 4.0, simple changes such as enabling, disabling, or deleting subscriptions can be done here.

SubNoti current subscription notifications

Subscription Notifier is a very useful tool for any organization that needs to keep tabs on a large amount of business content. It is part of Fishbowl’s Administration Suite, which also includes Advanced User Security Mapping, Workflow Solution Set, and Enterprise BatchLoader. This set of products works together to simplify the most common administrative tasks in Oracle WebCenter Content.

To learn more about Subscription Notifier, visit Fishbowl’s website or read the press release announcing Version 4.0.

The post Subscription Notifier Version 4.0 Enables WebCenter Users to Create Custom Content Email Notifications appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

force_match => TRUE option of DBMS_SQLTUNE.IMPORT_SQL_PROFILE

Bobby Durrett's DBA Blog - Wed, 2014-08-27 14:13

Yesterday and today I’ve read or heard two people mention the force_match => TRUE parameter value for DBMS_SQLTUNE.IMPORT_SQL_PROFILE and how it forces a profile to work on all SQL statements that are the same except for their literal values.  So, I ran a quick test using the coe_xfr_sql_profile.sql utility that comes with the SQLT scripts that are available for download on Oracle’s support site.

I’ve mentioned in earlier posts how we use coe_xfr_sql_profile.sql to force plans on particular SQL statements using the sql_id of the SQL statement and the plan_hash_value of the plan:

July 2013 post

October 2013 post

March 2014 post

May 2014 post

Yesterday I read this post by David Kurtz where he mentions force_match: post

Today I heard Karen Morton mention force_match in her webinar which should soon be posted here: url

So, after the webinar completed I built a test case to see how the force_match=>TRUE option works.  I created a test table and ran a query with a literal in the where clause and got its plan showing its sql_id and plan_hash_value:

ORCL:SYSTEM>create table test as select * from dba_tables;
ORCL:SYSTEM>SELECT sum(blocks) from test
  2  where owner='SYS';

SUM(BLOCKS)
-----------
      34633

ORCL:SYSTEM>select * from
  2  table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
SQL_ID  10g08ytt2m5mu, child number 0
-------------------------------------
SELECT sum(blocks) from test where owner='SYS'

Plan hash value: 1950795681

---------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    29 (100)| 
|   1 |  SORT AGGREGATE    |      |     1 |    30 |            |
|*  2 |   TABLE ACCESS FULL| TEST |   992 | 29760 |    29   (0)|
----------------------------------------------------------------

Then I ran coe_xfr_sql_profile.sql to create a profile that forces the plan on the given sql_id:

SQL> @coe_xfr_sql_profile.sql 10g08ytt2m5mu 1950795681

Then, using vi I edited the output of coe_xfr_sql_profile.sql:

vi coe_xfr_sql_profile_10g08ytt2m5mu_1950795681.sql

I searched for force_match and changed the line to read like this:

force_match => TRUE

instead of

force_match => FALSE

There are comments in the script explaining the meaning of these two values but I don’t want to plagiarize the script by including them here.  Next I ran the edited script:

sqlplus system/password < coe_xfr_sql_profile_10g08ytt2m5mu_1950795681.sql

Then I ran a test showing that not only would the original query with the where-clause literal ‘SYS’ use the profile, but the same query with a different literal, ‘SYSTEM’, would use the created profile as well.

ORCL:SYSTEM>SELECT sum(blocks) from test
  2  where owner='SYS';

SUM(BLOCKS)
-----------
      34633

ORCL:SYSTEM>select * from
  2  table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
SQL_ID  10g08ytt2m5mu, child number 0
-------------------------------------
SELECT sum(blocks) from test where owner='SYS'

Plan hash value: 1950795681

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    29 (100)|
|   1 |  SORT AGGREGATE    |      |     1 |    30 |            |
|*  2 |   TABLE ACCESS FULL| TEST |    81 |  2430 |    29   (0)|
----------------------------------------------------------------

Note
-----
  - SQL profile coe_10g08ytt2m5mu_1950795681 used for this statement

ORCL:SYSTEM>SELECT sum(blocks) from test
  2  where owner='SYSTEM';

SUM(BLOCKS)
-----------
        520

ORCL:SYSTEM>
ORCL:SYSTEM>select * from
  2  table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------
SQL_ID  806ncj0a5fgus, child number 0
-------------------------------------
SELECT sum(blocks) from test where owner='SYSTEM'

Plan hash value: 1950795681

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    29 (100)|
|   1 |  SORT AGGREGATE    |      |     1 |    30 |            |
|*  2 |   TABLE ACCESS FULL| TEST |    81 |  2430 |    29   (0)|
----------------------------------------------------------------

Note
-----
  - SQL profile coe_10g08ytt2m5mu_1950795681 used for this statement

Note that a different sql_id = 806ncj0a5fgus represents the second statement but the same plan_hash_value = 1950795681.  Also note that the SQL profile has the same name in both plans = coe_10g08ytt2m5mu_1950795681.

Now that I’m aware of the force_match=>TRUE option of DBMS_SQLTUNE.IMPORT_SQL_PROFILE I can use SQL profiles to force plans on queries that have different literal values but are otherwise identical.  This opens up a whole new set of problems that can be resolved without modifying the existing code, which can really help in a performance firefight.
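
For reference, the profile creation in the generated script boils down to a call to DBMS_SQLTUNE.IMPORT_SQL_PROFILE, so the edited version is roughly equivalent to the sketch below – the hint list here is a simplified stand-in for the full outline that coe_xfr_sql_profile.sql extracts from the plan, not the real thing:

declare
  profile_hints sys.sqlprof_attr;
begin
  -- simplified stand-in for the outline hints the script pulls from the plan
  profile_hints := sys.sqlprof_attr(
    'BEGIN_OUTLINE_DATA',
    'FULL(@"SEL$1" "TEST"@"SEL$1")',
    'END_OUTLINE_DATA');

  dbms_sqltune.import_sql_profile(
    sql_text    => 'SELECT sum(blocks) from test where owner=''SYS''',
    profile     => profile_hints,
    name        => 'coe_10g08ytt2m5mu_1950795681',
    description => 'forced plan 1950795681',
    category    => 'DEFAULT',
    validate    => TRUE,
    replace     => TRUE,
    force_match => TRUE  -- apply to statements that differ only in literal values
  );
end;
/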

– Bobby

 

 

 

Categories: DBA Blogs

My Speaking Schedule for Oracle Open World 2014

Galo Balda's Blog - Wed, 2014-08-27 12:22

A quick post to let you know about the two presentations that I’ll be doing at Oracle Open World 2014.

Session ID:         UGF4482
Session Title:     “Getting Started with SQL Pattern Matching in Oracle Database 12c”
Venue / Room:  Moscone South – 301
Date and Time:  9/28/14, 13:30 – 14:15

Session ID:          CON4493
Session Title:      “Regular Expressions in Oracle Database 101″
Venue / Room:   Moscone South – 303
Date and Time:   10/2/14, 13:15 – 14:00

As usual, you might have to check before the session to make sure the room has not changed.

I hope to see you there.


Filed under: 12C, Open World, Oracle, Regular Expressions, Row Pattern Matching, SQL Tagged: 12C, Open World, Oracle, Regular Expressions, Row Pattern Matching, SQL
Categories: DBA Blogs

In-memory Consistency

Jonathan Lewis - Wed, 2014-08-27 12:00

A comment on one of my early blogs about the 12c in-memory database option asked how Oracle would deal with read-consistency. I came up with a couple of comments outlining the sort of thing I would look for in a solution, and this note is an outline of how I started to tackle the question – with a couple of subsequent observations. The data is (nearly) the same as the data I generated for my previous article on the in-memory database (and I’m running 12.1.0.2, of course):


create table t1 nologging
as
select  *
from    all_objects
where   rownum <= 50000
;

insert /*+ append */ into t1 select * from t1;
commit;

insert /*+ append */ into t1 select * from t1;
commit;

insert /*+ append */ into t1 select * from t1;
commit;

begin
        dbms_stats.gather_table_stats(user, 't1', method_opt=>'for all columns size 1');
end;
/

alter table t1
        inmemory priority high memcompress for query low
        inmemory memcompress for query high (object_type)
;

In this case I’ve made the inmemory priority high and I haven’t set any column to “no inmemory”, although I have made one column different from the rest (v$im_column_level doesn’t get populated unless there is some variation across columns). I have to say I couldn’t get very consistent behaviour in terms of when the data finally got into memory with this table creation – possibly something to do with using “alter table” rather than “create table” – but a second “alter table t1 inmemory;” seemed to do the trick if Oracle was playing hard to get.
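
(The check itself can be as simple as a quick query against v$im_segments – something along the following lines, waiting for bytes_not_populated to reach zero and populate_status to report COMPLETED:)

select
        segment_name, inmemory_size, bytes, bytes_not_populated, populate_status
from
        v$im_segments
where
        segment_name = 'T1'
;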

Once I’d checked that the table was in memory I collected performance figures from v$mystat and v$session_event for the following query:


select
        /* Test Run */
        last_ddl_time
from
        t1
where   t1.created > trunc(sysdate)
and     t1.object_type = 'TABLE'
and     t1.subobject_name is not null
;

Once I was satisfied that the in-memory option was working correctly, I went through the following steps:

  • Session 1: set transaction read only;
  • Session 1: run the query and collect performance figures
  • Session 2: do several small, committed updates, modifying a total of 30 or 40 random rows (a sketch of the sort of thing I mean by steps 3 and 4 appears just after this list)
  • Session 2: Flush the buffer cache – so that we can see future block acquisition
  • Session 1: re-run the query and collect performance figures – compare and contrast
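
A sketch of the sort of thing I mean by steps 3 and 4 – the details aren’t important as long as the changes are small, scattered across the table, and committed (this is an illustration, not necessarily the exact code I used):

begin
        for i in 1..4 loop
                update t1
                set    last_ddl_time = sysdate
                where  rowid in (
                        select rowid from t1 sample (0.01)
                        where  rownum <= 10
                );
                commit;
        end loop;
end;
/

alter system flush buffer_cache;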

The effect of the “set transaction read only;” was to force the session to do some extra work in the second execution of the query to make the data read-consistent back to the start of the “transaction”. The results were as follows (don’t forget that some of the numbers will relate to the action of collecting the performance figures):


First execution
===============
Name                                                                     Value
----                                                                     -----
Requests to/from client                                                      4
opened cursors cumulative                                                    5
user calls                                                                   6
recursive calls                                                              3
session logical reads                                                    6,680
DB time                                                                      1
non-idle wait count                                                          4
consistent gets                                                              3
consistent gets from cache                                                   3
consistent gets pin                                                          3
consistent gets pin (fastpath)                                               3
logical read bytes from cache                                           24,576
calls to kcmgcs                                                              7
calls to get snapshot scn: kcmgss                                            1
table scans (long tables)                                                    1
table scans (IM)                                                             1
IM scan CUs memcompress for query low                                        1
session logical reads - IM                                               6,677
IM scan bytes in-memory                                              5,155,309
IM scan bytes uncompressed                                          45,896,824
IM scan CUs columns theoretical max                                         18
IM scan rows                                                           399,984
IM scan rows optimized                                                 399,984
IM scan CUs split pieces                                                     1
IM scan CUs predicates received                                              3
IM scan CUs predicates applied                                               3
IM scan CUs predicates optimized                                             1
IM scan CUs pruned                                                           1
IM scan segments minmax eligible                                             1
session cursor cache hits                                                    5
workarea executions - optimal                                                1
parse count (total)                                                          4
execute count                                                                5
bytes sent via SQL*Net to client                                         1,150

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
SQL*Net message to client                             9           0           0.00        .000           0
SQL*Net message from client                           9           0           0.44        .049       8,408

Second Execution
================
Name                                                                     Value
----                                                                     -----
Requests to/from client                                                      4
opened cursors cumulative                                                    5
user calls                                                                   6
recursive calls                                                              3
session logical reads                                                    6,728
DB time                                                                      1
non-idle wait count                                                         35
enqueue requests                                                             2
enqueue releases                                                             2
physical read total IO requests                                             29
physical read total multi block requests                                    24
physical read total bytes                                            6,987,776
cell physical IO interconnect bytes                                  6,987,776
consistent gets                                                             92
consistent gets from cache                                                  92
consistent gets pin                                                         44
consistent gets pin (fastpath)                                               5
consistent gets examination                                                 48
logical read bytes from cache                                          753,664
physical reads                                                             853
physical reads cache                                                       853
physical read IO requests                                                   29
physical read bytes                                                  6,987,776
consistent changes                                                          48
free buffer requested                                                      894
CR blocks created                                                           41
physical reads cache prefetch                                              824
physical reads prefetch warmup                                             713
shared hash latch upgrades - no wait                                        43
calls to kcmgcs                                                              7
calls to get snapshot scn: kcmgss                                            1
file io wait time                                                        3,861
data blocks consistent reads - undo records applied                         48
rollbacks only - consistent read gets                                       41
table scans (long tables)                                                    1
table scans (IM)                                                             1
table scan rows gotten                                                   2,803
table scan blocks gotten                                                    41
IM scan CUs memcompress for query low                                        1
session logical reads - IM                                               6,636
IM scan bytes in-memory                                              5,155,309
IM scan bytes uncompressed                                          45,896,824
IM scan CUs columns theoretical max                                         18
IM scan rows                                                           399,984
IM scan rows optimized                                                 399,984
IM scan rows cache                                                          48
IM scan blocks cache                                                        41
IM scan CUs split pieces                                                     1
IM scan CUs predicates received                                              3
IM scan CUs predicates applied                                               3
IM scan CUs predicates optimized                                             1
IM scan CUs pruned                                                           1
IM scan segments minmax eligible                                             1
session cursor cache hits                                                    5
workarea executions - optimal                                                1
parse count (total)                                                          4
execute count                                                                5
bytes sent via SQL*Net to client                                         1,150
bytes received via SQL*Net from client                                   1,772
SQL*Net roundtrips to/from client                                            4

Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
Disk file operations I/O                              2           0           0.01        .003           0
db file sequential read                               5           0           0.01        .001           0
db file scattered read                               24           0           0.38        .016           0
SQL*Net message to client                            10           0           0.01        .001           0
SQL*Net message from client                          10           0           0.76        .076       8,408

There’s quite a lot of stats which probably aren’t interesting – and there’s one detail that is important but doesn’t appear (at least not clearly) and that’s the fact that the table in question had about 6,800 blocks below its highwater mark.

So, what do the stats tell us? The most obvious change, of course, is that we had to do some physical reads to get a result set: 24 multiblock reads and 5 single block reads (the latter from the undo tablespace). This is echoed in the session stats as 853 “physical reads cache” from 29 “physical read IO requests”. We can then see the specific read-consistency work (in two ways – with a third close approximation):

consistent changes                                                          48
CR blocks created                                                           41

data blocks consistent reads - undo records applied                         48
rollbacks only - consistent read gets                                       41

IM scan rows cache                                                          48
IM scan blocks cache                                                        41

We applied 48 undo change vectors to fix up 41 blocks to the correct point in time and used them to read 48 rows – the last pair of figures won’t necessarily match the first two pairs, but they do give us a measure of how much data we had to acquire from the cache when trying to do an in-memory scan.

The number 41 actually appears a couple more times: it’s “table scan blocks gotten” (which might seem a little odd since we got far more than 41 blocks by multiblock reads – but we only really wanted 41), and it’s also the change (downwards) in “session logical reads – IM”. Even when Oracle does a pure in-memory query it calculates the number of blocks it would have been reading and reports that number as “session logical reads” and “session logical reads – IM” – so there’s another way to get confused about buffer visits and another statistic to cross-check when you’re trying to work out how to calculate “the buffer hit ratio” ;)

After the first read the scattered reads all seemed to be 32 blocks of “intermittent” tablescan – perhaps this is a measure of the number of blocks that are compressed into a single in-memory chunk (for query low), but perhaps it’s a side effect of the “physical reads prefetch warmup” that Oracle may do when the cache has a lot of empty space. I’ll leave it as an exercise to the reader to refine the test (or think of a different test) to determine whether it’s the former or latter; it’s quite important to find this out because if Oracle is tracking change at the “in-memory chunk” rather than at the block level then a small amount of high-precision change to an in-memory table could result in a relatively large amount of “redundant” I/O as a long-running query tried to stay read-consistent.


Dress Code 2.0: Wearable Tech Meetup at the OTN Lounge at Oracle OpenWorld 2014

Usable Apps - Wed, 2014-08-27 09:38

What? Dress Code 2.0: Wearable Tech Meetup at the OTN Lounge at Oracle OpenWorld 2014

When? Tuesday, 30-September-2014, 4-6 PM

Partners! Customers! Java geeks! Developers everywhere! Lend me your (er, wearable tech) ears!

Get your best wearables technology gear on and come hang out with the Oracle Applications User Experience team and friends at the OTN Lounge Wearables Technology Meetup at Oracle OpenWorld 2014.

Oracle Apps UX and OTN augmenting and automating work with innovation and the cloud
  • See live demos of Oracle ideation and proof of concept wearable technology—smart watches, heads-up displays, sensors, and other devices and UIs—all integrated with the Oracle Java Cloud.
  • Try our wearable gadgets for size, and chat with the team about using OTN resources to design and build your own solutions.
  • Show us your own wearables and discuss the finer points of use cases, APIs, integrations, UX design, and fashion and style considerations for wearable tech development, and lots more!

Inexpensive yet tasteful gifts for attendees sporting wearable tech, while supplies last!

Note: A 2014 Oracle OpenWorld or JavaOne conference badge is required for admittance to the OTN Lounge. 

More?

Hands-On Programming with R by Garrett Grolemund

Surachart Opun - Wed, 2014-08-27 02:42
R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and MacOS.
The R language is useful for becoming a data scientist as well as a computer scientist. Here I mention a book about data science with R: Hands-On Programming with R – Write Your Own Functions and Simulations, by Garrett Grolemund. It shows how to solve the logistical problems of data science and how to write your own functions and simulations with R. Readers learn through practical data analysis projects (Weighted Dice, Playing Cards, Slot Machine) and come to understand R more deeply. In addition, Appendices A–E help with installing/updating R and R packages, as well as loading data and debugging R code.
Garrett Grolemund maintains shiny.rstudio.com, the development center for the Shiny R package.
Free Sampler. Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs