
Feed aggregator

Misconceptions about privacy and surveillance

DBMS2 - Mon, 2014-09-15 11:07

Everybody is confused about privacy and surveillance. So I’m renewing my efforts to consciousness-raise within the tech community. For if we don’t figure out and explain the issues clearly enough, there isn’t a snowball’s chance in Hades our lawmakers will get it right without us.

How bad is the confusion? Well, even Edward Snowden is getting it wrong. A Wired interview with Snowden says:

“If somebody’s really watching me, they’ve got a team of guys whose job is just to hack me,” he says. “I don’t think they’ve geolocated me, but they almost certainly monitor who I’m talking to online. Even if they don’t know what you’re saying, because it’s encrypted, they can still get a lot from who you’re talking to and when you’re talking to them.”

That is surely correct. But the same article also says:

“We have the means and we have the technology to end mass surveillance without any legislative action at all, without any policy changes.” The answer, he says, is robust encryption. “By basically adopting changes like making encryption a universal standard—where all communications are encrypted by default—we can end mass surveillance not just in the United States but around the world.”

That is false, for a myriad of reasons, and indeed is contradicted by the first excerpt I cited.

What privacy/surveillance commentators evidently keep forgetting is:

  • There are many kinds of privacy-destroying information. I think people frequently overlook just how many kinds there are.
  • Many kinds of organization capture that information, can share it with each other, and gain benefits from eroding or destroying privacy. Similarly, I think people overlook just how pervasive the incentive is to snoop.
  • Privacy is invaded through a variety of analytic techniques applied to that information.

So closing down a few vectors of privacy attack doesn’t solve the underlying problem at all.

Worst of all, commentators forget that the correct metric for danger is not just harmful information use, but chilling effects on the exercise of ordinary liberties. But in the interest of space, I won’t reiterate that argument in this post.

Perhaps I can refresh your memory why each of those bulleted claims is correct. Major categories of privacy-destroying information (raw or derived) include:

  • The actual content of your communications – phone calls, email, social media posts and more.
  • The metadata of your communications — who you communicate with, when, how long, etc.
  • What you read, watch, surf to or otherwise pay attention to.
  • Your purchases, sales and other transactions.
  • Video images, via stationary cameras, license plate readers in police cars, drones or just ordinary consumer photography.
  • Monitoring via the devices you carry, such as phones or medical monitors.
  • Your health and physical state, via those devices, but also inferred from, for example, your transactions or search engine entries.
  • Your state of mind, which can be inferred to various extents from almost any of the other information areas.
  • Your location and movements, ditto. Insurance companies also want to put monitors in cars to track your driving behavior in detail.

Of course, these categories overlap. For example, information about your movements can be derived not just from your mobile phone, but also from your transactions, from surveillance cameras, and from the health-monitoring devices that are likely to become much more pervasive in the future.

So who has reason to invade your privacy? Unfortunately, the answer boils down to “just about everybody”. In particular:

  • Any internet or telecom business would like to know, in great detail, what you are doing with their offerings, along with any other information that might influence what you’re apt to buy or do next.
  • Anybody who markets or sells to consumers wants to know similar things.
  • Similar things are true of anybody who worries about credit or insurance risk.
  • Anybody who worries about fraud wants to know who you’re connected to, and also wants to match you against any known patterns of fraud-related behavior.
  • Anybody who hires employees wants to know who might be likely to work hard, get sick or quit.
  • Similarly, they’d like to know who does or might engage in employee misconduct.
  • Medical researchers and caregivers have some of the most admirable reasons for wanting to violate privacy.

And that’s even without mentioning the most obvious suspects — law enforcement and national security agencies of many kinds, who can be presumed, in at least certain cases, to be able to get any information that’s available to any other organization.

Finally, my sense is:

  • People appreciate the potential of fancy-schmantzy language and image recognition.
  • The graph analysis done on telecom metadata is so simple that people generally “get” what’s going on.
  • Despite all the “big data analytics” hype, commentators tend to forget just how powerful machine learning/predictive analytics privacy intrusions could be. Those psychographic clustering techniques devised to support advertising and personalization could be applied in much more sinister ways as well.


Categories: Other

Benchmark: TokuDB vs. MariaDB / MySQL InnoDB Compression

Pythian Group - Mon, 2014-09-15 09:55

As the amount of data companies are interested in collecting grows, life becomes all the more difficult for IT staff at all levels within an organization. SAS enterprise storage devices that were once considered giants are now being phased out in favor of SSD arrays with features such as de-duplication, tape storage has pretty much been abandoned, and database engines are evolving in the same direction.

For many customers, just storing data is not enough because of the CAPEX and OPEX involved; smarter ways of storing the same data are required, and databases generally account for the greatest portion of storage requirements across an application stack. Lately, databases are used not only for storing data but in many cases also for storing logs. IT managers, developers and system administrators very often turn to the DBA and pose the age-old question “is there a way we can cut down on the space the database is taking up?”, and this question seems to be asked all the more frequently as time goes by.

This is a dilemma that cannot easily be solved for a MySQL DBA. What would the best way to resolve this issue be? Should I cut down on binary logging? Hmm… I need the binary logs in case I have to track down the transactions that have been executed and perform point-in-time recovery. Perhaps I should have a look at archiving data to disk and then compressing it using tar and gzip? Heck, if I do that I’ll have to manage and track multiple files and perform countless imports to re-generate the dataset when a report is needed from historical data. Maybe, just maybe, I should look into compressing the data files? This seems like a good idea… that way I can keep all my data and just take advantage of a few extra CPU cycles to keep it at a reasonable size – or does it?

Inspired by this age-old dilemma, I decided to take the latest version of TokuDB for a test run and compare it to InnoDB compression, which has been around for a while. Both technologies promise a great reduction in disk usage and even performance benefits – naturally, if data resides on a smaller portion of the disk, access time and seek time will decrease, although this isn’t really applicable to the SSD disks that are generally used in the industry today. So I put together a test system using an HP ProLiant Gen8 server with 4x Intel® Xeon® E3 processors, 4GB ECC RAM and a Samsung EVO SATA III SSD rated at 6Gb/s, and installed the latest version of Ubuntu 14.04 to run some benchmarks. I used the standard innodb-heavy configuration from the support-files directory, adding one change – innodb_file_per_table = ON. The reason for this is that TokuDB will not compress the shared tablespace, hence this would affect the results of the benchmarks. To be objective, I ran the benchmarks on both MySQL and MariaDB using 5.5.38, which is the latest version bundled with TokuDB.
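For reference, the relevant configuration change and a typical tpcc-mysql run are sketched below. This is only an outline of how such a benchmark can be reproduced, not the exact commands used here: the tpcc_load / tpcc_start arguments vary between tpcc-mysql versions, and the host, credentials and timings are placeholders.

# my.cnf (based on the innodb-heavy example file), with the one change mentioned above
[mysqld]
innodb_file_per_table = ON

# Load a 20-warehouse tpcc-mysql dataset, then run the benchmark for 120 seconds
# (tpcc_start reports progress roughly every 10 seconds)
./tpcc_load localhost tpcc root "password" 20
./tpcc_start -h127.0.0.1 -dtpcc -uroot -p"password" -w20 -c16 -r10 -l120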

The databases were benchmarked for speed and also for the space consumed by the tpcc-mysql dataset generated with 20 warehouses. So let’s first have a look at how much space was needed by TokuDB vs. InnoDB (using both compressed and uncompressed tables):

 

Configuration                 GB
TokuDB                        2.7
InnoDB Compressed Tables      4.2
InnoDB Regular Tables         4.8

 

TokuDB was a clear winner here, of course the space savings depend on the type of data stored in the database however with the same dataset it seems TokuDB is in the lead. Seeing such a gain in storage requirements of course will make you wonder how much overhead is incurred in reading and writing this data, so lets have a look at the “tpm-C” to understand how many orders can be processed per minute on each. Here I have also included results for MariaDB vs. MySQL. The first graph shows the amount of orders that were processed per 10 second interval and the second graph shows the total “tpm-C” after the tests were run for 120 seconds:

 

Toku_Maria_MySQL

Figure 1 – Orders processed @ 10 sec interval

 

Interval | MariaDB 5.5.38 | MariaDB 5.5.38 InnoDB Compressed | TokuDB on MariaDB 5.5.38 | MySQL 5.5.38 | MySQL 5.5.38 InnoDB Compressed | TokuDB on MySQL 5.5.38
10 | 5300 | 529 | 5140 | 5667 | 83 | 5477
20 | 5743 | 590 | 5112 | 5513 | 767 | 5935
30 | 5322 | 596 | 4784 | 5267 | 792 | 5931
40 | 4536 | 616 | 4215 | 5627 | 774 | 6107
50 | 5206 | 724 | 5472 | 5770 | 489 | 6020
60 | 5827 | 584 | 5527 | 5956 | 402 | 6211
70 | 5588 | 464 | 5450 | 6061 | 761 | 5999
80 | 5679 | 424 | 5474 | 5775 | 789 | 6029
90 | 5759 | 649 | 5490 | 6258 | 788 | 5998
100 | 5288 | 611 | 5584 | 6044 | 765 | 6026
110 | 4637 | 575 | 4948 | 5753 | 720 | 5314
120 | 3696 | 512 | 4459 | 930 | 472 | 292

Toku_Maria_MySQL_2

Figure 2 - “tpm-C” for the 120-second test run

MySQL Edition                                “tpm-C”
TokuDB on MySQL 5.5.38                       32669.5
MySQL 5.5.38                                 32310.5
MariaDB 5.5.38                               31290.5
TokuDB on MariaDB 5.5.38                     30827.5
MySQL 5.5.38 InnoDB Compressed Tables         4151
MariaDB 5.5.38 InnoDB Compressed Tables       3437

 

Surprisingly enough, the InnoDB table compression results were very low – perhaps this would have shown better results on traditional rotating SAS / SATA disks. The impact on performance was incredibly high and the savings on disk space were marginal compared to those of TokuDB, so once again it seems we have a clear winner! TokuDB on MySQL outperformed both MySQL and MariaDB with uncompressed tables. The findings are interesting because in previous benchmarks for older versions of MariaDB and MySQL, MariaDB would generally outperform MySQL; however, there are many factors that should be considered.

These tests were performed on Ubuntu 14.04, while the previous tests I mentioned were performed on CentOS 6.5, and the hardware was also slightly different (Corsair SSD 128GB vs. Samsung EVO 256GB). Please keep in mind that these benchmarks reflect the performance of a specific configuration, and there are many factors that should be considered when choosing the MySQL / MariaDB edition to use in production.

As per this benchmark, the results for TokuDB were nothing less than impressive and it will be very interesting to see the results on the newer versions of MySQL (5.6) and MariaDB (10)!

Categories: DBA Blogs

TCPS and SSLv2Hello

Laurent Schneider - Mon, 2014-09-15 08:19

Thanks to platform independence, the same Java code works on different platforms.


import java.util.Properties;
import java.security.Security;
import java.sql.*;
import javax.net.ssl.*;

public class KeyStore {
  public static void main(String argv[]) 
      throws SQLException {
    String url="jdbc:oracle:thin:@(DESCRIPTION="+
      "(ADDRESS=(PROTOCOL=TCPS)(Host=SRV01)("+
      "Port=1521))(CONNECT_DATA=(SID=DB01)))";
    Properties props = new Properties();
    props.setProperty("user", "scott");
    props.setProperty("password", "tiger");
    props.setProperty("javax.net.ssl.trustStore",
      "keystore.jks");
    props.setProperty(
      "javax.net.ssl.trustStoreType","JKS");
    props.setProperty(
      "javax.net.ssl.trustStorePassword","***");
    DriverManager.registerDriver(
      new oracle.jdbc.OracleDriver());
    Connection conn = 
      DriverManager.getConnection(url, props);
    ResultSet res = conn.prepareCall("select "+
       "sys_context('USERENV','NETWORK_PROTOCOL"+
       "') txt from dual").
         executeQuery();
    res.next();
    System.out.println("PROTOCOL: "+
      res.getString("TXT"));
  }
}

The code above works perfectly on Linux and Windows.

On AIX, however, you will get an IllegalArgumentException: SSLv2Hello at com.ibm.jsse2.sb.a unless you add:


props.setProperty("oracle.net.ssl_version","3.0");

The default value does not work with the Oracle client on AIX. Just set it explicitly to 1.0 or 3.0 and you will be a bit less platform-dependent.

OOW - Focus On Support and Services for Fusion Apps/Fusion Middleware

Chris Warticki - Mon, 2014-09-15 08:00
Focus On Support and Services for Fusion Apps/Fusion Middleware

Monday, Sep 29, 2014

Conference Sessions

Oracle ERP Cloud: Overview, Implementation Strategies, and Best Practices
Patricia Burke, Director, Oracle
5:00 PM - 5:45 PM, Westin Market Street - City, CON7288

Understanding Patching for Your Oracle Fusion Cloud Services
Marc Lamarche, Senior Director, Global Fusion HCM Support, Oracle
5:15 PM - 6:00 PM, Moscone West - 3007, CON8476

Tuesday, Sep 30, 2014

Conference Sessions

Best Practices for Maintaining Oracle Fusion Middleware
Ken Vincent, Senior Principal Technical Support Engineer, Oracle
10:45 AM - 11:30 AM, Moscone West - 3022, CON8285

Wednesday, Oct 01, 2014

Conference Sessions

Modernize Your Analytics Solutions
Rob Reynolds, Senior Director, Oracle
Hermann Tse, Oracle
Gary Young, Senior Director, Big Data / Analytics, Oracle
10:15 AM - 11:00 AM, Moscone West - 3016, CON5238

Is Your Organization Trying to Focus on an ERP Cloud Strategy?
Patricia Burke, Director, Oracle
Bary Dyer, Vice President, Oracle
10:00 AM - 10:45 AM, Westin Market Street - Concordia, CON7614

Compensation in the Cloud: Proven Business Case
Arul Senapathi, Director, Global Oracle HRIS
Rich Isola, Sr. Practice Director, Oracle
Kishan Kasety, Consulting Technical Manager, Oracle
12:30 PM - 1:15 PM, Palace - Gold Ballroom, CON2709

Succession and Talent Review at Newfield Exploration
Blane Kingsmore, HRIS Manager, Newfield Exploration
Rich Isola, Sr. Practice Director, Oracle
Louann Weaver, Practice Director, Oracle
3:00 PM - 3:45 PM, Palace - Gold Ballroom, CON2712

Thursday, Oct 02, 2014

Conference Sessions

Oracle Sales Cloud: Overview, Implementation Strategies, and Best Practices
Tom Griffin, Sr. Principal Consultant, Oracle
Mary Wade, Solution Manager, Oracle
10:15 AM - 11:00 AM, Moscone West - 2001, CON7331

Is Your Organization Trying to Focus on a CX Cloud Strategy?
John Cortez, Principal Solutions Architect, Oracle
Won Park, Consulting Solution Director, Oracle
Mary Wade, Solution Manager, Oracle
11:30 AM - 12:15 PM, Moscone West - 3009, CON7575

My Oracle Support Monday Mix

Monday, Sep 29

Join us for a fun and relaxing happy hour at the annual My Oracle Support Monday Mix. This year’s gathering is Monday, September 29 from 6:00 to 8:00 p.m. at the ThirstyBear Brewing Company – just a 3 minute walk from Moscone Center. Admission is free for Premier Support customers with your Oracle OpenWorld badge. Visit our web site for more details: http://www.oracle.com/goto/mondaymix
6:00 PM - 8:00 PM, ThirstyBear Brewing Company

Oracle Support Stars Bar & Mini Briefing Center

Monday, Sep 29

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.
9:45 AM - 6:00 PM, Moscone West Exhibition Hall, 3461 and 3908

Tuesday, Sep 30

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.
9:45 AM - 6:00 PM, Moscone West Exhibition Hall, 3461 and 3908

Wednesday, Oct 01

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.
9:45 AM - 3:45 PM, Moscone West Exhibition Hall, 3461 and 3908

To secure a seat in a session, please use Schedule Builder to add to your Schedule.

SQL Saturday 323: SQL Server AlwaysOn and availability groups session slides

Yann Neuhaus - Sun, 2014-09-14 23:50

This SQL Saturday’s edition in Paris is now over. It was a great event with a lot of French and international speakers. There were also many attendees, which shows that this event is a great place to share about SQL Server technologies. Maybe the Montparnasse tower in Paris played a role here, with its panoramic view over Paris from the 40th floor!


blog_16_landscape_from_spuinfo

blog_16_badge_sqlsaturdays


For those who didn’t attend on Saturday, you will find our SQL Server AlwaysOn and availability groups session slides here: SQLSaturday-323-Paris-2014---AlwaysOn-session.pptx

Don’t forget the next big event of the SQL Server community in Paris (December 1-2): Journées SQL Server

We will probably be there and of course we will be delighted to meet you!

Documentum upgrade project - ActiveX and D2-Client 3.1Sp1

Yann Neuhaus - Sun, 2014-09-14 19:31

This is another blog post about our Documentum upgrade project. This time, the following issue occurred: the ActiveX control could not be installed using the D2-Client. We had to access the D2-Config URL to have it installed, which is not a workable approach for a normal user.

Analysis

The workstation had the ActiveX for D2 3.0 installed, the version before the upgrade. Under C:\Windows\Downloaded Program Files, we had:

ctx
ctx.ocx  
D2UIHelper.dll

On my workstation where I could install (using D2-Config) the D2 3.1.1 ActiveX, I also had C:\Windows\Downloaded Program Files\CONFLICT.* folders containing D2UIHelper.dll and ctx.inf

By checking the content of ctx.inf in this new cab, I saw that we had the wrong version (see FileVersion) of ctx.ocx:

 [ctx.ocx]  
file-win32-x86=thiscab  
RegisterServer=yes  
clsid={8C55AA13-D7D9-4539-8B20-78BC4A795681}  
DestDir=  
FileVersion=3,0,0,2

By checking the "ctx.cab" file in "D2-Client/install" and "D2-Config/install" on the application server I found that we did not have the same version, both ctx-cab had the same date and size but the digital signature was different:  

D2-Config ctx-cab: &8206;17 &8206;September &8206;2013 10:56:11,  
D2-Client: 19 &8206;April &8206;2013 17:03:08

 

Solution

To solve the issue, I copied the "ctx.cab" from the "D2-Config/install" path to "D2-Client/install/". Once this was done, the ActiveX could be installed using the D2-Client URL.

It was confirmed by the vendor that this is a bug in the delivered package.

Simple Solutions Do Not Equal Easy Builds

Floyd Teter - Sun, 2014-09-14 18:28
Am I the only one that often tries to make solutions far more difficult than needed?  My first approach to any challenge is likely the most complicated thing I could create.  For example, I was working on something with Oracle Apex last week.  Came up with what I thought was a nifty new feature and started building.  After the equivalent of several hundred lines of code, I had something that worked...just not as well as I hoped.

After sitting back and letting things percolate...with a bit of cussing and fussing...I wound up deleting everything I'd built for that nifty new feature.  Replaced it with about two minutes of work.  The replacement was probably the equivalent of 25 or 30 lines of code.  And now the feature worked exactly as I hoped.

Yeah, I'm pretty sure I could complicate a ball bearing if given the opportunity to go off and run with the first ideas that pop into my head.

My point in all this...simple solutions do not equal easy builds, at least when it comes to building solutions.  It takes brain power to refine ideas and initial concepts into simple, elegant solutions.

Eli and the Runaway Diaper now available!

FeuerThoughts - Sun, 2014-09-14 09:12

In 2013, the big sensation in (my) children's publishing was the release of Vivian Vulture and the Cleanup Culture.

In 2014, the honor goes to Eli and the Runaway Diaper.
It's a book about a diaper that gets tired of the day in day out grind of covering Eli's bottom (the names have been changed to protect the innocent). It decides that it's time to look around for a new and hopefully better (more appreciative) bottom.

Eli is initially dismayed, but happy to join the diaper on its quest, so off they go on a grand adventure!

Illustrated by Robert Melegari, it's a fun, light-hearted journey to self-discovery and self-improvement.

You can order it on Amazon,  Createspace, and so on. But if you order it from me, I will sign it and ship it off to you, all for the list price of $12.99.
Categories: Development

Kerberos SSO with Liferay 6.1

Yann Neuhaus - Sun, 2014-09-14 02:22

In my previous blog, I described the process to install a Kerberos client and how to kerberize Alfresco. In this blog, I will continue in the same way and present another application that can be configured to use the Kerberos MIT KDC: Liferay. Liferay is a very popular open source solution, and a leader, for enterprise web platforms (intranet/extranet/internet web sites). Liferay can be bundled with several application servers like Tomcat, JBoss or GlassFish, but it can also be installed from scratch (deployment of a war file) on a lot of existing application servers.

 

For this blog, I will need the following properties/variables:

  • example.com = the DNS Domain
  • EXAMPLE.COM = the KDC REALM
  • kdc01oel.example.com = the FQDN of the KDC
  • mpatou@EXAMPLE.COM = the principal of a test user
  • lif01.example.com = the FQDN of the Liferay host server
  • otrs01.example.com = the FQDN of the OTRS host server

 

Please be aware that some configurations below may not be appropriate for a production environment. For example, I don't configure Apache to run as a different user like "www" or "apache", I don't specify the installation directory for Apache or Kerberos, and so on.

Actual test configuration:

  • OS: Oracle Enterprise Linux 6
  • Liferay: Liferay Community Edition 6.1.1 GA2 - installed on /opt/liferay-6.1.1
  • Application Server: Tomcat 7.0.27 - listening on port 8080

 

This version of Liferay doesn't have a default connection to a Linux KDC, so everything has to be done from scratch. The first thing to do is to add an Apache httpd in front of Liferay, if there is not already one, to process the Kerberos requests. This part is described very quickly, without extensive explanations, because we don't need all the functionalities of Apache. Of course you can, if you want, add some other configurations to the Apache httpd to manage, for example, an SSL certificate, the security of your application or other very important features of Apache... So first let's check that the Tomcat used by Liferay is well configured for Kerberos with an Apache front-end:

  • The HTTP port should be 8080 for this configuration
  • The maxHttpHeaderSize must be increased to avoid authentication errors, because an HTTP header containing a Kerberos ticket is much bigger than a standard HTTP header
  • The AJP port should be 8009 for this configuration
  • The tomcatAuthentication must be disabled to delegate the authentication to Apache

 

To verify that, just take a look at the file server.xml:

[root ~]# vi /opt/liferay-6.1.1/tomcat-7.0.27/conf/server.xml
1.png
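In case the screenshot doesn't display, the relevant part of server.xml looks roughly like the sketch below: the HTTP and AJP connectors discussed above, with a larger maxHttpHeaderSize (the 65536 value is just an example) and tomcatAuthentication disabled on the AJP connector.

<!-- HTTP connector on port 8080, with a bigger header size for Kerberos tickets -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxHttpHeaderSize="65536"
           redirectPort="8443" />

<!-- AJP connector on port 8009, authentication delegated to Apache httpd -->
<Connector port="8009" protocol="AJP/1.3"
           tomcatAuthentication="false"
           redirectPort="8443" />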

 

Then download Apache httpd from the Apache web site (or use yum/apt-get), extract the downloaded file and go inside the extracted folder to install this Apache httpd with some default parameters:

[root ~]# cd /opt
[root opt]# wget http://mirror.switch.ch/mirror/apache/dist//httpd/httpd-2.4.10.tar.gz
[root opt]# tar -xvf httpd-2.4.10.tar.gz
[root opt]# cd httpd-2.4.10
[root httpd-2.4.10]# ./configure
[root httpd-2.4.10]# make
[root httpd-2.4.10]# make install

 

This will install Apache httpd 2.4.10 under /usr/local/apache2. There could be some errors during the execution of "./configure", "make" or "make install", but these kinds of issues are generally well known and the solutions can be found all over the Internet. An installation with the command apt-get will put the configuration file (named apache2.conf, not httpd.conf) under /etc/apache2/, so please adapt the description below to your environment.

 

Once Apache httpd is installed, it must be configured to understand and use Kerberos for all incoming requests:

[root httpd-2.4.10]# vi /usr/local/apache2/conf/httpd.conf
# Add at the end of the file
Include /opt/liferay-6.1.1/tomcat-7.0.27/conf/mod_jk.conf
    Include /usr/local/apache2/conf/mod_kerb.conf

[root httpd-2.4.10]# vi /usr/local/apache2/conf/mod_kerb.conf
# New file for the configuration of the module "mod_auth_kerb" and Kerberos
    ServerAdmin root@localhost
    # The FQDN of the host server
    ServerName lif01.example.com:80

# Of course, find the location of the mod_auth_kerb and replace it there if
# it's not the same
    LoadModule auth_kerb_module /usr/local/apache2/modules/mod_auth_kerb.so

<Location />
    AuthName "EXAMPLE.COM"
        AuthType Kerberos
        Krb5Keytab /etc/krb5lif.keytab
        KrbAuthRealms EXAMPLE.COM
        KrbMethodNegotiate On
        KrbMethodK5Passwd On
        require valid-user
    </Location>

 

The next step is to build mod_auth_kerb and mod_jk. The build of mod_auth_kerb requires an already installed Kerberos client on this Liferay server. As seen below, my Kerberos client on this server is under /usr/local. Moreover, the build of mod_jk may require specifying the apxs binary used by Apache, which is why there is the "--with-apxs" parameter:

[root httpd-2.4.10]# cd ..
[root opt]# wget http://sourceforge.net/projects/modauthkerb/files/mod_auth_kerb/mod_auth_kerb-5.4/mod_auth_kerb-5.4.tar.gz/download
[root opt]# tar -xvf mod_auth_kerb-5.4.tar.gz
[root opt]# cd mod_auth_kerb-5.4
[root mod_auth_kerb-5.4]# ./configure --with-krb4=no --with-krb5=/usr/local --with-apache=/usr/local/apache2
[root mod_auth_kerb-5.4]# make
[root mod_auth_kerb-5.4]# make install

[root mod_auth_kerb-5.4]# cd ..
[root opt]# wget http://mirror.switch.ch/mirror/apache/dist/tomcat/tomcat-connectors/jk/tomcat-connectors-1.2.40-src.tar.gz
[root opt]# tar -xvf tomcat-connectors-1.2.40-src.tar.gz
[root opt]# cd tomcat-connectors-1.2.40-src/native
[root native]# ./configure --with-apxs=/usr/local/apache2/bin/apxs --enable-api-compatibility
[root native]# make
[root native]# make install

 

The module mod_auth_kerb doesn't need extra configuration, but that's not the case for mod_jk, for which we need to define several elements such as the log file and log level, the JkMount parameters that define which HTTP requests should be sent to the AJP connector, and so on:

[root native]# cd ../..
[root opt]# vi /opt/liferay-6.1.1/tomcat/conf/mod_jk.conf
LoadModule jk_module /usr/local/apache2/modules/mod_jk.so
    JkWorkersFile /opt/liferay-6.1.1/tomcat-7.0.27/conf/workers.properties
    JkLogFile /usr/local/apache2/logs/mod_jk.log
    JkLogLevel debug
    JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
    # JkOptions indicate to send SSL KEY SIZE,
    JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
    # JkRequestLogFormat set the request format
    JkRequestLogFormat "%w %V %T"
    JkMount / ajp13
    JkMount /* ajp13

[root opt]# vi /opt/liferay-6.1.1/tomcat/conf/workers.properties
    # Define 1 real worker named ajp13
    worker.list=ajp13
    worker.ajp13.type=ajp13
    worker.ajp13.host=localhost
    worker.ajp13.port=8009
    worker.ajp13.lbfactor=50
    worker.ajp13.cachesize=10
    worker.ajp13.cache_timeout=600
    worker.ajp13.socket_keepalive=1
    worker.ajp13.socket_timeout=300

 

Finally, the last configuration for Apache httpd is to configure a krb5.conf file for the Kerberos client to know where the KDC is located:

[root opt]# vi /etc/krb5.conf
    [libdefaults]
        default_realm = EXAMPLE.COM

    [realms]
        EXAMPLE.COM = {
            kdc = kdc01oel.example.com:88
            admin_server = kdc01oel.example.com:749
            default_domain = example.com
        }

    [domain_realm]
        .example.com = EXAMPLE.COM
        example.com = EXAMPLE.COM

 

Once this is done, there is one step to execute on the KDC side for the configuration of Kerberos. Indeed, there is a configuration above in the file mod_kerb.conf that shows a keytab file named krb5lif.keytab. By default, this file doesn't exist so we must create it! From the KDC host server, execute the following commands to create a new service account for Liferay and then create the keytab for this service account:

[root opt]# kadmin
Authenticating as principal root/admin@EXAMPLE.COM with password.
Password for root/admin@EXAMPLE.COM:  ##Enter here the root admin password##

kadmin:  addprinc HTTP/lif01.example.com@EXAMPLE.COM
WARNING: no policy specified for HTTP/lif01.example.com@EXAMPLE.COM; defaulting to no policy
Enter password for principal "HTTP/lif01.example.com@EXAMPLE.COM":  ##Enter a new password for this service account##
Re-enter password for principal "HTTP/lif01.example.com@EXAMPLE.COM":  ##Enter a new password for this service account##
Principal "HTTP/lif01.example.com@EXAMPLE.COM" created.

kadmin:  ktadd -k /etc/krb5lif.keytab HTTP/lif01.example.com@EXAMPLE.COM
Entry for principal HTTP/lif01.example.com@EXAMPLE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/etc/krb5lif.keytab.
Entry for principal HTTP/lif01.example.com@EXAMPLE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/etc/krb5lif.keytab.
Entry for principal HTTP/lif01.example.com@EXAMPLE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/etc/krb5lif.keytab.
Entry for principal HTTP/lif01.example.com@EXAMPLE.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/etc/krb5lif.keytab.

kadmin:  exit

[root opt]# scp /etc/krb5lif.keytab root@lif01.example.com:/etc/
root@lif01.example.com's password:
krb5lif.keytab [====================================>] 100% 406 0.4KB/s 00:00
[root opt]# exit

 

From now on, all configurations required by Apache & Tomcat to handle Kerberos tickets are done. The only remaining step, and certainly the most complicated one, is to configure Liferay to understand and use this kind of authentication. For that purpose, a Liferay hook must be created (in Eclipse using the Liferay plugin, for example). Let's name this Liferay project created with the liferay-plugins-sdk-6.1.1: "custom-hook". For the configuration below, I will suppose that this project is at the following location: "C:/liferay-plugins-sdk-6.1.1/hooks/custom-hook/", and this location is abbreviated to %CUSTOM_HOOK%. You will find at the bottom of this blog a link to download the files that should be in this custom-hook. Feel free to use it!

 

To create a new authentication method, the first step is to create and edit the file %CUSTOM_HOOK%/docroot/WEB-INF/liferay-hook.xml as follows:

liferay-hook.png
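If the screenshot doesn't render, the liferay-hook.xml essentially just registers our custom portal.properties file. A minimal sketch of such a file for Liferay 6.1 looks like this:

<?xml version="1.0"?>
<!DOCTYPE hook PUBLIC "-//Liferay//DTD Hook 6.1.0//EN"
    "http://www.liferay.com/dtd/liferay-hook_6_1_0.dtd">

<hook>
    <!-- Load our portal.properties overrides (the auto.login.hooks entry below) -->
    <portal-properties>portal.properties</portal-properties>
</hook>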

 

Then, create and insert in the file %CUSTOM_HOOK%/docroot/WEB-INF/src/portal.properties the following lines:

    # This line defines the new auto login authentication used by Liferay
    auto.login.hooks=com.liferay.portal.security.auth.KerberosAutoLogin

 

And finally, the last step is to create the Java class %CUSTOM_HOOK%/docroot/WEB-INF/src/com/liferay/portal/security/auth/KerberosAutoLogin with the following content. This class is used to retrieve the Kerberos principal from the Kerberos ticket received by Apache, and then to map this principal to a Liferay user in order to log him in automatically. Please be aware that this code can probably not be used as such, because it's specific to our company: the screenName used in Liferay is equal to the principal used in the KDC. That's why there are some logger.info calls in the code: to help you find the right mapping between the Liferay screenName and the KDC principal.

AutoLogin.png
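Since the class is only shown as a screenshot, here is a rough sketch of what an AutoLogin implementation of this kind typically looks like. It is not the exact class used in our project: the principal-to-screenName mapping (stripping the realm and lower-casing) and the absence of logging are simplifications, so adapt it to your own mapping rules.

package com.liferay.portal.security.auth;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.liferay.portal.model.User;
import com.liferay.portal.service.UserLocalServiceUtil;
import com.liferay.portal.util.PortalUtil;

public class KerberosAutoLogin implements AutoLogin {

    public String[] login(HttpServletRequest request, HttpServletResponse response)
            throws AutoLoginException {
        try {
            // Apache/mod_auth_kerb exposes the authenticated principal as the remote user,
            // e.g. "mpatou@EXAMPLE.COM"
            String principal = request.getRemoteUser();
            if (principal == null) {
                return null;
            }

            // In our setup the Liferay screenName matches the KDC principal;
            // here the realm part is stripped as an example
            String screenName = principal.split("@")[0].toLowerCase();

            long companyId = PortalUtil.getCompanyId(request);
            User user = UserLocalServiceUtil.getUserByScreenName(companyId, screenName);

            // Credentials expected by Liferay's auto-login mechanism:
            // userId, password, and whether the password is already encrypted
            return new String[] {
                String.valueOf(user.getUserId()),
                user.getPassword(),
                Boolean.TRUE.toString()
            };
        }
        catch (Exception e) {
            throw new AutoLoginException(e);
        }
    }
}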

 

After that, just build your hook and deploy it using the liferay deploy folder (/opt/liferay-6.1.1/deploy/). If necessary, restart Apache and Liferay using the services or the control scripts:

[root opt]# /opt/liferay-6.1.1/tomcat-7.0.27/bin/shutdown.sh
[root opt]# /opt/liferay-6.1.1/tomcat-7.0.27/bin/startup.sh
[root opt]# /usr/local/apache2/bin/apachectl -k stop
[root opt]# /usr/local/apache2/bin/apachectl -f /usr/local/apache2/conf/httpd.conf

 

Wait for Liferay to start and that's it: you should be able to obtain a Kerberos ticket from the KDC, access Liferay (through Apache on port 80) and be logged in automatically. That's MAGIC!

Thanks for reading and I hope you will be able to work with Kerberos for a long long time =).

 

Custom hook download link: custom-hook.zip

Delphix

Jonathan Lewis - Sat, 2014-09-13 15:29

I’ve often found in my travels that I’ve come up with a (potential) solution to a problem and wanted to test it “right now” – only to run onto the horns of a dilemma. A typical client offers me one of two options:

  • Option 1: test it on the production system – which is generally frowned on, and sometimes I can’t even get access to the production system anyway.
  • Option 2: test it on something that looks nothing like the production system – and hope that that’s in some way a valid test.

The first option – when it’s offered – is quite stressful, especially if the original performance problem involved updates; the second option – which is the commoner offer – is also quite stressful, because it’s quite hard to prove that the fix is doing what it’s supposed to and that it will do it at the expected speed when it gets promoted to production. What I’d really like to do is clone the production system and run tests and demonstrations on the clone. In fact, that’s exactly what does happen in some rare cases – but then it usually takes several hours (or a couple of days) rather than the few minutes I’d prefer to wait.

This is why I like Delphix and why I’m prepared to say so whenever I see a need for it on a client site. It’s also why I’ve agreed to do a little on-line webinar about the product on Thursday 18th Sept at 10:00 am Pacific Time (6:00 pm UK time). The event is free, but you need to register to attend.

 

Disclosure:  The company paid me to visit their premises in Menlo Park CA last year so that I could experiment with their product and talk to their technical staff, without requesting any right to edit or limit any review I might subsequently publish about their product. The company has also paid me for my time for Thursday’s webinar but, again, has made no attempt to exercise editorial control over the content.

 

 


Sayonara to Sequences and Trouble for Triggers – Fun and Games in Oracle 12c

The Anti-Kyte - Sat, 2014-09-13 08:57

Ah, Nostalgia.
Not only can I remember the Good Old Days, I also remember them being far more fun than they probably were at the time.
Oh yes, and I was much younger….and had hair.
Yes, the Good Old Days, when Oracle introduced PL/SQL database packages, partitioning, and when the sequence became extinct.
Hang on, I don’t remember that last one…

The Good Old Ways

Say we have a requirement for a table to hold details of gadgets through the ages.
This table has been modelled with a synthetic key, and also a couple of audit columns so we can track when a row was created and by whom.
Traditionally, the code to fulfill this requirement would follow a familiar pattern.

The table might look something like this :

create table gadgets
(
    id number constraint dino_pk primary key,
    gadget_name varchar2(100) not null,
    created_by varchar2(30) default user,
    creation_date date default sysdate
)
/

NOTE – you’d normally expect to see NOT NULL constraints on the CREATED_BY and CREATION_DATE columns. I’ve left these off for the purposes of the examples that follow.

We’ll also want to have a sequence to generate a value for the id…

create sequence gad_id_seq
/

As it stands, this implementation has one or two issues…

-- Specify all values
insert into gadgets( id, gadget_name, created_by, creation_date)
values( gad_id_seq.nextval, 'Dial-Up Modem', user, sysdate)
/

-- omit the "default" columns
insert into gadgets( id, gadget_name)
values( gad_id_seq.nextval, 'Tablet Computer')
/

-- specify null values for the "default" columns
-- also, don't use the sequence for the id value
insert into gadgets( id, gadget_name, created_by, creation_date)
values(3, 'Netbook', null, null)
/

The first problem becomes apparent when we query the table after these inserts…

SQL> select * from gadgets;

  ID GADGET_NAME	  CREATED_BY	       CREATION_DATE
---- -------------------- -------------------- --------------------
   1 Dial-Up Modem	  MIKE		       31-AUG-14
   2 Tablet Computer	  MIKE		       31-AUG-14
   3 Netbook

Yes, although the insert was successful for the Netbook row, the explicit specification of CREATED_BY and CREATION_DATE values as NULL has overridden the default values defined on the table.

What’s more, there’s nothing enforcing the use of the sequence to generate the ID value. This becomes a problem when we go to do the next insert…


-- Next insert using sequence...
insert into gadgets(id, gadget_name, created_by, creation_date)
values( gad_id_seq.nextval, 'Smart Phone', null, null)
/

insert into gadgets(id, gadget_name, created_by, creation_date)
*
ERROR at line 1:
ORA-00001: unique constraint (MIKE.DINO_PK) violated

Because we didn’t use the sequence for the previous insert, it’s still set to the value it had after it was last invoked…


SQL> select gad_id_seq.currval from dual;

   CURRVAL
----------
	 3

The traditional solution to these problems is, of course, a trigger…

create or replace trigger gad_bir_trg
    before insert on gadgets
    for each row
    --
    -- Make sure that :
    --  - id is ALWAYS taken from the sequence
    --  - created_by and creation date are always populated
begin
	:new.id := gad_id_seq.nextval;
    :new.created_by := nvl(:new.created_by, user);
    :new.creation_date := nvl(:new.creation_date, sysdate);
end;
/

Now, if we re-run our insert…


insert into gadgets(id, gadget_name, created_by, creation_date)
values( gad_id_seq.nextval, 'Smart Phone', null, null)
/

1 row inserted


SQL> select * from gadgets where gadget_name = 'Smart Phone';

  ID GADGET_NAME	  CREATED_BY	       CREATION_DATE
---- -------------------- -------------------- --------------------
   5 Smart Phone	  MIKE		       31-AUG-14

Yes, even though we’ve invoked the sequence in the INSERT statement, the trigger invokes it again and assigns that value to the ID column ( in this case 5, instead of 4).
Reassuringly though, the CREATED_BY and CREATION_DATE columns are now populated.

So, in order to fulfill our requirements, we need to create three database objects :

  • A table
  • a sequence
  • a DML trigger on the table

Or at least, we did….

12c – the Brave New World

Oracle Database 12c introduces a couple of enhancements which will enable us to do away with our trigger completely.
First of all…

Changes to Default Values Specification

You can now specify a default value for a column that will be used, even if NULL is explicitly specified on Insert.
Furthermore, you can now also use a sequence number as a default value for a column.

If we were writing this application in 12c, then the code would look a bit different….


create sequence gad_id_seq
/

create table gadgets
(
    id number default gad_id_seq.nextval 
        constraint dino_pk primary key,
    gadget_name varchar2(100) not null,
    created_by varchar2(30) default on null user,
    creation_date date default on null sysdate
)
/

We’ve dispensed with the trigger altogether.
The ID column now uses the sequence as a default.
The CREATED_BY and CREATION_DATE columns will now be populated, even if NULL is explicitly specified as a value in the INSERT statement….


-- Specify all values
insert into gadgets( id, gadget_name, created_by, creation_date)
values( gad_id_seq.nextval, 'Dial-Up Modem', user, sysdate)
/

-- omit the "default" columns
insert into gadgets( id, gadget_name)
values( gad_id_seq.nextval, 'Tablet Computer')
/

-- specify null values for the "default" columns
-- also, don't use the sequence for the id value
insert into gadgets( id, gadget_name, created_by, creation_date)
values(3, 'Netbook', null, null)
/




  ID GADGET_NAME	  CREATED_BY	 CREATION_
---- -------------------- -------------- ---------
   1 Dial-Up Modem	  MIKE		 31-AUG-14
   2 Tablet Computer	  MIKE		 31-AUG-14
   3 Netbook		  MIKE		 31-AUG-14

Whilst we can now guarantee that the CREATED_BY and CREATION_DATE columns are populated, we are still left with one issue, or so you might think…

-- Next insert using sequence...
insert into gadgets(id, gadget_name, created_by, creation_date)
values( gad_id_seq.nextval, 'Smart Phone', null, null)
/

1 row inserted

That’s odd. You’d think that the sequence NEXTVAL would be 3, thus causing the same error as before. However…

 select * from gadgets;

  ID GADGET_NAME	  CREATED_BY	 CREATION_
---- -------------------- -------------- ---------
   1 Dial-Up Modem	  MIKE		 31-AUG-14
   2 Tablet Computer	  MIKE		 31-AUG-14
   3 Netbook		  MIKE		 31-AUG-14
  21 Smart Phone	  MIKE		 31-AUG-14

Hmmm. Let’s take a closer look at the sequence…

select min_value, increment_by, cache_size, last_number
from user_sequences
where sequence_name = 'GAD_ID_SEQ'  
/

 MIN_VALUE INCREMENT_BY CACHE_SIZE LAST_NUMBER
---------- ------------ ---------- -----------
	 1	      1 	20	    41

Yes, it looks like, in 12c at least, the default for sequences is a cache size of 20.
If we wanted to create the sequence in the same way as for 11g ( i.e. with no caching), we’d need to do this :

create sequence gad_id_seq
    nocache
/

We can now see that the sequence values will not be cached :

PDB1@ORCL> select cache_size
  2  from user_sequences
  3  where sequence_name = 'GAD_ID_SEQ'
  4  /

CACHE_SIZE
----------
	 0

All of this is a bit of an aside however. The fact is that, as it stands, it’s still quite possible to by-pass the sequence altogether during an insert into the table.
So, we still need to have a trigger to enforce the use of the sequence, right ?
Well, funny you should say that….

Identity Column in 12c

Time for another version of our table. This time however, we’re dispensing with our sequence, as well as the trigger…

create table gadgets
(
    id number generated as identity 
		constraint gad_pk primary key,
    gadget_name varchar2(100) not null,
    created_by varchar2(30) default on null user not null,
    creation_date date default on null sysdate not null
)
/

Let’s see what happens when we try to insert into this table. Note that we’ve modified the insert statements from before, as the sequence no longer exists…

-- Specify all values
insert into gadgets( id, gadget_name, created_by, creation_date)
values( default, 'Dial-Up Modem', user, sysdate)
/


-- specify null values for the "default" columns
-- also, don't use the sequence for the id value
insert into gadgets( id, gadget_name, created_by, creation_date)
values(3, 'Netbook', null, null)
/

-- omit the "default" columns
insert into gadgets( id, gadget_name)
values( null, 'Tablet Computer')
/

The first statement succeeds with no problem. However, the second and third both fail with :

ORA-32795: cannot insert into a generated always identity column

We’ll come back to this in a bit.

In the meantime, if we check the table, we can see the ID column is automagically populated….

 select * from gadgets;

  ID GADGET_NAME	  CREATED_BY	 CREATION_
---- -------------------- -------------- ---------
   1 Dial-Up Modem	  MIKE		 31-AUG-14

Oh, it’s just like being on SQL Server.

How is this achieved ? Well, there are a couple of clues.
First of all, executing the create table statement for this particular version of the table requires that you have the additional privilege of CREATE SEQUENCE.
A further clue can be found by looking once again at USER_SEQUENCES…

select sequence_name, min_value, increment_by, cache_size, last_number
from user_sequences
/

SEQUENCE_NAME	      MIN_VALUE INCREMENT_BY CACHE_SIZE LAST_NUMBER
-------------------- ---------- ------------ ---------- -----------
ISEQ$$_95898		      1 	   1	     20 	 21

If we have a look at the column details for the table, we get confirmation that this sequence is used as the default value for the ID column :

  1  select data_default
  2  from user_tab_cols
  3  where table_name = 'GADGETS'
  4* and column_name = 'ID'
PDB1@ORCL> /

DATA_DEFAULT
--------------------------------------------------------------------------------
"MIKE"."ISEQ$$_95898".nextval


It’s worth noting that this sequence will hang around, even if you drop the table, until or unless you purge the table from the RECYCLEBIN.
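For example, a quick check along these lines shows the behaviour (the generated sequence name will obviously differ on your system):

drop table gadgets;

-- the system-generated identity sequence is still listed...
select sequence_name from user_sequences;

-- ...until the dropped table is purged from the recyclebin
purge recyclebin;

select sequence_name from user_sequences;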

If you prefer your sequences to be, well, sequential, the good news is that you can use the Sequence Creation syntax when specifying an identity column.
The change in the default number of values cached for sequences created in 12c, compared with 11g and previously, may lead you to consider being a bit more specific in how you create your sequence, just in case things change again in future releases.

Here we go then, the final version of our table creation script….

 create table gadgets
(
    id number generated always as identity
    (
        start with 1
        increment by 1
        nocache
        nocycle
    )
    constraint gad_pk primary key,
    gadget_name varchar2(100) not null,
    created_by varchar2(30) default on null user not null,
    creation_date date default on null sysdate not null
)
/

As we saw earlier, the INSERT statements for this table now need to change. We can either specify “DEFAULT” for the ID column:

insert into gadgets( id, gadget_name, created_by, creation_date)
values( default, 'Dial-Up Modem', user, sysdate)
/

…or simply omit it altogether…

insert into gadgets(gadget_name, created_by, creation_date)
values('Smart Phone', user, sysdate)
/

And, of course, we can also omit the values for the other defaulted columns should we choose….

insert into gadgets(gadget_name)
values('Netbook')
/

If we check the table after these statements, we can see that all is as expected :


select * from gadgets
/

  ID GADGET_NAME	  CREATED_BY	 CREATION_
---- -------------------- -------------- ---------
   1 Dial-Up Modem	  MIKE		 31-AUG-14
   2 Smart Phone	  MIKE		 31-AUG-14
   3 Netbook		  MIKE		 31-AUG-14

As with a “traditional” table/sequence/trigger setup, an erroneous INSERT will cause a gap in the sequence…

insert into gadgets( id, gadget_name, created_by, creation_date)
values( default, 'Psion Series 5', 'Aridiculouslylongusernamethatwontfitnomatterwhat', sysdate)
/

values( default, 'Psion Series 5', 'Aridiculouslylongusernamethatwontfitnomatterwhat', sysdate)
                                   *
ERROR at line 2:
ORA-12899: value too large for column "MIKE"."GADGETS"."CREATED_BY" (actual:
48, maximum: 30)

insert into gadgets( id, gadget_name, created_by, creation_date)
values( default, 'Psion Series 5', default, default)
/

select * from gadgets;

  ID GADGET_NAME	  CREATED_BY	 CREATION_
---- -------------------- -------------- ---------
   1 Dial-Up Modem	  MIKE		 31-AUG-14
   2 Smart Phone	  MIKE		 31-AUG-14
   3 Netbook		  MIKE		 31-AUG-14
   5 Psion Series 5	  MIKE		 31-AUG-14
Conclusion

While we can see that 12c hasn’t done away with sequences altogether, it is fair to say that they are now a lot more unobtrusive.
As for the good old DML trigger ? Well, they’ll still be with us, but they may well be a little lighter on the mundane default handling stuff we’ve been through in this post.


Filed under: Oracle, PL/SQL, SQL Tagged: column default value, create sequence, default always, default cache value of sequence, default on null, generated as identity, identity column, insert value into identity column, ORA-32795 : cannot insert into a generated always identity column

MySQL high availability management with ClusterControl

Yann Neuhaus - Sat, 2014-09-13 03:03

Installing and managing a highly available MySQL infrastructure can be really tedious. Solutions to facilitate the database and system administrators' tasks exist, but few of them cover the complete database lifecycle and address all the database infrastructure management requirements. Severalnines' product ClusterControl is probably the only solution that covers the full infrastructure lifecycle and is also able to provide the full set of functionalities required by database cluster architectures. In this article, I will show how to install, monitor and administer a database cluster with ClusterControl.


Introduction

Severalnines is a Swedish company mostly composed of ex-MySQL AB staff. Severalnines provides automation and management software for database clusters. Severalnines’ ClusterControl perfectly fits this objective by providing a full “deploy, manage, monitor, and scale” solution. ClusterControl supports several database cluster technologies such as Galera Cluster for MySQL, Percona XtraDB Cluster, MariaDB Galera Cluster, MySQL Cluster and MySQL Replication. However, ClusterControl does not only support MySQL-based clusters but also MongoDB clusters such as MongoDB Sharded Cluster, MongoDB Replica Set and TokuMX. In this article, we will use Percona XtraDB Cluster to demonstrate ClusterControl functionalities.

There are two different editions of ClusterControl: the community edition, which provides basic functionalities, and the enterprise edition, which provides a full set of features and really responsive support. All the details about the features of both editions can be found on the Severalnines website (http://www.severalnines.com/ClusterControl). In this article, we will detail four main global functionalities that are covered by ClusterControl:

 

1. The cluster deployment

2. The cluster management

3. The cluster monitoring

4. The scalability functionalities

 

The cluster architecture that we chose for the purpose of this article is represented in Figure 1. This cluster is composed of three Percona XtraDB nodes (green), two HAProxy nodes (red) and one ClusterControl node (blue).

 

clustercontrol001.png

Figure 1: Percona XtraDB Cluster architecture


1. Cluster Deployment

As stated in the introduction, ClusterControl can manage several kinds of MySQL clusters or MongoDB clusters. The cluster deployment starts on the Severalnines website at http://www.severalnines.com/configurator by choosing the kind of cluster we want to install. Once we have selected Percona XtraDB Cluster (Galera), we can select on which infrastructure we want to deploy the cluster. We can choose between on-premise, Amazon EC2 or Rackspace. Since we want to install this cluster on our own infrastructure, our choice here is “on-premise”.

Then we simply have to fill in the general settings forms by specifying parameters such as operating system, platform, number of cluster nodes, port numbers, OS user, MySQL password, system memory, database size, etc., as presented in Figure 2.

 

clustercontrolsetup.png

Figure 2: General Settings


Once the general settings forms are filled in, we have to specify the nodes that belong to the Percona XtraDB cluster as well as the storage details.

The first settings are related to the ClusterControl server, the ClusterControl address and memory. There are also the details regarding the Apache settings, since the web interface is based on an Apache web server:

 

clustercontrolsetup002.png

Figure 3: ClusterControl settings


Now you can fill in the parameters related to the Percona XtraDB data nodes.

 

clustercontrolsetup003.png

Figure 4: Percona XtraDB nodes settings


Once all settings are entered, a deployment package can be automatically generated through the “Generate Deployment Script” button. We simply have to execute it on the ClusterControl server in order to deploy the cluster. Of course, it is still possible to edit the configuration parameters by editing the my.cnf file located in s9s-galera-1.0.0-/mysql/config/my.cnf.

 

[root@ClusterControl severalnines]# tar xvzf s9s-galera-percona-2.8.0-rpm.tar.gz

[root@ClusterControl severalnines]# cd s9s-galera-percona-2.8.0-rpm/mysql/scripts/install/

[root@ClusterControl install]# bash ./deploy.sh 2>&1|tee cc.log

 

The deployment package will download and install Percona XtraDB Cluster on the database hosts, as well as the ClusterControl components to manage the cluster. When the installation is successfully finalized, we can access the ClusterControl web interface via http://ClusterControl

Once logged in to ClusterControl, we are able to view all database systems that are managed and monitored by ClusterControl. This means that you can have several different cluster installations, all managed from one ClusterControl web interface.

 

clustercontrolsetup004.png

Figure 5: ClusterControl Database Clusters


Now the Percona XtraDB cluster is deployed and provides data high availability by using three data nodes. We still have to implement service high availability and service scalability. In order to do that, we have to set up two HAProxy nodes in the frontend. Adding an HAProxy node with ClusterControl is a straightforward procedure: we use a one-page wizard to specify the nodes to be included in the load balancing set and the node that will act as the load balancer, as presented in Figure 6. A minimal sketch of the kind of HAProxy configuration this results in is shown after the figure.

 

clustercontrolsetup005.png

Figure 6 : Load balancer installation, using HAProxy
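For reference, the HAProxy configuration that such a wizard produces boils down to something like the sketch below: a TCP front-end balancing connections across the three Galera nodes, with an HTTP health check against a clustercheck-style script. The IP addresses, listening port and check port are placeholders; ClusterControl generates and maintains the real configuration for you.

listen mysql-galera
    bind *:3307
    mode tcp
    balance leastconn
    option tcpka
    option httpchk
    server db1 192.168.1.11:3306 check port 9200
    server db2 192.168.1.12:3306 check port 9200
    server db3 192.168.1.13:3306 check port 9200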


To avoid having a Single Point Of Failure (SPOF), it is strongly advised to add a second HAProxy node by following the same procedure as for adding the first HAProxy node. Then simply add a Virtual IP, using the “Install Keepalived” menu as presented in Figure 7.

 

clustercontrolsetup0x1.png 

 Figure 7: Virtual IP configuration using KeepAlived


2. Cluster Management 

ClusterControl offers a number of administration features such as: online backup scheduling, configuration management, database node failover and recovery, schema management, manual start/stop of nodes, process management, automated recovery, database user management, database upgrades/downgrades, adding and removing nodes online, cloning (for Galera clusters), configuration management (independently for each MySQL node) and comparing the status of different cluster nodes.

Unfortunately, presenting all these great management functionalities is not possible in the context of this article. Therefore, we will focus on backup scheduling and user, schema, and configuration management.

 

a. Backup Scheduling

As far as I can remember, MySQL backup has always been a hot topic. ClusterControl offers three backup possibilities for MySQL databases: mysqldump, Percona XtraBackup (full) and Percona XtraBackup (incremental). XtraBackup is a hot backup facility that does not lock the database during the backup. Scheduling backups and reviewing the backups already performed is really easy with ClusterControl. It is also possible to start a backup immediately from the backup scheduling interface. Figure 8 presents the backup scheduling screen.

 

clustercontrolsetup007.png

Figure 8: Backup scheduling screen (retouched image for the purpose of this article)

You do not have to make a purge script to remove old backups anymore: ClusterControl is able to purge the backups after the definition of the retention period (from 0 to 365 days).

Unfortunately the restore procedure has to be managed manually since ClusterControl does not provide any graphical interface to restore a backup.
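For completeness, a manual restore of an XtraBackup-based backup follows the standard Percona procedure, roughly as sketched below (the backup path is an example, and the exact steps differ for incremental backups):

# stop MySQL on the node to restore
service mysql stop

# prepare the backup (apply the transaction logs)
innobackupex --apply-log /path/to/backup/2014-09-15_10-00-00

# copy it back into an empty datadir and fix ownership
innobackupex --copy-back /path/to/backup/2014-09-15_10-00-00
chown -R mysql:mysql /var/lib/mysql

# restart the node; a Galera node will then resync with the cluster if needed
service mysql start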

 

b. User, schema, and configuration management 

We can manage the database schemas, upload dumpfiles, and manage user privileges through the ClusterControl web interface.
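
For user privileges, the web interface saves you from typing the ordinary MySQL statements; for reference, the manual equivalent would look roughly like this (user, host, and schema names are made up):

# Create an application user and grant it privileges on one schema (example names)
mysql -u root -p -e "CREATE USER 'appuser'@'10.0.0.%' IDENTIFIED BY 'ChangeMe123';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'10.0.0.%';"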

 

clustercontrolsetup008.png

Figure 9: MySQL user privileges management

 

You can also change the my.cnf configuration file, apply the configuration changes across the entire cluster, and orchestrate a rolling restart – if required. Every configuration change is version-controlled.
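
After such a rolling change, it can be reassuring to confirm that every node picked up the new value. A minimal sketch (hostnames and the variable are examples; any global variable can be checked the same way):

# Check one global variable on every data node (example hostnames)
for node in db1 db2 db3; do
  mysql -h "$node" -u root -p -e "SELECT @@hostname, @@max_connections;"
done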

 

clustercontrolsetup009.png

 Figure 10: MySQL Configuration management

 

New versions of the database software can be uploaded to ClusterControl, which then automates rolling software upgrades.

 

clustercontrolsetup010.png

Figure 11: Rolling upgrade through ClusterControl interface


A production cluster can easily be cloned, with a full copy of the production data, e.g. for testing purposes.

 

clustercontrolsetup011.png

Figure 12: Cloning Cluster screen


3. Cluster monitoring

ClusterControl is not only a tool to build a cluster from scratch and get a full set of cluster management functionalities. It is also a great monitoring tool that provides a number of graphs and indicators, such as the list of top queries (by execution time or occurrence), the running queries, the query histogram, CPU/disk/swap/RAM/network usage, table/database growth, a health check, and a schema analyzer (showing tables without primary keys or redundant indexes). Furthermore, ClusterControl can record up to 48 different MySQL counters (such as open tables, connected threads, aborted clients, etc.) and present all of them in charts, along with many other helpful things that a database administrator will surely appreciate.
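
The raw data behind several of these charts can also be pulled straight from MySQL, which is handy for cross-checking what ClusterControl displays. A couple of illustrative queries, run on any node, using only standard MySQL status variables and information_schema:

# A few of the counters that ClusterControl charts
mysql -u root -p -e "SHOW GLOBAL STATUS WHERE Variable_name IN
  ('Open_tables','Threads_connected','Aborted_clients');"

# Tables without a primary key, similar to what the schema analyzer reports
mysql -u root -p -e "SELECT t.table_schema, t.table_name
  FROM information_schema.tables t
  LEFT JOIN information_schema.table_constraints c
    ON  c.table_schema = t.table_schema
    AND c.table_name   = t.table_name
    AND c.constraint_type = 'PRIMARY KEY'
  WHERE c.table_name IS NULL
    AND t.table_type = 'BASE TABLE'
    AND t.table_schema NOT IN ('mysql','information_schema','performance_schema');"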

 

clustercontrolsetup012.png

Figure 13: Database performance graphics with time range and zoom functionalities (retouched image for the purpose of this article)


ClusterControl provides some interesting information regarding database growth for data and indexes. Figure 14 presents a chart showing the database growth over the last 26 days.

 

clustercontrolsetup013.png

Figure 14: Database growth over the last 26 days

 

ClusterControl can also send e-mail notifications when alerts are raised, and it even supports custom alert expressions. The database administrator can set up their own warning and critical thresholds for CPU, RAM, disk space, and MySQL memory usage. The following figure shows the resource usage for a given node.

 

clustercontrolsetup014.png

Figure 15: Resource usage for a Master node


Power users can set up custom KPIs, and get alerts in case of threshold breaches.

 

clustercontrolsetup015.png

 Figure 16: Custom KPIs definition

 

The Health Report consists of a number of performance advisors that automatically examine the configuration and performance of the database servers, and alert in case of deviations from best-practice rules.

 

clustercontrolsetup0xx.png

Figure 17: Health report with performance advisors

 

4. Scalability functionalities

Sooner or later it will be necessary to add a data node or an HAProxy node to the cluster, or to remove one, for scalability or maintenance reasons. With ClusterControl, adding a new node is as easy as selecting the new host and giving it the role we want in the cluster. ClusterControl automatically installs the packages needed on the new node and applies the appropriate configuration to integrate it into the cluster. Of course, removing a node is just as easy.
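
Once the new node has been added (or one has been removed), the Galera membership can be confirmed from any data node. A small sketch using standard wsrep status variables (the expected cluster size obviously depends on your topology):

# Confirm the cluster size and the local node state on any data node
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';"   # should report 'Synced'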

 

clustercontrolsetup017.png

 Figure 18: New node addition and "add master" screens

 

Conclusion

With ClusterControl, Severalnines did a great job! For those who have ever tried to build and administer a highly available MySQL architecture using disparate clustering components such as Heartbeat, DRBD (Distributed Replicated Block Device), MySQL replication, or any other high availability component, I am sure you often wished for a solution that provides a complete package. Deploying multiple clustering technologies can become a nightmare. Of course there are solutions such as MMM (Multi-Master Replication Manager for MySQL), but there is no other solution covering the whole cluster lifecycle and offering such an impressive set of features via a nice web interface.

In addition to the great set of functionalities provided by ClusterControl, there is the Severalnines support: their support team is amazingly efficient and responsive. The response time stated on the Severalnines website is one day, but I have never waited more than one hour for a first answer.

As stated in the introduction, there are two editions: the community edition, with a limited set of functionalities, is free, whereas the enterprise edition is available under a commercial license and support subscription agreement. This subscription includes the ClusterControl software, upgrades, and 12 incidents per year. It is also worth noting that Severalnines and Percona became partners this year.

 

The summary of my ClusterControl experience is presented in the table below:

 

Advantages

+ Covers the whole cluster lifecycle, from installation and upgrades to the management and monitoring phases

+ Much easier to use than many other tools that do not even provide half of the ClusterControl functionalities

+ Each operation is submitted as a new job, so all operations are logged

+ Amazingly responsive support!

Drawbacks / limitations

- Does not provide backup restore functionality

- It is not possible to acknowledge alerts or to black out targets

 

Additional information can be found at http://www.severalnines.com/blog. Since dbi services is a Severalnines partner and has installed this solution at several customer sites, feel free to contact us if you have any additional questions regarding ClusterControl.

My APEX Talk at Orbit in Bonn

Denes Kubicek - Sat, 2014-09-13 01:05
On November 5, 2014, I will give a talk on Rapid Application Development at Orbit in Bonn. My friend and colleague Tobias Arnhold will also be there; he will speak about plugins and reporting. Afterwards, Frank Weyher (also a good colleague and friend :)) will show how APEX can be integrated with Office.

The topics are certainly interesting, and registering is definitely worthwhile. You can find the link to the event here.

Categories: Development

Change unknown SYSMAN password on #EM12c

DBASolved - Fri, 2014-09-12 17:52

When I start work on a new EM 12c environment, I normally request that a userid be created for me; however, I don’t have a userid in this environment, and I need to access EM 12c as SYSMAN. Without knowing the password for SYSMAN, how can I access the EM 12c interface?  The short answer is that I can change the SYSMAN password from the OS where EM 12c is running.

Note:
Before changing the SYSMAN password for EM 12c, make sure to understand the following:

  1. SYSMAN is used by the OMS (Oracle Management Service) to log in to the OMR (Oracle Management Repository) to store and query all activity
  2. The SYSMAN password has to be changed at both the OMS and the OMR for EM 12c to work correctly
  3. Do not modify SYSMAN or any other repository user directly at the OMR level (this is not recommended)

The steps to change an unknown SYSMAN password are as follows:

Tip: Make sure you know the SYS password for the OMR. It will be needed to reset SYSMAN.

1. Stop all OMS processes

cd <oms home>/bin
emctl stop oms 

Image 1:
sysman_pwd_stop_oms.png

2. Change the SYSMAN password

cd <oms home>/bin
emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd <sys password> -new_pwd <new sysman password>

In Image 2, notice that I didn’t pass the password for SYS or SYSMAN on the command line. If you don’t provide a password on the command line, EMCTL will prompt you for it.

Image 2:
sysman_pwd_change_pwd.png

3. Stop the Admin Server on the primary OMS and restart OMS

cd <oms home>/bin
emctl stop oms -all
emctl start oms

Image 3:
sysman_pwd_start_oms.png

4. Verify that all of OMS is up and running

cd <oms home>/bin
emctl status oms -details

Image 4:

sysman_pwd_oms_status.png

After verifying that the OMS is back up, I can now try to log in to the OMS interface.

Image 5:
sysman_pwd_oem_access.png

As we can see, I’m able to access OEM as SYSMAN now with the new SYSMAN password.
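
Besides the console login, the new password can also be checked from the command line with EM CLI, if it is installed in the OMS home. A quick sketch (emcli will prompt for the new SYSMAN password):

cd <oms home>/bin
./emcli login -username=sysman
./emcli sync
./emcli logout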

Enjoy!!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

Next Generation Outline Extractor Webcast - Oct 8th!

Tim Tow - Fri, 2014-09-12 16:07
I am doing a repeat of my Kscope14 'Best Speaker' award-winning presentation as part of the ODTUG webcast series. Here is the official announcement from ODTUG:
Wednesday, October 8, 2014 12:00 PM - 1:00 PM EDT

Next Generation Essbase Outline Extractor Tips and Tricks
Tim Tow, Applied OLAP
The Next Generation Outline Extractor is the follow-up to the classic OlapUnderground Essbase Outline Extractor used by thousands of Essbase customers. This session, which was the highest-rated session at Kscope14 in Seattle, explores some of the new capabilities of the Next Generation Outline Extractor, including command-line operation and exporting outlines directly to a relational database. Attend this session to learn how to leverage this free utility in your company. Make sure you sign up to join me on October 8th; you can register here!
Categories: BI & Warehousing

Best of OTN - Week of September 7th

OTN TechBlog - Fri, 2014-09-12 13:43
Database Community - Laura Ramsey, OTN Database Community Manager

Here are a few roundups of sessions that will be at Oracle OpenWorld in just a few weeks!

Database Developer Hands-On-Labs
Oracle Database Service sessions, from good friend Chris Warticki - @cwarticki

OTN DBA/DEV Watercooler Blog- What do you know about the Oracle RAC Stack? It's a compelling approach to Virtualization. Read on to see what you need to know.

Java Community - Tori Wieldt, OTN Java Community Manager

LAST CHANCE: Use code DDD4 to save $75 off a JavaOne Discover Pass.

JCP Activities at JavaOne 2014 - Read this blog to learn what's happening with the JCP program, JCP.Next, Adopt-a-JSR, meet the 12th Annual JCP Award nominees and potential new JCP Executive Committee (EC) members.

Driverless Cars and Java - Paul Perrone is a regular fixture at JavaOne, and this year's conference is no exception. He will give a session called "Automated Vehicle Testing with Java." Read the full oracle.com story, "Java Takes the Wheel."

Friday Funny: “What's the object-oriented way to become wealthy?”
A: Inheritance
Systems Community -

SPARC M7 - A good read about “the forthcoming SPARC M7, the biggest and baddest SPARC processor that either Sun Microsystems or Oracle has ever created.”

Interesting Q&A on the OTN Solaris Community:
"I've imported a service, using inetconv, into the smf repository with a typo. The service is now MASKED so I can't remove it using svcadm delcust. Can a service be un-MASKED ? - Turns out the behavior of "svccfg import" and "svccfg delete fmri" has changed in Oracle Solaris 11. Go to the conversation to learn more.

Oracle's SPARC T5-4 Server Processor Upgrade Animation  - This animation shows you how to upgrade a SPARC T5-4 server from a one-processor to a two-processor configuration and much more.

Get involved in community conversations on the following OTN channels...



OOW - Focus On Support and Services for Enterprise Manager

Chris Warticki - Fri, 2014-09-12 08:00
Focus On Support and Services for Enterprise Manager

Monday, Sep 29, 2014 - Conference Sessions

Best Practices for Maintaining and Supporting Oracle Database
Balaji Bashyam, Vice President, Oracle
Roderick Manalac, Consulting Tech Advisor, Oracle
11:45 AM - 12:30 PM, Moscone South - 310, CON8270

Best Practices for Maintaining and Supporting Oracle Enterprise Manager
Farouk Abushaban, Senior Principal Technical Analyst, Oracle
2:45 PM - 3:30 PM, Intercontinental - Grand Ballroom C, CON8567

Oracle WebLogic Server: Best Practices for Troubleshooting Performance Issues
Laurent Goldsztejn, Principal Engineer, Proactive Support, Oracle
4:00 PM - 4:45 PM, Moscone South - 270, CON8307

Oracle Exadata: Maintenance and Support Best Practices
Christian Trieb, CDO, Paragon Data GmbH
Jaime Figueroa, Senior Principal Technical Support Engineer, Oracle
Bennett Fleisher, Customer Support Director, Oracle
4:00 PM - 4:45 PM, Moscone South - 310, CON8259

Tuesday, Sep 30, 2014 - Conference Sessions

Best Practices for Maintaining Oracle Fusion Middleware
Ken Vincent, Senior Principal Technical Support Engineer, Oracle
10:45 AM - 11:30 AM, Moscone West - 3022, CON8285

Oracle Database 12c Upgrade: Tools and Best Practices from Oracle Support
Agrim Pandit, Principal Software Engineer, Oracle
5:00 PM - 5:45 PM, Moscone South - 310, CON8236

Wednesday, Oct 01, 2014 - Conference Sessions

Taming the Wild West with Oracle Database Options
Mike Brotherton, JP Morgan Chase
Ashok Pandya, Consulting Solutions Director, Oracle
4:45 PM - 5:30 PM, Intercontinental - Union Square, CON3910

Proactive Support Best Practices: Oracle E-Business Suite Payables and Payments
Stephen Horgan, Senior Principal Technical Support Engineer, Oracle
Andrew Lumpe, Senior Principal Support Engineer, Oracle
2:00 PM - 2:45 PM, Moscone West - 3006, CON8479

Thursday, Oct 02, 2014 - Conference Sessions

Real-World Oracle Maximum Availability Architecture with Oracle Engineered Systems
Bill Callahan, Director, Products and Technology, CCC Information Services
Jim Mckinstry, Consulting Practice Director, Oracle
9:30 AM - 10:15 AM, Intercontinental - Grand Ballroom B, CON2335

Oracle E-Business Suite Architecture Best Practices: Tips from CBS
John Basone, CBS
Greg Jerry, Director - Oracle Enterprise Architecture, Oracle
12:00 PM - 12:45 PM, Marriott Marquis - Salon 4/5/6*, CON3829

Best Practices for Maintaining Your Oracle RAC Cluster
William Burton, Consulting Member of Technical Staff, Oracle
Scott Jesse, Customer Support Director, RAC, Storage & RAC Assurance, Oracle
Bryan Vongray, Senior Principal Technical Support Engineer, Oracle
12:00 PM - 12:45 PM, Moscone South - 310, CON8252

Optimizing Oracle Exadata with Oracle Support Services: A Client View from KPN
Eric Zonneveld, Ing., KPN NV
Jan Dijken, Principal Advanced Support Engineer, Oracle
1:15 PM - 2:00 PM, Moscone South - 305, CON7054

My Oracle Support Monday Mix

Monday, Sep 29: 6:00 PM - 8:00 PM, ThirstyBear Brewing Company
Join us for a fun and relaxing happy hour at the annual My Oracle Support Monday Mix. This year’s gathering is Monday, September 29 from 6:00 to 8:00 p.m. at the ThirstyBear Brewing Company, just a 3-minute walk from Moscone Center. Admission is free for Premier Support customers with your Oracle OpenWorld badge. Visit our web site for more details: http://www.oracle.com/goto/mondaymix

Oracle Support Stars Bar & Mini Briefing Center

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.

Monday, Sep 29: 9:45 AM - 6:00 PM, Moscone West Exhibition Hall, 3461 and 3908
Tuesday, Sep 30: 9:45 AM - 6:00 PM, Moscone West Exhibition Hall, 3461 and 3908
Wednesday, Oct 01: 9:45 AM - 3:45 PM, Moscone West Exhibition Hall, 3461 and 3908

To secure a seat in a session, please use Schedule Builder to add to your Schedule.

Watch: 5 Best Practices for Launching Your Online Video Game

Pythian Group - Fri, 2014-09-12 07:24

Warner Chaves, Principal Consultant at Pythian, has had the privilege of working with several companies on their video game launches, and is best known for his work with the highly anticipated release of an action-adventure video game back in 2013. Through his experience, he’s developed a set of best practices for launching an online video game.

“You don’t want to have angry gamers on the launch of the game because they lost progress in the game,” he says. “Usually at launch, you will have really high peaks of volume, and there might be some pieces of the infrastructure that are not as prepared for that kind of load. There also might be some parts of the game that are actually more popular than what you expected.”

Watch his latest video below, 5 Best Practices for Launching Your Online Video Game.

Categories: DBA Blogs

Log Buffer #388, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-09-12 07:22

In order to expand knowledge about database features of any kind, blogs are indispensable these days. Whether it's Oracle, MySQL, or SQL Server, blog writers are contributing like never before, and this Log Buffer edition skims some of it.

Oracle:

The Oracle Utilities family of products uses Oracle standard technology such as Oracle Database and Oracle Fusion Middleware (a.k.a. Oracle WebLogic).

OBIEE SampleApp in The Cloud: Importing VirtualBox Machines to AWS EC2.

The default value for the INMEMORY_MAX_POPULATE_SERVERS parameter is derived from the PGA_AGGREGATE_LIMIT parameter.

Most customers of Oracle Enterprise Manager using JVM Diagnostics use the tool to monitor their Java Applications servers like Weblogic, Websphere, Tomcat, etc.

Taking Enterprise File Exchange to the Next Level with Oracle Managed File Transfer 12c.

SQL Server:

The concept of a synonym was introduced in SQL Server 2005. Synonyms are very simple database objects, but have the potential to save a lot of time and work if implemented with a little bit of thought.

This article summarizes the factors to consider and provide an overview of various options for HA and DR in cloud based SQL Server deployments.

Chris Date is famous for his writings on relational theory. Chris took on the role of communicating and teaching Codd’s relational theory, and reluctantly admits to a role in establishing SQL as the dominant relational language.

An introduction to designing a star schema dimensional model, aimed at new BI developers.

Have you ever wondered why the transaction log file grows bigger and bigger? What caused it to happen? How do you control it? How does the recovery model of a database control the growing size of the transaction log? Read on to learn the answers.

MySQL:

A common migration path from standalone MySQL/Percona Server to a Percona XtraDB Cluster (PXC) environment involves some measure of time where one node in the new cluster has been configured as a slave of the production master that the cluster is slated to replace.

How to shrink the ibdata file by transporting tables with Trite.

OpenStack users shed light on Percona XtraDB Cluster deadlock issues.

There are a lot of tools that generate test data. Many of them have complex XML scripts or GUI interfaces that let you specify characteristics of the data. For testing query performance and many other applications, however, a simple quick-and-dirty data generator that can be constructed at the MySQL command line is useful (see the sketch after this list).

How to calculate the correct size of Percona XtraDB Cluster’s gcache.
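
As a small illustration of that quick-and-dirty approach (the schema, table name, and row count below are made up), a cross join against information_schema can produce a usable amount of random rows in one statement:

# Generate 100,000 rows of random test data (example names and sizes)
mysql -u root -p -e "
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.random_rows (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  payload VARCHAR(32) NOT NULL
);
INSERT INTO test.random_rows (payload)
SELECT MD5(RAND())
FROM information_schema.columns a
JOIN information_schema.columns b
LIMIT 100000;"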

Categories: DBA Blogs