Feed aggregator

Slides and demo script from my ORDS talk at APEX Connect 2016 in Berlin

Dietmar Aust - Thu, 2016-04-28 11:25
Hi everybody,

I just came back from the DOAG APEX Connect 2016 conference in Berlin ... very nice location, great content and the wonderful APEX community to hang out with ... always a pleasure. This time we felt a little bit pink ;)

As promised, you can download the slides and the demo script (as is) from my site.

Instructions are included.

See you at KScope in Chicago, #letswreckthistogether.

Cheers and enjoy!
~Dietmar. 

Log Buffer #471: A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2016-04-28 09:14

This Log Buffer Edition covers Oracle, SQL Server and MySQL blog posts of the week.

Oracle:

Improving PL/SQL performance in APEX

A utility to extract and present PeopleSoft Configuration and Performance Data

No, Oracle security vulnerabilities didn’t just get a whole lot worse this quarter.  Instead, Oracle updated the scoring metric used in the Critical Patch Updates (CPU) from CVSS v2 to CVSS v3.0 for the April 2016 CPU.  The Common Vulnerability Score System (CVSS) is a generally accepted method for scoring and rating security vulnerabilities.  CVSS is used by Oracle, Microsoft, Cisco, and other major software vendors.

Oracle Cloud – DBaaS instance down for no apparent reason

Using guaranteed restore points to navigate through time

SQL Server:

ANSI SQL with Analytic Functions on Snowflake DB

Exporting Azure Data Factory (ADF) into TFS Source Control

Getting started with Azure SQL Data Warehouse

Performance Surprises and Assumptions : DATEADD()

With the new security policy feature in SQL Server 2016 you can restrict write operations at the row level by defining a block predicate.

MySQL:

How to rename MySQL DB name by moving tables

MySQL 5.7 Introduces a JSON Data Type

Ubuntu 16.04 first stable distro with MySQL 5.7

MariaDB AWS Key Management Service (KMS) Encryption Plugin

MySQL Document Store versus Bug hunter

Categories: DBA Blogs

No Filters: My ASU/GSV Conference Panel on Personalized Learning

Michael Feldstein - Thu, 2016-04-28 07:57

By Michael Feldstein

ASU’s Lou Pugliese was kind enough to invite me to participate on a panel discussion on “Next-Generation Digital Platforms,” which was really about a soup of adaptive learning, CBE, and other stuff that the industry likes to lump under the heading “personalized learning” these days. One of the reasons the panel was interesting was that we had some smart people on the stage who were often talking past each other a little bit because the industry wants to talk about the things that it can do something about—features and algorithms and product design—rather than the really hard and important parts that it has little influence over—teaching practices and culture and other messy human stuff. I did see a number of signs at the conference (and on the panel) that ed tech businesses and investors are slowly getting smarter about understanding their respective roles and opportunities. But this particular topic threw the panel right into the briar patch. It’s hard to understand a problem space when you’re focusing on the wrong problems. I mean no disrespect to the panelists or to Lou; this is just a tough nut to crack.

I admit, I have few filters under the best of circumstances and none left at all by the second afternoon of an ASU/GSV conference. I was probably a little disruptive, but I prefer to think of it as disruptive innovation.

Here’s the video of the panel:

The post No Filters: My ASU/GSV Conference Panel on Personalized Learning appeared first on e-Literate.

Standard SQL ? – Oracle REGEXP_LIKE

The Anti-Kyte - Thu, 2016-04-28 05:55

Is there any such thing as ANSI Standard SQL ?
Lots of databases claim to conform to this standard. Recent experience tends to make me wonder whether it’s more just a basis for negotiation.
This view is partly the result of having to juggle SQL between three different SQL parsers in the Cloudera Hadoop infrastructure, each with their own “quirks”.
It’s worth remembering however, that SQL differs across established Relational Databases as well, as a recent question from Simon (Teradata virtuoso and Luton Town Season Ticket Holder) demonstrates :

Is there an Oracle equivalent of the Teradata LIKE ANY operator when you want to match against a list of patterns, for example :

like any ('%a%', '%b%')

In other words, can you do a string comparison, including wildcards, within a single predicate in Oracle SQL ?

The short answer is yes, but the syntax is a bit different….

The test table

We’ve already established that we’re not comparing apples with apples, but I’m on a bit of a health kick at the moment, so…

create table fruits as
    select 'apple' as fruit from dual
    union all
    select 'banana' from dual
    union all
    select 'orange' from dual
    union all
    select 'lemon' from dual
/
The multiple predicate approach

Traditionally the search statement would look something like :

select fruit
from fruits
where fruit like '%a%'
or fruit like '%b%'
/

FRUIT 
------
apple 
banana
orange

REGEXP_LIKE

Using REGEXP_LIKE takes a bit less typing and – unusually for a regular expression – fewer non-alphanumeric characters …

select fruit
from fruits
where regexp_like(fruit, '(a)|(b)')
/

FRUIT 
------
apple 
banana
orange

We can also search for multiple substrings in the same way :

select fruit
from fruits
where regexp_like(fruit, '(an)|(on)')
/

FRUIT 
------
banana
orange
lemon 

I know, it doesn’t feel like a proper regular expression unless we’re using the top row of the keyboard.

Alright then, if we just want to get records that start with ‘a’ or ‘b’ :

select fruit
from fruits
where regexp_like(fruit, '(^a)|(^b)')
/

FRUIT 
------
apple 
banana

If instead, we want to match the end of the string…

select fruit
from fruits
where regexp_like(fruit, '(ge$)|(on$)')
/

FRUIT
------
orange
lemon

…and if you want to combine searching for patterns at the start, end or anywhere in a string, in this case searching for records that

  • start with ‘o’
  • or contain the string ‘ana’
  • or end with the string ‘on’

select fruit
from fruits
where regexp_like(fruit, '(^o)|(ana)|(on$)')
/

FRUIT
------
banana
orange
lemon

Finally on this whistle-stop tour of REGEXP_LIKE, for a case insensitive search…

select fruit
from fruits
where regexp_like(fruit, '(^O)|(ANA)|(ON$)', 'i')
/

FRUIT
------
banana
orange
lemon

There’s quite a bit more to regular expressions in Oracle SQL.
For a start, here’s an example of using REGEXP_LIKE to validate a UK Post Code.
There’s also a comprehensive guide here on the PSOUG site.
Now I’ve gone through all that fruit I feel healthy enough for a quick jog… to the nearest pub.
I wonder if that piece of lime they put in the top of a bottle of beer counts as one of my five a day ?


Filed under: Oracle, SQL Tagged: regexp_like

Contemplating Upgrading to OBIEE 12c?

Rittman Mead Consulting - Thu, 2016-04-28 04:00
Where You Are Now

OBIEE 12c has been out for some time, and it seems like most folks are delaying upgrading to OBIEE 12c until the very last minute. Or at least until Oracle decides to put out another major version change of OBIEE, which is understandable. You’ve already spent time and money and devoted hundreds of resource hours to system monitoring, maintenance, testing, and development. Maybe you’ve invested in staff training to try to maximize your ROI in your existing OBIEE purchase. And now, after all this time and effort, you and your team have finally gotten things just right. Your BI engine is humming along, user adoption and stickiness are up, and you don’t have a lot of dead objects clogging up the Web Catalog. Your report hacks and work-arounds have been worked and reworked to become sustainable and maintainable business solutions. Everyone is getting what they want.

Sure, this scenario is part fantasy, but it doesn’t mean that as a BI team lead or member, you’re not always working toward this end. It would be nice to think that the people designing the tools with which we do this work understood the daily challenges and processes we must undergo in order to maintain the precarious homeostasis of our BI ecosystems. That’s where Rittman Mead comes in. If you’re considering upgrading to OBIEE 12c, or are even curious, keep reading. We’re here to help.

So Why Upgrade

Let’s get right down to it. Shoot over here and here to check out what our very own Mark Rittman had to say about the good, the bad, and the ugly of 12c. Our Silvia Rauton did a piece on lots of the nuts and bolts of 12c’s new front-end features. They’re all worth a read. Upgrading to OBIEE 12c offers many exciting new features that shouldn’t be ignored.

Heat Map

How Rittman Mead Can Help

We understand what it is to be presented with so many project challenges. Do you really want to risk the potential perils and pitfalls presented by upgrading to OBIEE 12c? We work both harder and smarter to make this stuff look good. And we get the most out of strategy and delivery via a number of in-house tools designed to keep your OBIEE deployment in tip top shape.

Maybe you want to make sure all your Catalog and RPD content gets ported over without issue? Instead of spending hours on testing every dashboard, report, and other catalog content post-migration, we’ve got the Automated Regression Testing package in our tool belt. We deploy this series of proprietary scripts and dashboards to ensure that everything will work just the way it was, if not better, from one version to the next.

Maybe you’d like to make sure your system will fire on all cylinders or you’d like to proactively monitor your OBIEE implementation. For that we’ve got the Performance Analytics Dashboards, built on the open source ELK stack to give you live, active monitoring of critical BI system stats and the underlying database and OS.

OBIEE Performance Analytics

On top of these tools, we’ve got the strategies and processes in place to not only guarantee the success of your upgrade, but to ensure that you and your team remain active and involved in the process.

What to Expect

You might be wondering what kinds of issues you can expect to experience during upgrading to OBIEE 12c (which is to say, nothing’s going to break, right?). Are you going to have to go through a big training curve? Does upgrading to OBIEE 12c mean you’re going to experience considerable resource downtime as your team, or an even an outside company, manages this process? To answer this question, I’m reminded of a quote from the movie Fight Club: “Choose your level of involvement.”

While we always prefer to work alongside your BI or IT team to facilitate the upgrade process, we also know that resource time is valuable and that your crew can’t stop what they’re doing until things wrap up. We often find that the more clients are engaged with the process, however, the easier the hand-off is because clients better understand best practices, and IT and BI teams are more empowered for the future.

Learning More about OBIEE 12c

But if you’re like many organizations, maybe you have to stay more hands off and get training after the upgrade is complete. Check out the link here to look over the agenda of our OBIEE 12c Bootcamp training course. Like our hugely popular 11g course, this program is five days of back-to-front instruction taught via a selection of seminars and hands-on labs, designed to impart most everything your team will need to know to continue or begin their successful BI practice.

What we often find is that, in addition to being a thorough and informative course, the Bootcamp is a great way to bring together teams or team members, often dispersed among different offices, under one roof to gain common understanding about how each person plays an important role as a member of the BI process. Whether they handle the ETL, data modeling, or report development, everyone can benefit from what often evolves from a training session into some impromptu team building.

Feel Empowered

If you’re still on the fence about whether or not to upgrade, as I said before, you’re not alone. There are lots of things you need to consider, and rightfully so. You might be thinking, “What does this mean for extra work on the plates of my resources? How can I ensure the success of my project? Is it worth it to do it now, or should I wait for the next release?” Whatever you may be mulling over, we’ve been there, know how to answer the questions, and have some neat tools in our utility belt to move the process along. In the end, I hope to have presented you with some bits to aid you in making a decision about upgrading to OBIEE 12c, or at least the impetus to start thinking about it.

If you’d like any more information or just want to talk more about the ins and outs of what an upgrade might entail, send over an email or give us a call.

The post Contemplating Upgrading to OBIEE 12c? appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Any Questions

Jonathan Lewis - Thu, 2016-04-28 02:56

I’m going to be at the OUG Scotland conference on 22nd June, and one of my sessions is a panel session on Optimisation where I’ll be joined by Joze Senegacnik and Carl Dudley.

The panel is NOT restricted to questions about how the cost based optimizer works (or not), we’re prepared to tackle any questions about making Oracle work faster (or more efficiently – which is not always the same thing). This might be configuration, indexing, other infrastructure etc.; and if we haven’t got a clue we can always ask the audience.

To set the ball rolling on the day it would be nice to have a few questions in advance, preferably from the audience but any real-world problems will be welcome and (probably) relevant to the audience. If you have a question that you think suitable please email it to me or add it as a comment below. Ideally a question will be fairly short and be relevant to many people; if you have to spend a long time setting the scene and supplying lots of specific detail then it’s probably a question that an audience (and the panel) would not be able to follow closely enough to give relevant help.

Update 29th April

I’ve already had a couple of questions in the comments and a couple by email – but keep them coming.

Server Problems : Update

Tim Hall - Thu, 2016-04-28 01:08

This is a follow-on from my server problems post from yesterday…

Regarding the general issue, misiaq came up with a great suggestion, which was to use watchdog. It’s not going to “fix” anything, but if I get a reboot when the general issue happens, that would be much better than having the server sit idle for 5 hours until I wake up.

Archive Storage Services vs Archive Cloud Services

Pat Shuff - Thu, 2016-04-28 01:07
Yesterday we started talking about the cost comparison for storage in the cloud. We briefly touched on the cost of long term archive in the cloud. How much does it cost to back up data for long term archive and what is the best way to do this? Years ago the default way of doing this was to copy your data on disk to a tape unit and put the tape in a box. The box was then put in an environmentally controlled room to extend the lifetime of tape and a person was put on staff to pull the data off the shelf when the data was needed. The data might be a backup of data on disk or a secondary copy just in case the disk failed. Tape was typically used to provide separation of duties required by Sarbanes-Oxley to keep people who report on financial data separate from the financial data. It also allowed companies to take large volumes of data, like seismic data, and not keep it on spinning disks. The traces were reloaded when geophysicists wanted to look at the data.

The first innovation in this technology was to provide a robot to load and unload tapes as a tape unit gets full or needs to be reloaded. Magazines were created that could hold eight tapes, and the robots had bar code readers so that they could seek to the right tape in the magazine, pull it out of the series of tapes, and insert it into the tape unit for reading or writing. Management software got more advanced and understood the bar code values and could sequence the whopping 800 GB of data that could be written to an LTO-4 tape. Again, technology gets updated and the industry moved to LTO-5 and LTO-6 tapes with significantly higher densities. A single LTO-6 tape could hold 2.5 TB. Technology marches on and compression allows us to store 6 TB on these tapes. If we go back to our 120 TB case that we talked about yesterday, this means that we will need 20 tapes (at $30-$45 for each tape) and $25K for a single tape drive unit. Most tape drive systems support 8 tapes per magazine, so we are talking about something that will support three magazines. To support three magazines, we need a second shelf in our tape storage, so the price goes up by about $20K. We are sitting at about $55K to back up our 120 TB and $5.5K in support annually for the hardware. We also need about $1K in tapes for the number of full and incremental backups that we want, which works out to $20K for four months of retention before we recycle the tapes. These tapes are good for a dozen re-writes, so every three years we will need to repurchase tapes. If we spread the cost of the tape unit, tape drives, and tapes across three years we are looking at $2K/month to back up our 120 TB. We also need to factor in $60/week for tape pickup and storage fees at a service like Iron Mountain, plus a couple of $250 charges to drive tapes back to our data center from cold storage in the event of a catastrophic failure. This bumps the cost to $2.2K/month, which is significantly cheaper than the $10K/month for network storage in our data center or $3.6K/month for cloud storage services. Unfortunately, a tape unit requires someone to care for and feed it, and you will pay that person more than $600/month, though not the $7.8K/month that you would with the cloud or disk solutions.
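
Pulling those figures together, a quick back-of-envelope check (a sketch only, using the estimated prices above):

# on-site tape: hardware and tape purchases amortized over 36 months
echo "(55000 + 20000) / 36" | bc      # drive, shelf and tapes: ~2083, i.e. ~$2K/month
echo "2083 + (60 * 52) / 12" | bc     # plus weekly offsite pickup: ~2343, i.e. ~$2.2K/month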

If you had a ton of data to archive you could purchase a tape silo that supported hundreds or thousands of magazines. Unfortunately, this expandability comes at a cost. The tape backup unit grows from an eighth of a rack to twenty full racks. There isn't much in between: you can get an eighth of a rack solution, a full rack solution, or a twenty full rack solution. The larger solution comes in at hundreds of thousands of dollars rather than tens of thousands.

Enter cloud solutions. Amazon and Oracle offer tape solutions in the cloud. Both companies run the twenty full rack solution but charge consumers only for the capacity they use. Amazon Glacier charges $7/TB/month to store data. Oracle charges $1/TB/month for the same service. Both companies charge for data restoration and outbound transfer of data. The Amazon Glacier cost of writing 120 TB and reading back 10% of it comes in at $2218/month. This is the same cost as having the tape unit on site. The key difference is that we can recover the data by requesting it from Amazon and get it back in less than four hours. There are no emergency recovery charges and no weekly pickup charges. We can expand the amount that we back up, and the bulk of this cost is reading the data back ($1300). Storage is relatively cheap for our backups; we just need to plan on the cost of recovery and try to limit it, since it is the bulk of the cost.

We can drop this cost even more using the Oracle Archive Cloud Services. The price from Oracle is $1/TB/month, but the recovery and transmission charges are about the same. The same archive service with Oracle is $1560/month, with roughly $1300 of that being the charges for restoring and outbound transfer of the data. Unfortunately, Oracle does not offer an un-metered archive service, so we have to guesstimate how much we are going to restore on a monthly basis.
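
Side by side, the two archive options work out roughly as follows (a sketch; the ~$1300 restore-and-transfer figure is the estimate quoted above):

TB=120; RESTORE=1300                       # archived volume, approx. restore/transfer cost for 12 TB
echo "Glacier: $(( TB * 7 + RESTORE ))"    # ~2140/month; the AWS calculator shows $2218
echo "Oracle : $(( TB * 1 + RESTORE ))"    # ~1420/month; the figure quoted above is $1560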

Both services use REST APIs to write, restore, and read data. When a container (Oracle Archive) or bucket (Amazon Glacier) is created, a PUT call is made to the endpoint of the service. The first step required by both is authentication, to provide credentials to the service. Below we show the Oracle authentication and creation process through the REST API.
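
The original post showed this as screenshots; a minimal curl sketch of the same flow looks roughly like this (the identity domain, user, and container names are placeholders):

# authenticate: the response carries X-Auth-Token and X-Storage-Url headers
curl -v -X GET \
     -H "X-Storage-User: Storage-myDomain:myuser@example.com" \
     -H "X-Storage-Pass: myPassword" \
     https://myDomain.storage.oraclecloud.com/auth/v1.0

# create an archive container with a PUT, passing the token returned above
curl -v -X PUT \
     -H "X-Auth-Token: AUTH_tkabc123" \
     -H "X-Storage-Class: Archive" \
     https://myDomain.storage.oraclecloud.com/v1/Storage-myDomain/myArchive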

The important part of this is the archive header (X-Storage-Class: Archive). This determines whether the container is spinning disk or tape in the cloud.

Amazon recommends using a Windows-based tool like S3 Browser or CloudBerry, or using a language like Java, .NET, or Ruby with their published SDKs. CloudBerry works for the Oracle Archive as well. When you create a container you have the option of pulling down storage or archive as the container type.

Both services allow you to encrypt and compress the data as it is written, with HTTP headers changing the characteristics and parameters of the container. Both services require you to issue a PUT request to write the data to tape. Below we show the Oracle REST API.
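
With the same placeholder names as before, the upload is a single PUT of the object into the archive container (curl's -T option performs the PUT):

# write (archive) a local file into the container created earlier
curl -v \
     -H "X-Auth-Token: AUTH_tkabc123" \
     -T backup_20160428.tar.gz \
     https://myDomain.storage.oraclecloud.com/v1/Storage-myDomain/myArchive/backup_20160428.tar.gz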

For CloudBerry and the other gui based tools, uploading is just a drag and drop from your local file system to the tape storage in the cloud.

Amazon details the readback procedure and the job system that shows the status of the restore request. Oracle has a similarly defined retrieval policy as well as an archive tutorial. Both services offer a 4 hour window to allow for restoration. Below is an example of a restore request, and of checking on the status of the job spawned to load the tape and transfer the data for reading. The file is ready to read when the completedPercentage is 100.
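
A rough sketch of that flow for the Oracle service (the ?restore request and the polling approach are assumptions based on the documented retrieval policy; check the tutorial above for the exact endpoints):

# ask for the archived object to be staged for reading
curl -v -X POST \
     -H "X-Auth-Token: AUTH_tkabc123" \
     "https://myDomain.storage.oraclecloud.com/v1/Storage-myDomain/myArchive/backup_20160428.tar.gz?restore"

# poll the object's metadata until the restore job reports completedPercentage = 100,
# after which a normal GET on the object reads the file back
curl -v -I \
     -H "X-Auth-Token: AUTH_tkabc123" \
     https://myDomain.storage.oraclecloud.com/v1/Storage-myDomain/myArchive/backup_20160428.tar.gz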

We can do the same thing with the S3 browser and Amazon Glacier. We need to request the restore, check the job status, then download the restored files. The files change color when they are ready to read.

In summary, we have looked at how to reduce the cost of archives and backups. We looked at using a secondary disk at our data center or another data center. We looked at using on-site tape units. We looked at disk in the cloud. Today we looked at tape in the cloud. It is important to remember that no one of these solutions is the answer; a combination of any or all of them is needed. Daily and weekly backups should happen to a secondary disk locally. This data is most likely to be restored on a regular basis. Once you get a full backup or two under your belt, move the data to another site. It might be spinning disk, it might be tape, but something needs to be offsite in the event of a true catastrophic failure, like a communication link going out (think Dell PowerVault and a thunderstorm) where you lose both the primary LUN and the secondary LUN that contains your backups. The whole idea of offsite backups is not restore but primarily insurance and regulatory compliance. If someone needs to see the old data, it is there. You are betting that you won't need to read it back, and the cloud vendors are counting on that. If you do read it back on a regular basis you might want to significantly increase your budget, pass the charges on to the people who want the data read back, or look for another solution. Tape storage in the cloud is a great way of archiving data for a long time at a low cost.

Fall 2014 IPEDS Data: Top 30 largest online enrollments per institution

Michael Feldstein - Wed, 2016-04-27 17:21

By Phil Hill

The National Center for Educational Statistics (NCES) and its Integrated Postsecondary Education Data System (IPEDS) provide the most official data on colleges and universities in the United States. This is the third year of distance education data.

Let’s look at the top 30 online programs for Fall 2014 (in terms of total number of students taking at least one online course). Some notes on the data source:

  • I have combined the categories ‘students exclusively taking distance education courses’ and ‘students taking some but not all distance education courses’ to obtain the ‘at least one online course’ category;
  • Each sector is listed by column;
  • IPEDS tracks data based on the accredited body, which can differ for systems – I manually combined most for-profit systems into one institution entity as well as Arizona State University[1];
  • See this post for Fall 2013 Top 30 data and see this post for Fall 2014 profile by sector and state.
Fall 2014 Top 30 Largest Online Enrollments Per Institution
Number of Students Taking At Least One Online Course (Graduate & Undergraduate Combined)

Top 30 Online Enrollments By Fall 2014 IPEDS Data

The post Fall 2014 IPEDS Data: Top 30 largest online enrollments per institution appeared first on e-Literate.

Fall 2014 IPEDS Data: New Profile of US Higher Ed Online Education

Michael Feldstein - Wed, 2016-04-27 16:12

By Phil Hill

The National Center for Educational Statistics (NCES) and its Integrated Postsecondary Education Data System (IPEDS) provide the most official data on colleges and universities in the United States. I have been analyzing and sharing the data for the initial Fall 2012 dataset and the Fall 2013 dataset. Both WCET and the Babson Survey Research Group also provide analysis of the IPEDS data for distance education. I highly recommend their analysis in addition to the profile below (we have all worked together behind the scenes to share data and analyses).

Below is a profile of online education in the US for degree-granting colleges and universities, broken out by sector and for each state.

Please note the following:

  • For the most part distance education and online education terms are interchangeable, but they are not equivalent as DE can include courses delivered by a medium other than the Internet (e.g. correspondence course).
  • I have provided some flat images as well as an interactive graphic at the bottom of the post. The interactive graphic has much better image resolution than the flat images.
  • There are three tabs below in the interactive graphic – the first shows totals for the US by sector and by level (grad, undergrad); the second also shows the data for each state; the third shows a map view.
  • Yes, I know I’m late this year in getting to the data.

By Sector

If you select the middle tab, you can view the same data for any selected state. As an example, here is data for Virginia in table form.

By State Table VA

There is also a map view of state data colored by number of, and percentage of, students taking at least one online class for each sector. If you hover over any state you can get the basic data. As an example, here is a view highlighting Virginia private 4-year institutions.

By State Map VA

Interactive Graphic

For those of you who have made it this far, here is the interactive graphic. Enjoy the data.

The post Fall 2014 IPEDS Data: New Profile of US Higher Ed Online Education appeared first on e-Literate.

How to recover space from already deleted files

Pythian Group - Wed, 2016-04-27 13:15

Wait, what? Deleted files are gone, right? Well, not so if they’re currently in use, with an open file handle by an application. In the Windows world, you just can’t touch it, but under Linux (if you’ve got sufficient permissions), you can!

Often in the systems administration and site reliability engineering world, we encounter a disk space issue being reported, and there's very little we can do to recover the space. Everything is critically important! We then check for deleted files and find massive amounts of space consumed where someone has previously deleted Catalina, Tomcat, or Weblogic log files while Java had them in use, and we can't restart the processes to release the handles due to the critical nature of the service. Conundrum!

Here at Pythian, we Love Your Data, so I thought I’d share some of the ways we deal with situations like this.

How to recover

First, we grab a list of PIDs with files still open but deleted. Then we iterate over those processes to see which deleted files each one is holding:

# list the PIDs of processes holding open handles to deleted files
PIDS=$(lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | uniq)
# for each of those processes, show the descriptors pointing at deleted files
# (ls -l rather than the ll alias, which is not available inside scripts)
for PID in $PIDS; do ls -l /proc/$PID/fd | grep deleted; done

With great care, this could be scripted to null all deleted files automatically; a sketch follows.
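
A possible (and deliberately dangerous) version of that automation; review the lsof output first, since nulling a live log is usually safe but nulling a data file is not:

#!/bin/bash
# truncate every deleted-but-still-open file found by lsof
PIDS=$(lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | uniq)
for PID in $PIDS; do
  # field 9 of ls -l is the file descriptor number
  for FD in $(ls -l /proc/$PID/fd 2>/dev/null | awk '/\(deleted\)/ { print $9 }'); do
    : > "/proc/$PID/fd/$FD"   # truncate via the still-open descriptor
  done
done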

Worked example

1. Locating deleted files:

[root@importantserver1 usr]# lsof | head -n 1 ; lsof | grep -i deleted
 COMMAND   PID   USER   FD  TYPE DEVICE SIZE/OFF NODE   NAME
 vmtoolsd  2573  root   7u  REG  253,0  9857     65005  /tmp/vmware-root/appLoader-2573.log (deleted)
 zabbix_ag 3091  zabbix 3wW REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3093  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3094  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3095  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3096  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 zabbix_ag 3097  zabbix 3w  REG  253,0  4        573271 /var/tmp/zabbix_agentd.pid (deleted)
 java      23938 tomcat 1w  REG  253,0  0        32155  /opt/log/tomcat/catalina.out (deleted)
 java      23938 tomcat 2w  REG  253,0  45322216 32155  /opt/log/tomcat/catalina.out (deleted)
 java      23938 tomcat 9w  REG  253,0  174      32133  /opt/log/tomcat/catalina.2015-01-17.log (deleted)
 java      23938 tomcat 10w REG  253,0  57408    32154  /opt/log/tomcat/localhost.2016-02-12.log (deleted)
 java      23938 tomcat 11w REG  253,0  0        32156  /opt/log/tomcat/manager.2014-12-09.log (deleted)
 java      23938 tomcat 12w REG  253,0  0        32157  /opt/log/tomcat/host-manager.2014-12-09.log (deleted)
 java      23938 tomcat 65w REG  253,0  363069   638386 /opt/log/archive/athena.log.20160105-09 (deleted)

2. Grab the PIDs:

[root@importantserver1 usr]# lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | uniq
 2573
 3091
 3093
 3094
 3095
 3096
 3097
 23938

3. Show the deleted files that each process still has open (and is consuming space):

[root@importantserver1 usr]# export PIDS=$(lsof | awk '/deleted/ { if ($7 > 0) { print $2 }; }' | uniq)
[root@importantserver1 usr]# for PID in $PIDS; do ll /proc/$PID/fd | grep deleted; done
 lrwx------ 1 root root 64 Mar 21 21:15 7 -> /tmp/vmware-root/appLoader-2573.log (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 root root 64 Mar 21 21:15 3 -> /var/tmp/zabbix_agentd.pid (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 1 -> /opt/log/tomcat/catalina.out (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 10 -> /opt/log/tomcat/localhost.2016-02-12.log (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 11 -> /opt/log/tomcat/manager.2014-12-09.log (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 12 -> /opt/log/tomcat/host-manager.2014-12-09.log (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 2 -> /opt/log/tomcat/catalina.out (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 65 -> /opt/log/archive/athena.log.20160105-09 (deleted)
 l-wx------ 1 tomcat tomcat 64 Mar 21 21:15 9 -> /opt/log/tomcat/catalina.2015-01-17.log (deleted)

4. Null the specific files (here, we target the catalina.out file):

[root@importantserver1 usr]# cat /dev/null > /proc/23938/fd/2
Alternative ending

Instead of deleting the contents to recover the space, you might be in the situation where you need to recover the contents of the deleted file. If the application still has the file descriptor open on it, you can then recover the entire file to another one (dd if=/proc/23938/fd/2 of=/tmp/my_new_file.log) – assuming you have the space to do it!

Conclusion

While it’s best not to get in the situation in the first place, you’ll sometimes find yourself cleaning up after someone else’s good intentions. Now, instead of trying to find a window of “least disruption” to the service, you can recover the situation nicely. Or, if the alternative solution is what you’re after, you’ve recovered a file that you thought was long since gone.

Categories: DBA Blogs

Deploy Docker containers using AWS Opsworks

Pythian Group - Wed, 2016-04-27 12:51
Introduction

This post is about how to deploy Docker containers on AWS using Opsworks and Docker Compose.
AWS and Docker need no introduction, so let's quickly introduce Opsworks and Docker Compose.

Opsworks

Opsworks is a great tool provided by AWS which runs Chef recipes on your instances. If the instance is an AWS instance, you don't pay anything for using Opsworks, but you can also manage instances outside of AWS for a flat cost, just by installing the agent and registering the instance on Opsworks.

Opsworks Instances type

We have three different types of instances on Opsworks:

1. 24x7x365
Runs nonstop.

2. Time based
Runs on a predefined schedule, such as work hours.

3. Load based
Scales up and down according to preconfigured metrics.

You can find more details here.

Custom JSON

Opsworks provides Chef Databags (variables to be used in your recipes) via Custom JSON, and that’s the key to this solution. We will manage everything just changing a JSON file. This file can become a member of your development pipeline easily.

Life cycle

Opsworks has five lifecycle events:
1. Setup
2. Configure
3. Deploy
4. Undeploy
5. Shutdown
We will use setup, deploy, and shutdown. You can find more details about Opsworks life cycle here.

Docker Compose

Docker Compose was originally developed under the Fig project. Nowadays Fig is deprecated, and docker-compose is a built-in component of Docker.
Using docker-compose, you can manage all containers and their attributes (links, shared volumes, etc.) on a Docker host. Docker-compose can only manage containers on the local host where it is deployed; it cannot orchestrate Docker containers between hosts.
All configuration is specified inside of a YML file, as sketched below.
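
As a reference point, a minimal docker-compose YML file (v1 syntax, current at the time) for an Apache container might look like this; the image choice is an assumption, but it matches the "It works!" test page seen later:

# hypothetical docker-compose.yml: one Apache container publishing port 80
web:
  image: httpd:2.4
  ports:
    - "80:80"
  restart: always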

Chef recipes

Using Opsworks, you will manage all hosts using just one small Chef cookbook. All the magic is in translating the Custom JSON from Opsworks into the YML file used by docker-compose.
The cookbook will install all components (Docker, pip, and docker-compose), translate the Custom JSON into the YML file, and send commands to docker-compose.

Hands On

Let’s stop talking and see things happen.

We can split it into five steps:

  1. Resources creation
    1. Opsworks Stack
        1. Log into your AWS account
        2. Go to Services -> Management Tools -> Opsworks
        3. Click on Add stack (if you already have stacks on Opsworks) or Add your first stack (if it’s the first time you are creating stacks on Opsworks)
        4. Select type Chef 12 stack
          Note: The Chef cookbook used in this example only supports Chef12
        5. Fill out stack information
          Note:
          – You can use any name as stack name
          – Make sure VPC selected are properly configured
          – This solution supports Amazon Linux and Ubuntu
          – Repository URL https://bitbucket.org/tnache/opsworks-recipes.git
        6. Click on advanced if you want to change something. Changing “Use OpsWorks security groups” to No can be a good idea when you need to communicate with instances which are running outside of Opsworks
        7. Click on “Add stack”
    2. Opsworks layer
        1. Click on “Add a layer”
        2. Set Name, Short name and Security groups. I will use webserver

      Note:
      Use a simple name, because we will use this name in the next steps
      The name web is reserved for AWS internal use

        3. Click on “Add layer”


    3. Opsworks Instance
        1. Click on “Instances” on left panel
        2. Click on “Add an instance”
        3. Select the size (instance type)
        4. Select the subnet
        5. Click on “Add instance”


  2. Resources configuration
    1. Opsworks stack
        1. Click on “Stack” on left panel
        2. Click on “Stack Settings”
        3. Click on “Edit”
        4. Find the Custom JSON field and paste the content of the file below

      custom_json_1

      5. Click on “Save”
    2. Opsworks layer
        1. Click on “Layers” on left panel
        2. Click on “Recipes”
        3. Enter docker-compose in the Setup field and press enter
        4. Enter docker-compose::deploy in the Deploy field and press enter
        5. Enter docker-compose::stop in the Shutdown field and press enter
        6. Click on “Save”


  3. Start
    1. Start instance
        1. Click on start


  4. Tests
    Note: Wait until the instance reaches the online state

      1. Open your browser and you should be able to see “It works!”
      2. Check the running containers


  5. Management
      1. Change the custom JSON to the file below (see Resources configuration => Opsworks stack)

    custom_json_2

      2. Click on “Deployments” on left panel
      3. Click on “Run Command”
      4. Select “Execute Recipes” as “Command”
      5. Enter “docker-compose::deploy” as “Recipes to execute”
      6. Click on “Execute Recipes”

    Note: Wait until the deployment finishes

      7. Check the running containers


Categories: DBA Blogs

Stats History

Jonathan Lewis - Wed, 2016-04-27 06:09

From time to time we see a complaint on OTN about the stats history tables being the largest objects in the SYSAUX tablespace and growing very quickly, with requests about how to work around the (perceived) threat. The quick answer is – if you need to save space then stop holding on to the history for so long, and then clean up the mess left by the history that you have captured; on top of that you could stop gathering so many histograms because you probably don’t need them, they often introduce instability to your execution plans, and they are often the largest single component of the history (unless you are using incremental stats on partitioned objects***)

For many databases it’s the histogram history – using the default Oracle automatic stats collection job – that takes the most space, here’s a sample query that the sys user can run to get some idea of how significant this history can be:


SQL> select table_name , blocks from user_tables where table_name like 'WRI$_OPTSTAT%HISTORY' order by blocks;

TABLE_NAME                           BLOCKS
-------------------------------- ----------
WRI$_OPTSTAT_AUX_HISTORY                 80
WRI$_OPTSTAT_TAB_HISTORY                244
WRI$_OPTSTAT_IND_HISTORY                622
WRI$_OPTSTAT_HISTHEAD_HISTORY          1378
WRI$_OPTSTAT_HISTGRM_HISTORY           2764

5 rows selected.

As you can see the “histhead” and “histgrm” tables (histogram header and histogram detail) are the largest stats history tables in this (admittedly very small) database.

Oracle gives us a couple of calls in the dbms_stats package to check and change the history setting, demonstrated as follows:


SQL> select dbms_stats.get_stats_history_retention from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                         31

1 row selected.

SQL> execute dbms_stats.alter_stats_history_retention(7)

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_stats_history_retention from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                          7

1 row selected.

Changing the retention period doesn’t reclaim any space, of course – it simply tells Oracle how much of the existing history to eliminate in the next “clean-up” cycle. This clean-up is controlled by a “savtime” column in each table:

SQL> select table_name from user_tab_columns where column_name = 'SAVTIME' and table_name like 'WRI$_OPTSTAT%HISTORY';

TABLE_NAME
--------------------------------
WRI$_OPTSTAT_AUX_HISTORY
WRI$_OPTSTAT_HISTGRM_HISTORY
WRI$_OPTSTAT_HISTHEAD_HISTORY
WRI$_OPTSTAT_IND_HISTORY
WRI$_OPTSTAT_TAB_HISTORY

5 rows selected.

If all you wanted to do was stop the tables from growing further you’ve probably done all you need to do. From this point onwards the automatic Oracle job will start deleting the oldest saved stats and re-using space in the existing table. But you may want to be a little more aggressive about tidying things up, and Oracle gives you a procedure to do this – and it might be sensible to use this procedure anyway at a time of your own choosing:


SQL> execute dbms_stats.purge_stats(sysdate - 7);

Basically this issues a series of delete statements (including a delete on the “stats operation log (wri$_optstat_opr)” table that I haven’t previously mentioned) – here’s an extract from an 11g trace file of a call to this procedure (output from a simple grep command):


delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_tab_history          where savtime < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_ind_history h        where savtime < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_aux_history          where savtime < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_opr                  where start_time < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_histhead_history     where savtime < :1 and rownum <= NVL(:2, rownum)
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_histgrm_history      where savtime < :1 and rownum <= NVL(:2, rownum)

Two points to consider here: although the appearance of the rownum clause suggests that there’s a damage limitation strategy built into the code, I only saw one commit after the entire delete cycle, and I never saw a limiting bind value being supplied. If you’ve got a large database with very large history tables you might want to delete one day (or even just a few hours) at a time. The potential for a very long, slow delete is also why you might want to do a manual purge at a time of your choosing rather than letting Oracle do the whole thing on auto-pilot during some overnight operation.
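
If you do decide to purge in slices, one possible approach (a sketch, run as SYS, using the 31-day to 7-day example above) is to call purge_stats repeatedly, oldest day first, so each call deletes only one day's worth of history:

begin
        -- the first call purges anything older than 31 days, the next
        -- anything older than 30 days, and so on down to the 8-day boundary
        for i in reverse 8 .. 31 loop
                dbms_stats.purge_stats(sysdate - i);
        end loop;
end;
/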

Secondly, even though you may have deleted a lot of data from these tables you still haven’t reclaimed the space – so if you’re trying to find space in the sysaux tablespace you’re going to have to rebuild the tables and their indexes. Unfortunately a quick check of v$sysaux_occupants tells us that there is no official “move” procedure:


SQL> execute print_table('select occupant_desc, move_procedure, move_procedure_desc from v$sysaux_occupants where occupant_name = ''SM/OPTSTAT''')

OCCUPANT_DESC                 : Server Manageability - Optimizer Statistics History
MOVE_PROCEDURE                :
MOVE_PROCEDURE_DESC           : *** MOVE PROCEDURE NOT APPLICABLE ***

So we have to run a series of explicit calls to alter table move and alter index rebuild. (Preferably not when anyone is trying to gather stats on an object). Coding that up is left as an exercise to the reader, but it may be best to move the tables in the order of smallest table first, rebuilding indexes as you go.
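
If you want a starting point for that exercise, a sketch that generates the statements (smallest table first, with the index rebuilds afterwards, since moving a table marks its indexes unusable) might be:

select 'alter table sys.' || table_name || ' move;'
from   dba_tables
where  owner = 'SYS'
and    table_name like 'WRI$_OPTSTAT%HISTORY'
order by
       blocks
;

select 'alter index sys.' || index_name || ' rebuild;'
from   dba_indexes
where  table_owner = 'SYS'
and    table_name like 'WRI$_OPTSTAT%HISTORY'
;

Spool the output, check it, and run it when no-one is gathering stats.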

Footnote:

*** Incremental stats on partitioned objects: I tend to assume that sites which use partitioning are creating very large databases and have probably paid a lot more attention to the details of how to use statistics effectively and successfully; that’s why this note is aimed at sites which don’t use partitioning and therefore think that the space taken up by the stats history is significant.


Server Problems : Any ideas?

Tim Hall - Wed, 2016-04-27 02:54

I’m pretty sure last night’s problem was caused by a disk failure in the RAID array. The system is working now, but it might go down sometime today to get the disk replaced. Hopefully they won’t do what they did last time and wipe the bloody lot!

metered vs un-metered vs dedicated services

Pat Shuff - Wed, 2016-04-27 01:07
One of the newest concepts that has been introduced for cloud services is the concept of un-metered or dedicated services. Before we dive into this subject, let's review what a cloud service really is. When you boil it all down, you are basically leasing computer resources on a computer that you don't own. You are taking a slice of a compute engine, slice of a disk drive, part of a network connection. You are renting space. Think of it as living in an apartment. Yes, this is a silly analogy but if you think about it, it makes sense. You can rent an efficiency, one, two, or three bedroom apartment. You can get parking with or without a roof over your car. You can get a storage closet or a garage to store stuff that you are not using but want to keep around. There are benefits to apartments. You don't have to cut the grass. You typically have access to a pool but don't need to maintain it. If the toilet backs up or the gas stops working you call the super and they come fix it. You still have to replace your own light bulbs that burn out. You still need to clean your own bathroom and kitchen and take out your trash on a regular basis. On the grander scale, you don't need to drop 10% down and get a mortgage to live there. Monthly rents are typically cheaper than paying down a mortgage. Your taxes are bundled into your rent cost. You basically show up, use the apartment and go on with your life. On the flip side, there are drawbacks to apartments. If your upstairs neighbor likes to play heavy metal at 2am or throw wild parties on the weekends it does make it hard to sleep. Someone might park in your parking spot so you need to park farther away from your front door. You can't pull into your garage to unload your groceries and have to potentially carry them in the rain across the parking lot and up the stairs to your third floor apartment. The super might decide that Tuesday they are going to repaint all bathrooms another color and you need to be out of the way for a day and put up with the smell even though you planned a dinner party the next night. It is difficult to grill on your balcony and you can't really sit out without sharing the space with all of your neighbors. The true downfall is that twenty years from now, you will still be renting your apartment (and the rent probably went up every other year) while your college buddies are celebrating a mortgage burning party and the only thing that they owe on a monthly basis is the taxes that the government takes annually.

Yes, our analogy is silly. Yes, our analogy is relevant. It is easy to decide that you want another job in another city so you hire a mover, pack up all your stuff, and move to another apartment. This is where our analogy breaks down. Cloud vendors charge you for every piece of furniture that you take out of the building. They charge you to use the stairs or elevator. They charge you every time a moving van exits the building full of furniture and boxes of clothes. It is free to bring stuff in because it locks you into the apartment. Just don't try to take anything out. Remember that storage closet or garage that you got with your apartment, you can open the door and put stuff in for free but if you carry anything out (even if you just relocate it to your apartment) you get charged per item that you carry across the threshold.

If you look at storage from any cloud vendor they offer a metered storage service. The same is true for compute services. You can lease a virtual processor and memory and grind on data all that you want. The catch is that when you want to transfer your files or report the results of your analysis to your desktop computer, you get charged per gigabyte transferred across the internet. Cost calculators help you estimate these costs, but outbound data charges are a little hard to predict. Amazon, for example, has a calculator that you can use. The AWS pricing calculator allows you to look at the cost of all cloud services.

Let's walk through the cost of Amazon Glacier. The price list says that you should pay $0.007/GB/month or $7/TB/month to keep things in cold storage. We will use 120 TB as our basis for analysis. We put this as the amount to store and see the cost of storing the data is $860/month.
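
The storage line item is easy to sanity-check (request charges account for the small difference from the raw list price):

echo "120 * 1000 * 0.007" | bc   # 120 TB at $0.007/GB/month = $840; the calculator shows $860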

If we plan on reading back 10% of this data during the month the price goes up to $2217. The bulk of these charges are the outbound charges. The cost goes to $921 if we read the data to an EC2 instance and not all the way back to our data center or desktop computer. To use our apartment analogy, you are paying $860 to get a storage garage. You pay $61 every time you take something out and move it to your apartment. You can put all you want into the storage area (as long as you don't exceed the space of the storage unit) but taking something out will cost you. If you put your recently retrieved item in your car or a truck and drive it out the gate you get a surcharge of $1300. It is important to remember that pulling more stuff out of your storage will cost you more. Putting this in terms of a computer archive, you can store all of your emails, contracts, customer transactions, and patient records in long term storage. If your on-site storage fails for some reason or if you get a legal request to review five years of data, you can pull the data back from cold storage. It will cost you to pull the data back, but it is still cheaper than keeping the seven years or more of data on spinning disk in your data center (estimate $3K/TB plus 10% per year for spinning disk in your data center).

We can do the same calculation for cloud storage using S3. We can store 120 TB for roughly $3950/month. If we want to read back 10%, or 12 TB, of that data, it will cost us $5150 or $1200 additional.

We can reduce the cost by using lower speed storage in the cloud. We put the S3 data into the infrequent access category to save money. This drops the cost to just over $3K, which saves us about $2100/month. We accept higher latency and longer retrieval times in exchange for the lower cost. It is still better than using tape in the cloud, and we can save some money with this option.

We can opt for reduced redundancy storage (aka non-mirrored and non-replicated data) but we risk data loss since we will only have a single copy in the cloud. This drops the cost to $4300 with the data retrieval, but we have to weigh the cost against the risk of data loss.

Let's not pick on Amazon. How does this compare to Azure? Unfortunately, we can't start with Microsoft tape in the cloud; they don't offer the service. We must start with block storage in the Azure cloud. Microsoft has an Azure pricing calculator that you can use to perform the same calculations. The calculator and pricing are a little difficult to use when you first get into them. You basically need to put together the calculation a piece at a time. You need to factor in the cost of the storage and the cost of transferring the data from the cloud to your data center. This is done in two different pieces. An example of what we are looking for can be seen below.

We need to piece together the calculator. First we add the storage component then the bandwidth component. There is a transaction component but this amount is trivial and we are going to ignore it for simplicity.

If we look at the options for Azure storage, we can basically select blob storage in different zones. In the grand scheme of things, the cost is not significantly higher one way or another. The basic cost is about the same.

The third class of storage that we are going to look at is the Oracle Storage Cloud Service. We can look at the Oracle Storage Cloud Service as well as the Oracle Archive Cloud Service. The Archive service compares directly to the Amazon Glacier service, except that it is $1/TB/month and suffers the same transfer charges for outbound data. The Oracle Storage Cloud Service is similar to the Amazon S3 and Azure Blob Storage services, but it is offered both as a metered service (as are S3 and Blobs) and as an un-metered service. Unfortunately, Oracle does not provide a cost calculator for general use. The Value Added Distributors are given a copy of the calculator, but it is not generally available. The key difference with the Oracle storage services is that there are two significant flavors: metered and non-metered. The metered services are charged just like the Microsoft and Amazon services: you pay for what you use on a per GB basis and pay for outbound data transfer. An example of the pricing calculator is shown below. Note that we do need a good guesstimate of how much data we will transfer outbound across the internet. These charges are not incurred if you are reading the data to a compute engine in the cloud, unlike S3, which still incurs cost just for reading the data off the disk.

The most significant differentiator in the storage offerings is the non-metered storage. Oracle offers storage in blocks that you reserve and allocate for 12 months. This is different from the metered storage in that metered can start with 10 TB and grow to 120 TB over the year. With the non-metered storage, you start with 120 TB and end with 120 TB. You can extend your contract and grow storage, but you basically take out a new contract for the additional storage. You cannot shrink your storage and pay for less. The benefit of this is that you don't have to pay for outbound data transfer. You can read and write as much as you want and not get charged for transferring the data across the internet. A pricing calculator for this is simple: how much do you need, and how long do you need it?

If we piece all of this together and look at a price comparison between the three service providers, the answer to which is cheapest comes down to “it depends”. Oracle non-metered storage has a significant advantage if you are planning on reading back your data at high or unpredictable rates. Amazon S3 infrequent access is the cheapest if you don't plan on reading back your data and want it as an insurance policy only. I honestly would go with Glacier or Oracle Archive if this is the case, since it is an order of magnitude cheaper. The chart below compares 120 TB of storage and the variable charge for reading back this data on a monthly basis. If you have 120 TB of storage and plan on reading back 120 TB on a monthly basis, the Oracle non-metered storage is significantly cheaper. If you are only planning on reading back 12 to 24 TB per month the cost is about the same for all of the services.

In summary, one option is not clearly better than the other (except for high read rates) and this blog is intended to help you decide on what fits your needs best. Pricing calculators can help with the cost based on transfer rates. It is important to remember that storage transfer is a significant part of the calculation. It is also important to look at your usage model. We assumed that you started with 120 TB and ended with 120 TB for our analysis. If you start with 12 TB and grow to 120 TB, the pricing calculation will be a little different. Neither the Amazon nor Azure calculators will help you run this simulation, and you will have to calculate everything on a month-by-month basis. It is also interesting to take 120 TB of on-premise storage and assume that each TB can be purchased at $3K/TB. If we assume 10% annual hardware maintenance and a three year amortization, the charge for on-premise storage works out to roughly $13K/month (about $10K/month in amortized hardware plus $3K/month in maintenance), which might be more or less than cloud based storage depending on how much you read back. Your results might vary.

Percona Live Data Performance Conference 2016 Retrospective

Pythian Group - Tue, 2016-04-26 07:39

 

Last week the annual Percona Live Data Performance Conference was held in Santa Clara, California. This conference is a great time to catch up with the industry, and be exposed to new tools and methods for managing MySQL and MongoDB.

Sessions

The highlights from this year’s sessions and tutorials centered around a few technologies:

  • The typical sessions for Galera Cluster and Performance Schema are always getting better, along with visualization techniques.
  • Oracle MySQL’s new Document Store blurs the line between RDBMS and NoSQL.
  • Facebook’s RocksDB is getting smaller and faster.
  • ProxySQL, the new proxy kid on the block, promises to address MySQL scalability issues.
  • If security is a concern, which it should be, Hashicorp’s Vault project would be something to look into for managing MySQL secrets or encrypting data in transit.
  • MongoDB was a hot topic as well, with a number of sessions addressing management of environments and design patterns.

I expect to see an influx of articles regarding ProxySQL and MySQL’s Document Store in the next few months.

Networking

The evenings were also great events for networking and socializing, giving attendees the chance to rub shoulders with engineers from some of the most successful 'WebScale' companies and hear stories from the trenches. Events included the Monday Community Networking Reception and Wednesday's Game Night.

Thank you to all those who attended the Annual Community Dinner at Pedro’s organized by Pythian on Tuesday night! We had a blast and we hope you did as well.

Community Dinner At Pedro's

Thank you!

Pythian sponsored and provided a great range of sessions this year, and we want to thank all those who stopped by our booth or attended our sessions.

I’d like to give a huge shout-out to Percona for continuing to organize a high-quality MySQL user conference focused on solving some of the toughest technical issues that can be thrown at us, and an equal shout-out to the other sponsors and speakers that play a huge part in making this conference happen.

I am looking forward to what PerconaLive Europe will bring this fall, not to mention what we can expect next year when Percona Live Santa Clara rolls around again.

Categories: DBA Blogs

Optimize ADF HTTP Response Size with ChangeEventPolicy

Andrejus Baranovski - Tue, 2016-04-26 02:18
You should read this post if you are looking at how to reduce ADF HTTP response size. This can be important for ADF application performance tuning, to improve PPR request response time. By default in ADF 12.2.1, an iterator is assigned ChangeEventPolicy = ppr. This works great for UI component bindings refresh, with no need to set individual partial triggers. On the other hand, it generates extra content in the ADF HTTP response and increases the response size. I would recommend using ChangeEventPolicy = ppr only when it is really needed, such as for dynamic forms with unknown refresh dependencies. Otherwise set ChangeEventPolicy = none to generate a smaller response.

I will demonstrate below the difference in the ADF HTTP response with ChangeEventPolicy = ppr and ChangeEventPolicy = none. First, let's take a look at the page load response size:


The page contains a list component and a form, based on two different iterators, both set with ChangeEventPolicy = ppr. This generates an AdfPage.PAGE.updateAutoPPRComponents call for each UI item, referencing attributes from the iterator. In complex screens this adds a significant amount of extra text to the response and can increase its size by as much as half:


The partial response contains the same calls. Select a list item:


Each item from the form is referenced by a JavaScript call, to register it for refresh:


Let's change it to ChangeEventPolicy = none. Set it for both iterators:


We now need to set refresh dependencies manually. The form block must be given a PartialTriggers attribute referencing the list component, so that it refreshes when the list selection changes:


The Next/Previous button dependencies must also be set, so that the form row changes (see the sketch below):
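As a sketch of the two changes, the snippets below use placeholder names (EmployeesView, the t1 list component, b1/b2 buttons) rather than the names from this exact sample, so adjust them to your own pageDef and page:

    <!-- Page definition: switch the iterator to ChangeEventPolicy="none" -->
    <iterator Binds="EmployeesView" RangeSize="25"
              DataControl="AppModuleDataControl"
              id="EmployeesViewIterator" ChangeEventPolicy="none"/>

    <!-- Page: form refreshes when the list (t1) or the Next/Previous
         buttons (b1, b2) fire a partial event -->
    <af:panelFormLayout id="pfl1" partialTriggers="t1 b1 b2">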


With the manual refresh dependencies in place, no extra calls are added to the ADF HTTP response, reducing the overall response size:


The same applies to the PPR response:


Download sample application - ChangeEventPolicyPPRApp.zip.

Please be patient!

Tim Hall - Tue, 2016-04-26 01:46

It's extremely nice to have a big audience. It's very flattering that people care enough about what I say to be bothered to read it. The problem with having a large audience is that people can get very demanding at times.

making backup better

Pat Shuff - Tue, 2016-04-26 01:07
Yesterday we looked at backing up our production databases to cloud storage. One of the main motivations behind doing this was cost. We were able to reduce the cost of storage from $3K/TB capex plus $300/TB/year opex to $400/TB/year opex. This is a great solution, but some customers complain that it is not generic enough and that latency to cloud storage is not that great. Today we are going to address both of these issues with the cloud storage appliance. First, let's look at the two typical customer complaints.

The database backup cloud service is just that. It backs up a database. It does it really well and it does it efficiently. You replace one of the backup library modules so that backup data is written to the cloud REST api rather than to a tape driver. The software works well with commercial products like Symantec or Legato and integrates well into those solutions. Unfortunately, the critics are right. The database backup cloud service does that and only that. It backs up Oracle databases. It does not back up MySQL, SQL Server, DB2, or other databases. It is a single use tool. A very useful single use tool, but a single use tool. We need to figure out how to make it more generic and back up more than just databases. It would be nice if it could back up home directories, email servers, virtual machines, and other stuff that is used in the data center.

The second complaint is latency. If we are writing to an SSD or spinning disk attached to a server via high speed SCSI, iSCSI, or SAS, we should expect 10ms access time or less. If we are writing to a server halfway across the country we might experience 80ms latency. This means that a typical read or write takes eight times longer when we read and write cloud storage. For some applications this is not an issue. For others this latency makes the system unusable. We need to figure out how to read and write at 10ms latency but still leverage the expandability and lower cost of cloud storage.

Enter stage left the Oracle Cloud Storage Appliance. The appliance is a software component that listens on the data center network using the NFS protocol and talks to the cloud services using the storage REST api. Local disks are used as a cache front end to store data that is written to and read from the network shares exposed by the appliance. These directories map directly to containers in the Oracle Storage Cloud Service and can be clear text or encrypted when stored. Data written from network servers is accepted and released quickly as it is written to local disk, then slowly trickled out to cloud storage. As the cache fills up, data is aged and migrated from the cache storage into cloud storage. The metadata representing the directory structure and storage location is updated to show that the data is no longer stored locally but in the cloud. If a read occurs from the file system, the metadata helps the appliance locate where the data is stored; it is served to the network client from the cache or pulled from cloud storage and temporarily kept in the local cache as long as there is space. A block diagram of this architecture is shown below.

The concept of how to use this device is simple. We create a container in our cloud storage and attach to it with the cloud storage appliance. This attachment is exposed via an nfs mount to clients on our corporate network, and anyone on the client can read or write files in the cloud storage. Operations happen at local disk speed, using the network security of the local network and group/owner rights in the directory structure. It looks, smells, and feels just like nfs storage that we would otherwise spend thousands of dollars per TB to own and operate.
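As a sketch of day-to-day use, the share mounts like any other NFS export; the host name, share name, and paths below are hypothetical:

    # Mount the appliance share (host and share names are hypothetical)
    sudo mount -t nfs oscsa-host:/backup /mnt/cloud-backup

    # Writes land on the local cache disk first, then trickle out to the
    # cloud storage container in the background
    cp /backups/nightly/*.dmp /mnt/cloud-backup/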

For the rest of this blog we will go through the steps to install and configure the appliance. The minimum requirements for the appliance are:

  • Linux 7 (3.10 kernel or later)
  • Docker 1.6.1 or later
  • two dual core x86 CPUs
  • 4 GB of RAM
We will be installing our configuration on a Windows desktop running VirtualBox. We will not go through the installation of Oracle Enterprise Linux 7 because we covered it a long time ago. We do need to configure the OS with 4 GB of RAM and at least 2 virtual cores, as shown in the images below.

We also need to configure networking. We configure two networks: one for the local desktop console and the other for the public internet. We could configure a third interface to represent our storage network, but for simplicity we only configure two.
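For reference, the same VM settings can be applied from the VBoxManage command line; the VM name oscsa-ol7 is hypothetical, and the host-only adapter name varies by host OS:

    # 4 GB of RAM and 2 virtual cores, per the minimum requirements
    VBoxManage modifyvm "oscsa-ol7" --memory 4096 --cpus 2

    # Two NICs: host-only for the local console, NAT for the public internet
    VBoxManage modifyvm "oscsa-ol7" --nic1 hostonly --hostonlyadapter1 vboxnet0
    VBoxManage modifyvm "oscsa-ol7" --nic2 nat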

We can now boot our Linux 7 system, where we will need to select the 3.10 kernel. By default it will boot the 3.8 kernel, which will cause problems with how the appliance works.

What we would like to do is remove the 3.8 kernel from our installation. This is done by removing the packages with the rpm -e command. We then update the grub.cfg file to list only the 3.10 kernels.

Once we have removed the kernels, we update the grub loader and enable additional options for the yum update.

The next step that we need to take is to install docker. This is done with the yum install command.

Once we have the docker package installed, we need to make sure that we have the nfs-client and nfs-server installed and started.

It is important to note that the tar bundle is not generally available; it requires product manager approval to get a copy of the software for installation. The file that I got was labeled oscsa-1.0.5.tar.gz. I had to unzip and untar this file after loading it onto my Linux VirtualBox instance. I did not do a screen capture of the download, but I did go through the installation process.

We start the service with the oscsa command. When we start it, it brings up a management web page so that we can make the connection to the cloud storage service. To see this page we need to start firefox and connect to it.
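Condensed into commands, the preparation steps above look roughly like the sketch below. Exact package names and the oscsa subcommand may differ in your bundle, so treat this as an outline rather than a copy-and-paste recipe (on Oracle Linux 7 the 3.8.13 kernel is the UEK, while the stock Red Hat compatible kernel is 3.10):

    # Remove the 3.8 (UEK) kernel packages and rebuild the GRUB config
    rpm -qa | grep uek                           # list the 3.8.13 packages first
    sudo rpm -e kernel-uek kernel-uek-firmware   # exact names may vary
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg

    # Install docker and the NFS client/server pieces, then start them
    sudo yum install -y docker nfs-utils
    sudo systemctl enable docker nfs-server
    sudo systemctl start docker nfs-server

    # Unpack and start the appliance (file name from the post)
    tar xzf oscsa-1.0.5.tar.gz
    ./oscsa up                                   # subcommand assumed; check the
                                                 # README that ships in the bundle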

One of the things that we need to know is the end point of our storage. We can find this by looking at the management console for our cloud services. If we click on the storage cloud service details link we can find it.

Once we have the end point we will need to enter this into the management console of the appliance as well as the cloud credentials.

We can add encryption and a container name for our network share and start reading and writing.

We can verify that everything is working from our desktop by mounting the nfs share or by using cloudberry to examine our cloud storage containers. In this example we use cloudberry just like we did when we looked at the generic Oracle Storage Cloud Services.

We can examine the properties of the container and network share from the management console. We can look at activity and resources available for the caching.

In summary, we looked at a solution to the two problems with our database backup service: it is single purpose, and it suffers from latency. By providing a network share to the data center we can back up not only our Oracle databases but all of our databases, by having the backup software write to the network share. We can also back up other files like virtual machines, email boxes, and home directories. Disk operations run at the speed of the local disk rather than the speed of the cloud storage. This software does not cost anything additional and can be installed on any virtual platform that supports Linux 7 with kernel 3.10 or greater. When we compare this to the Amazon Storage Gateway, which requires 2x the processing power and $125/month to operate, it looks significantly better. We did not compare it to the Azure solution because it is an iSCSI hardware solution and not easy to get a copy of for testing.

A Moment of Clarity on the Role of Technology in Teaching

Michael Feldstein - Mon, 2016-04-25 21:26

By Phil HillMore Posts (402)

This following excerpt is based on a post first published at The Chronicle of Higher Education.

With all of the discussion around the role of online education for traditional colleges and universities, over the past month we have seen reminders that key concerns are about people and pedagogy, not technology. And we can thank two elite universities that don’t have large online populations — MIT and George Washington University — for this clarity.

On April 1, the MIT Online Education Policy Initiative released its report, “Online Education: A Catalyst for Higher Education Reforms.” The Carnegie Corporation-funded group was created in mid-2014, immediately after an earlier initiative looked at the future of online education at MIT. The group’s charter emphasized a broader policy perspective, however, exploring “teaching pedagogy and efficacy, institutional business models, and global educational engagement strategies.”

While it would be easy to lament that this report comes from a university with few online students and yet dives into how online learning fits in higher education, it would be a mistake to dismiss the report itself. This lack of “in the trenches” experience with for-credit online education helps explain the report’s overemphasis on MOOCs and its underemphasis on access and nontraditional learner support. Still, the MIT group did an excellent job of getting to some critical questions that higher-education institutions need to address. Chief among them is the opportunity to use online tools and approaches to instrument and enable enhanced teaching approaches that aren’t usually possible in traditional classrooms.

The core of the report, in fact, is based on the premise that online education and online tools can enable advances in effective pedagogical approaches, including constructivism, active learning, flipped classrooms, problem-based learning, and student-centered education. It argues that the right way to use technology is to help professors teach more effectively:

“Technology can support teachers in the application of the relevant principles across a group of students with high variability. In fact, technology can help tailor lessons to the situation in extremely powerful ways.

The instrumentation of the online learning environment to sense the student experience and the ability to customize content on a student-by-student basis may be the key to enabling teachers to provide differentiated instruction, informed by a solid foundation in cognitive science. Modern online courses and delivery platforms already implement some of these concepts, and provide a framework for others.”

But there is value in seeing what happens when that advice is ignored. And that’s where an incident at George Washington University comes in. If technology is just thrown at the problem with no consideration of helping educators to adopt sound pedagogical design, then we can see disasters.

On April 7, four students who took an online program for a master’s degree in security and safety leadership from George Washington’s College of Professional Studies filed a class-action lawsuit against the university for negligence and misleading claims. As reported by The GW Hatchet, a student newspaper:

For a non-paywall version of the full article, good through 4/26, follow this link.

Update: What interesting timing! See Michelle Pacansky-Brock’s post on a very similar topic.

The nature of online classes varies dramatically, much like face-to-face classes. But, in both scenarios, the teacher matters and the teaching matters. When an online class is taught by an engaged and empathetic instructor who seeks to be aware of the needs of her students, the asynchronous nature of online learning may become a benefit to students, not a disadvantage. This is contingent upon the design of the course, which is where instructional designers or “learning engineers” can play an important role. Many instructors, however, play both roles — and those who do are often the professors who experience deep transformations in their face-to-face classes as a result of what they learned from teaching online.

The post A Moment of Clarity on the Role of Technology in Teaching appeared first on e-Literate.