Feed aggregator

To see how illogical the Brookings Institution report on student loans is, just read the executive summary

Michael Feldstein - 2 hours 22 min ago
il·log·i·cal /i(l)ˈläjikəl/ adjective
  1. lacking sense or clear, sound reasoning. (from Google’s definition)

There have been multiple articles, both those accepting the Brookings argument that “typical borrowers are no worse off now than they were a generation ago” and those calling out the flaws in the Brookings report. I have written two articles here and here criticizing the report. The problem is that much of the discussion is more complicated than it needs to be. A simple reading of the Brookings executive summary exposes just how illogical the report is.

College tuition and student debt levels have been increasing at a fast pace for at least two decades. These well-documented trends, coupled with an economy weakened by a major recession, have raised serious questions about whether the market for student debt is headed for a crisis, with many borrowers unable to repay their loans and taxpayers being forced to foot the bill.

The argument is set up – yes, tuition and debt levels are going up, but how is a crisis defined? It’s specifically about “many borrowers unable to repay their loans”. Is there a crisis? That’s not a bad setup, and it is a valid question to address.

Our analysis of more than two decades of data on the financial well-being of American households suggests that the reality of student loans may not be as dire as many commentators fear. We draw on data from the Survey of Consumer Finances (SCF) administered by the Federal Reserve Board to track how the education debt levels and incomes of young households evolved between 1989 and 2010. The SCF data are consistent with multiple other data sources, finding significant increases in average debt levels, but providing little indication of a significant contingent of borrowers with enormous debt loads.

This is an interesting source of data. Yes, the Federal Reserve Board’s Survey of Consumer Finances tracks student debt, but this data is almost four years old due to the triennial survey method. 1

But hold on – now we’re talking about a “significant contingent of borrowers with enormous debt loads”? I thought the issue was the ability to repay. What does “enormous” even mean, other than being a scary word?

First, we find that roughly one-quarter of the increase in student debt since 1989 can be directly attributed to Americans obtaining more education, especially graduate degrees. The average debt levels of borrowers with a graduate degree more than quadrupled, from just under $10,000 to more than $40,000. By comparison, the debt loads of those with only a bachelor’s degree increased by a smaller margin, from $6,000 to $16,000.

Fair enough as a starting point, noting that a quarter of debt growth comes from higher levels of education, including grad school. Average debt loads have gone up more than 2.5x for undergrads, and that certainly sounds troublesome given the report’s main point of “no worse off”. Using the ‘but others are worse off, so this is not as bad’ argument, Brookings notes that grad students had their debt go up by 4x. The argument here appears to be that 2.5 is less than 4.2

Second, the SCF data strongly suggest that increases in the average lifetime incomes of college-educated Americans have more than kept pace with increases in debt loads. Between 1992 and 2010, the average household with student debt saw an increase of about $7,400 in annual income and $18,000 in total debt. In other words, the increase in earnings received over the course of 2.4 years would pay for the increase in debt incurred.

Despite the positioning of the report that a small portion of borrowers skews the data and coverage, Brookings resorts to using the mythical “average household”. For that mythical entity, they certainly seem to have the magical touch to not pay any taxes and obtain zero-interest loans.3
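Footnote aside, the 2.4-year arithmetic is easy to check. Here is a minimal sketch using the report’s $7,400 and $18,000 figures; the 25% tax rate and 6.8% interest rate below are illustrative assumptions on my part, not numbers from the report:

```python
income_increase = 7_400      # added annual income, pre-tax (per Brookings)
debt_increase = 18_000       # added total debt (per Brookings)

# Naive figure: ignore taxes and interest entirely.
naive_years = debt_increase / income_increase   # ≈ 2.43 years, the report's "2.4"

# Slightly more realistic: tax the extra income and accrue interest on the balance.
tax_rate = 0.25              # assumed marginal tax rate
loan_rate = 0.068            # assumed unsubsidized Stafford-style rate

balance, years = float(debt_increase), 0
annual_payment = income_increase * (1 - tax_rate)
while balance > 0:
    balance = balance * (1 + loan_rate) - annual_payment
    years += 1
```

Even with these modest assumptions the payoff takes roughly four years, not 2.4; the headline number only works if the borrower pays no taxes and the loan accrues no interest.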

Nonetheless, we’ve now changed the issue again – first it was the ability to repay, then whether the loan is “enormous”, and now how long a mythical payoff takes.

Third, the monthly payment burden faced by student loan borrowers has stayed about the same or even lessened over the past two decades. The median borrower has consistently spent three to four percent of their monthly income on student loan payments since 1992, and the mean payment-to-income ratio has fallen significantly, from 15 to 7 percent. The average repayment term for student loans increased over this period, allowing borrowers to shoulder increased debt loads without larger monthly payments.

Small issue, but we’ve now gone from average household as key unit of measurement to median borrower? Two changes from one paragraph to the other – average to median and household to borrower?

Brookings introduces a new measure, and this one does at least take into account the difference in borrowers: payment-to-income ratios of median borrowers. If I’m reading the argument correctly (this took a while, given that key measures and terms change from paragraph to paragraph), not only should there be no crisis, but the situation might actually be improving.

These data indicate that typical borrowers are no worse off now than they were a generation ago, and also suggest that the borrowers struggling with high debt loads frequently featured in media coverage may not be part of a new or growing phenomenon. The percentage of borrowers with high payment-to-income ratios has not increased over the last 20 years—if anything, it has declined.

OK, now we have replaced the scary “enormous” with “borrowers struggling with high debt loads”. Although not in the executive summary, the analysis in the report seems to define these large debts as $100,000 or more. Doesn’t it matter who the borrower is? A humanities PhD graduate working as an adjunct for $25,000 a year might view $20,000 of debt as enormous.

So I was reading it correctly: “typical borrowers are no worse off” and the percentage of borrowers with high ratios has declined.4 The only problem is that if we go back to the original setup of the issue, “many borrowers unable to repay their loans”, there is a much more direct measurement available. How about actually checking whether borrowers are failing to repay their loans (aka being delinquent)?

The Brookings report does not analyze loan delinquency at all – the word “default” appears only three times: once referring to home mortgages and twice referring to interest rates (the word “delinquent” never appears). What do actual delinquency rates show us?

It turns out that we can go to the same source of data and find out. Here is the New York Fed report from late 2013:


D’oh! It turns out that real borrowers with real tax brackets paying off real loans are having real problems. The percentage at least 90 days delinquent has more than doubled in just the past decade. In fact, based on another Federal Reserve report, the problem is even bigger going forward: “44% of borrowers are not yet in repayment, and excluding those, the effective 90+ delinquency rate rises to more than 30%”.

More than 30% of borrowers who should be paying off their loans are at least 90 days delinquent? It seems someone didn’t tell them that their payment-to-income ratios (at least for their mythical average friends) are just fine and that they’re “no worse off”.
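For what it’s worth, the “more than 30%” effective rate can be reproduced with simple arithmetic. In the sketch below, the 44% share not yet in repayment comes from the Fed report; the 17% headline delinquency rate is an assumed round number chosen to be consistent with the quoted result:

```python
overall_delinquent = 0.17   # share of ALL borrowers 90+ days delinquent (assumed)
not_in_repayment = 0.44     # share of borrowers not yet required to pay (per the Fed report)

# Delinquency can only occur among borrowers actually in repayment,
# so rescale the headline rate to that subpopulation.
effective = overall_delinquent / (1 - not_in_repayment)   # ≈ 0.30, i.e. "more than 30%"
```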

Back to the Brookings report:

This new evidence suggests that broad-based policies aimed at all student borrowers, either past or current, are likely to be unnecessary and wasteful given the lack of evidence of widespread financial hardship. At the same time, as students take on more debt to go to college, they are taking on more risk. Consequently, policy efforts should focus on refining safety nets that mitigate risk without creating perverse incentives.

Despite the flawed analysis that changed terms, changed key measures, and failed to look at any data on delinquencies, Brookings now calls out a “lack of evidence of widespread financial hardship”. How can we take their recommendations seriously when the supporting analysis is fundamentally illogical?

Surely the respectable news organizations will do basic checking of the report before parroting such flawed analysis…

The worries are exaggerated: Only 7% of young adults with student debt have $50,000 or more.

— David Leonhardt (@DLeonhardt) June 24, 2014

ICYMI=>The Student Debt Crisis Is Being Manufactured To Justify Debt Forgiveness #tcot #taxes

— Jeffrey Dorfman (@DorfmanJeffrey) July 5, 2014


  1. Also note that we’re skipping the years with the highest growth in student debt.
  2. This argument also ignores or trivializes the issue that grad students are indeed students.
  3. There is no other way to get to the 2.4 year payoff.
  4. And yet another change – from average to median to typical.

The post To see how illogical the Brookings Institution report on student loans is, just read the executive summary appeared first on e-Literate.

Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

Pete Finnigan - Tue, 2014-07-29 14:35

My favourite language is hard to pinpoint; is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

Posted by Pete On 23/07/14 At 08:44 PM

Categories: Security Blogs

Integrating PFCLScan and Creating SQL Reports

Pete Finnigan - Tue, 2014-07-29 14:35

We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]

Posted by Pete On 25/06/14 At 09:41 AM

Categories: Security Blogs

Automatically Add License Protection and Obfuscation to PL/SQL

Pete Finnigan - Tue, 2014-07-29 14:35

Yesterday we released the new version 2.0 of our product PFCLObfuscate . This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation and now in version 2.0 we....[Read More]

Posted by Pete On 17/04/14 At 03:56 PM

Categories: Security Blogs

Twitter Oracle Security Open Chat Thursday 6th March

Pete Finnigan - Tue, 2014-07-29 14:35

I will be co-chairing/hosting a twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here . The chat is done over twitter so it is a little like the Oracle security round table sessions....[Read More]

Posted by Pete On 05/03/14 At 10:17 AM

Categories: Security Blogs

PFCLScan Reseller Program

Pete Finnigan - Tue, 2014-07-29 14:35

We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled " PFCLScan Reseller Program ". If....[Read More]

Posted by Pete On 29/10/13 At 01:05 PM

Categories: Security Blogs

PFCLScan Version 1.3 Released

Pete Finnigan - Tue, 2014-07-29 14:35

We released version 1.3 of PFCLScan, our enterprise database security scanner for Oracle, a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]

Posted by Pete On 18/10/13 At 02:36 PM

Categories: Security Blogs

PFCLScan Updated and Powerful features

Pete Finnigan - Tue, 2014-07-29 14:35

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, some new content and more. We are working to release another service update also in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Tue, 2014-07-29 14:35

It has been a few weeks since my last blog post, but don't worry, I am still interested in blogging about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

Solid Conference San Francisco 2014: Complete Video Compilation

Surachart Opun - Tue, 2014-07-29 08:17
Solid Conference focuses on the intersection of software and hardware. It's a great community for both, where audiences can learn new ideas for combining software and hardware, gathered from engineers, researchers, roboticists, artists, startup founders, and innovators.
O'Reilly has launched HD videos for this conference (Solid Conference San Francisco 2014: Complete Video Compilation: Experience the revolution at the intersection of hardware and software, and imagine the future). The video files can be huge and may take a long time to download, so consider using a download manager.
After watching, I was excited to learn new things (run time: 36 hours 8 minutes): machines, devices, components, etc.

Written By: Surachart Opun
Categories: DBA Blogs

The Nature of Digital Disruption

WebCenter Team - Tue, 2014-07-29 08:10
by Dave Gray, Entrepreneur, Author & Consultant

Digital Disruption – The change that occurs when new digital technologies and business models affect the value proposition of existing goods and services or bring to market an entirely new innovation.

Why is the shift to digital so disruptive?

As a global society, we are currently in the process of digitizing everything. We are wrapping our physical world with a digital counterpart, a world of information, which parallels and reflects our own. We want to know everything we can think of about everything we can think of.

This whirl of digital information changes the playing field for businesses, because digital information does not abide by any of the rules that we are used to in business. 

In a digital world, products and services have no physical substance. There are no distribution costs. A single prototype can generate an infinite number of copies at no cost. And since the products and services are so different, the environment around them becomes unstable; as the digital layer interacts with the physical layer, everything in the ecosystem is up for grabs. Suddenly new products become possible and established ones become obsolete overnight.

Science-fiction writer Arthur C. Clarke once said that “Any sufficiently advanced technology is indistinguishable from magic.”

In the business world today, you are competing with sorcerers. You need to learn magic.

Let’s take the music industry as an example of how technology changes the playing field. Music used to be very expensive to record and distribute. Every time a new technology comes along, the music industry has had to adjust.

The graph on the left shows units sold in the music industry, by media, since 1973. See the overlapping curves? Each technology has a lifecycle – early in the lifecycle sales are low, but they rise as more people adopt the technology. When a new technology comes along the older technologies suffer. But not to worry, people still need their music, right? Typically the lifecycle curve for “units sold” closely echoes the revenue curve.

But when the product becomes purely digital – when it enters the realm of magic – the cost of making and distributing the product plummets to nearly zero. This means more people can produce and distribute music, more cheaply and easily. More music becomes available to the public and purchases skyrocket – but the price per unit drops precipitously.

Take a look at the two graphs below. The left chart is units sold and the right one is revenue. Note how digital downloads (units sold) have skyrocketed, while the revenue curve is the smallest in years. 

The core issue is that even though unit sales rise rapidly, the price per unit drops so much faster that the revenue from sales fails to make up the difference. The industrial-age company, which has built its business model on the high costs of producing and distributing physical products, now has a high-cost infrastructure which is suddenly obsolete. What was once an asset is now a critical liability. This opens the entire industry to new players who can offer services to this new world at a dramatically lower cost.

The product is now digital. So the album, which you once charged $15 for, now retails for about $10. Ouch. You just lost a third of your revenue. But it gets worse. In the old days you sold music by the album, because the cost to make and distribute singles on CD kept their price relatively high. So people bought albums containing a lot of songs that, it now appears, they didn’t really want. The chart below compares the typical mix between album and single sales on CD vs. downloads. The product mix has flipped completely, from most people buying albums for $15 to most people buying songs for $1.

So the revenue per unit drops once again. Even with some people buying albums, the average revenue per unit is about $1.50. That means your entire industry has lost about 90% of its revenue, almost overnight.
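The product-mix arithmetic above can be sketched in a few lines; the mix percentages here are illustrative assumptions, not figures from the post:

```python
album_price, single_price = 15.00, 1.00

# Old world: nearly everyone buys a $15 album on CD.
old_avg = 0.95 * album_price + 0.05 * single_price   # ≈ $14.30 per unit

# Digital world: mostly $1 singles, with some $10 album downloads.
new_avg = 0.94 * single_price + 0.06 * 10.00         # ≈ $1.54 per unit

revenue_drop = 1 - new_avg / old_avg                 # ≈ 0.89, i.e. roughly 90%
```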

In the world of manufacturing we talk about efficiency and productivity. You look to efficiency to decrease your costs and productivity to increase your revenue. In between you seek to make a profit. But you can’t streamline yourself to profits when the world is changing around you so profoundly. You need different strategies, different tactics.

The digital revolution is the biggest shift in the music industry since the 1920’s, when phonograph records replaced sheet music as the industry’s profit center.

What’s going on here? First, the means of making and distributing the product change. Suddenly the costs are so low that thousands of new competitors enter the market. Every artist can now compete with you from his or her garage, bringing new meaning to the word “garage band.”

But as if that weren’t bad enough, this also changes the things that people buy and the way they buy them. It’s a cascading effect.

So who wins and how do they win? Let’s look at Apple’s iTunes strategy. Apple looked at the entire industry as an ecosystem – people buy music and they play it on a device. If they like the experience they buy more music. In time they might buy another device, and so on, and so on. This is not a business process, it’s a business cycle.

Sony had everything that Apple had – in fact, much more. They had a powerful music-player brand, the Walkman, the established industry leader for portable music players. They had more engineers. They had a music division with 21 record labels. 

Sony’s divisions, which worked in their favor for efficiency and productivity, worked against them when it came to collaboration and innovation. The company was divided into separate operating units which competed with each other internally, making it difficult to collaborate on projects that spanned across multiple units. Sony was a classic industrial-age company, focused on productivity and efficiency.

What did Apple do that Sony didn’t? They focused on the system, not the product.

If you want to record your own music, Apple makes the software for that. If you want to sell your music, you can sell it on iTunes. If you want to play it, Apple makes the device. In case you hadn’t noticed, Apple had to look at the entire ecosystem of the record industry through a new, digital lens, including:

  1. Understand the digital infrastructure and how it changed the playing field.
  2. Relentless focus on user experience – simplicity, “just works” design, delight customers.
  3. Smart partnerships: Apple began by giving away the money: Record companies made 70 cents on every 99 cent purchase, with the rest split between artists and merchandising costs.
  4. Interoperability: Apple chose to support an open format that would work with any player, while Sony chose a proprietary format for their first digital media player.

In short: 

Think creatively. Understand, provide for, and support the entire ecosystem. Fill in the gaps when you can. Eliminate middlemen if you can – partner with them if you must. Partner with value providers (like artists and record companies that own large repositories of music). Be fearless about cannibalizing your own core business – if you’re not doing it, somebody else is.

The core difference is between an industrial, manufacturing-based model which focuses on efficiency and productivity – making more widgets more efficiently, and an information-based model which focuses on creativity and innovation. The industrial model thrives on successful planning and logistics, while the information model thrives on systems thinking, rapid learning and adaptation to a changing environment.

What can you do? As a company, you will need to innovate differently. That’s the subject of my next post, which we will discuss next week.  

In the meantime, you can hear more from Dave on Digital Disruption in our Digital Business Thought Leaders webcast "The Digital Experience: A Connected Company’s Sixth Sense". 

Create Windows Service for Oracle RAC

Pythian Group - Tue, 2014-07-29 08:08

It’s my first time on RAC system for Windows and I’m happy to learn something new to share.

I created a new service for a database (restoredb), only to find out that the ORACLE_HOME for the service is "c:\oracle\product\10.2.0\asm_1"

Any ideas as to what was wrong?

C:\dba_pythian>set oracle_home=C:\oracle\product\10.2.0\db_1

C:\dba_pythian>echo %ORACLE_HOME%
C:\oracle\product\10.2.0\db_1

C:\dba_pythian>oradim -NEW -SID restoredb -STARTMODE manual
Instance created.

 1 STOPPED agent11g1Agent                                    c:\oracle\app\11.1.0\agent11g
 2 STOPPED agent11g1AgentSNMPPeerEncapsulator                c:\oracle\app\11.1.0\agent11g\bin\encsvc.exe
 3 STOPPED agent11g1AgentSNMPPeerMasterAgent                 c:\oracle\app\11.1.0\agent11g\bin\agntsvc.exe
 4 RUNNING +ASM1                                             c:\oracle\product\10.2.0\asm_1
 5 RUNNING ClusterVolumeService                              C:\oracle\product\10.2.0\crs
 6 RUNNING CRS                                               C:\oracle\product\10.2.0\crs
 7 RUNNING CSS                                               C:\oracle\product\10.2.0\crs
 8 RUNNING EVM                                               C:\oracle\product\10.2.0\crs
 9 STOPPED JobSchedulerDWH1                                  c:\oracle\product\10.2.0\db_1
10 STOPPED JobSchedulerRMP1                                  c:\oracle\product\10.2.0\db_1
11 RUNNING OraASM10g_home1TNSListenerLISTENER_PRD-DB-10G-01  C:\oracle\product\10.2.0\asm_1
12 STOPPED OraDb10g_home1TNSListener                         c:\oracle\product\10.2.0\db_1
13 STOPPED ProcessManager                                    "C:\oracle\product\10.2.0\crs"
14 RUNNING DWH1                                              c:\oracle\product\10.2.0\db_1
15 RUNNING RMP1                                              c:\oracle\product\10.2.0\db_1
16 RUNNING agent12c1Agent                                    C:\agent12c\core\
17 RUNNING restoredb                                         c:\oracle\product\10.2.0\asm_1
18 STOPPED JobSchedulerrestoredb                             c:\oracle\product\10.2.0\asm_1

Checking the PATH variable shows that the ASM home is listed before the DB home, which is why oradim was picked up from the ASM home.
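The underlying behavior can be sketched in a few lines: Windows resolves an unqualified command by walking PATH in order and taking the first directory that contains the executable, so an ASM home listed before the DB home wins. A minimal illustration (the function name is my own, not an Oracle tool):

```python
import os

def first_on_path(exe, path, pathsep=";"):
    """Return the first PATH directory containing exe, mimicking Windows command resolution."""
    for d in path.split(pathsep):
        if os.path.exists(os.path.join(d, exe)):
            return d
    return None

# If the ASM home precedes the DB home on PATH, its oradim.exe is the one that runs.
```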


Create the database service, specifying the full path to oradim in the DB home:

C:\dba_pythian>oradim -DELETE -SID restoredb
Instance deleted.

 1 STOPPED agent11g1Agent                                    c:\oracle\app\11.1.0\agent11g
 2 STOPPED agent11g1AgentSNMPPeerEncapsulator                c:\oracle\app\11.1.0\agent11g\bin\encsvc.exe
 3 STOPPED agent11g1AgentSNMPPeerMasterAgent                 c:\oracle\app\11.1.0\agent11g\bin\agntsvc.exe
 4 RUNNING +ASM1                                             c:\oracle\product\10.2.0\asm_1
 5 RUNNING ClusterVolumeService                              C:\oracle\product\10.2.0\crs
 6 RUNNING CRS                                               C:\oracle\product\10.2.0\crs
 7 RUNNING CSS                                               C:\oracle\product\10.2.0\crs
 8 RUNNING EVM                                               C:\oracle\product\10.2.0\crs
 9 STOPPED JobSchedulerDWH1                                  c:\oracle\product\10.2.0\db_1
10 STOPPED JobSchedulerRMP1                                  c:\oracle\product\10.2.0\db_1
11 RUNNING OraASM10g_home1TNSListenerLISTENER_PRD-DB-10G-01  C:\oracle\product\10.2.0\asm_1
12 STOPPED OraDb10g_home1TNSListener                         c:\oracle\product\10.2.0\db_1
13 STOPPED ProcessManager                                    "C:\oracle\product\10.2.0\crs"
14 RUNNING DWH1                                              c:\oracle\product\10.2.0\db_1
15 RUNNING RMP1                                              c:\oracle\product\10.2.0\db_1
16 RUNNING agent12c1Agent                                    C:\agent12c\core\

C:\dba_pythian>dir C:\oracle\product\10.2.0\db_1\BIN\orad*
 Volume in drive C has no label.
 Volume Serial Number is D4FE-B3A8

 Directory of C:\oracle\product\10.2.0\db_1\BIN

07/08/2010  10:01 AM           121,344 oradbcfg10.dll
07/20/2010  05:20 PM             5,120 oradim.exe
07/20/2010  05:20 PM             3,072 oradmop10.dll
               3 File(s)        129,536 bytes
               0 Dir(s)  41,849,450,496 bytes free

C:\dba_pythian>C:\oracle\product\10.2.0\db_1\BIN\oradim.exe -NEW -SID restoredb -STARTMODE manual
Instance created.

 1 STOPPED agent11g1Agent                                    c:\oracle\app\11.1.0\agent11g
 2 STOPPED agent11g1AgentSNMPPeerEncapsulator                c:\oracle\app\11.1.0\agent11g\bin\encsvc.exe
 3 STOPPED agent11g1AgentSNMPPeerMasterAgent                 c:\oracle\app\11.1.0\agent11g\bin\agntsvc.exe
 4 RUNNING +ASM1                                             c:\oracle\product\10.2.0\asm_1
 5 RUNNING ClusterVolumeService                              C:\oracle\product\10.2.0\crs
 6 RUNNING CRS                                               C:\oracle\product\10.2.0\crs
 7 RUNNING CSS                                               C:\oracle\product\10.2.0\crs
 8 RUNNING EVM                                               C:\oracle\product\10.2.0\crs
 9 STOPPED JobSchedulerDWH1                                  c:\oracle\product\10.2.0\db_1
10 STOPPED JobSchedulerRMP1                                  c:\oracle\product\10.2.0\db_1
11 RUNNING OraASM10g_home1TNSListenerLISTENER_PRD-DB-10G-01  C:\oracle\product\10.2.0\asm_1
12 STOPPED OraDb10g_home1TNSListener                         c:\oracle\product\10.2.0\db_1
13 STOPPED ProcessManager                                    "C:\oracle\product\10.2.0\crs"
14 RUNNING DWH1                                              c:\oracle\product\10.2.0\db_1
15 RUNNING RMP1                                              c:\oracle\product\10.2.0\db_1
16 RUNNING agent12c1Agent                                    C:\agent12c\core\
17 RUNNING restoredb                                         c:\oracle\product\10.2.0\db_1
18 STOPPED JobSchedulerrestoredb                             c:\oracle\product\10.2.0\db_1

Categories: DBA Blogs

How SQL Server Browser Service Works

Pythian Group - Tue, 2014-07-29 08:07

Some of you may wonder what role the SQL Browser service plays in a SQL Server instance. In this blog post, I'll provide an overview of how the SQL Server Browser plays a crucial role in connectivity, and we'll look at its internals by capturing network monitor output during connection attempts under different scenarios.

Here is an executive summary of the connectivity flow:


Here is another diagram to explain the SQL Server connectivity status for Named & Default instance under various scenarios:


Network Monitor output for connectivity to Named instance when SQL Browser is running:

In the diagram below, we can see that a UDP request on port 1434 was sent from the local machine (client) to the SQL Server machine (server), and the response came from the server's UDP port 1434 back to the client's port with the list of instances and the ports on which they are listening:



Network Monitor output for connectivity to Named instance when SQL Browser is stopped/disabled:

We can see that the client sends 5 requests, none of which get a response from UDP 1434 on the server, so connectivity to the named instance is never established.



Network Monitor output for connectivity to Named instance with port number specified in connection string & SQL Browser is stopped/disabled:

There is no call made to the server's 1434 port over UDP; instead, the connection is made directly to the TCP port specified in the connection string.
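For illustration, such a connection string might look like the following ADO.NET-style example (server name, instance name, and port are placeholders):

```
Server=MYSERVER\INST1,50123;Database=master;Integrated Security=SSPI;
```

With an explicit port after the comma, the client never needs the UDP 1434 lookup, which is why the Browser service can remain stopped.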

Network Monitor output for connectivity to Default instance when SQL Browser is running:

We can see that no calls were made to the server's UDP port 1434, on which the SQL Server Browser listens.



Network Monitor output for connectivity to Default instance configured to listen on a port other than the default 1433, with SQL Browser running:

We can see that connectivity failed after multiple attempts, because the client assumes that a default instance of SQL Server always listens on TCP port 1433.

You can refer to the blog below to see some workarounds for handling this situation:

References:

SQL Server Browser Service -

Ports used by SQL Server and Browser Service -

SQL Server Resolution Protocol Specification -

Thanks for reading!


Categories: DBA Blogs

Oracle Database – Turning OFF the In-Memory Database option

Marco Gralike - Tue, 2014-07-29 07:03
So how do you turn the option off/disable it? As a privileged database user: > Just don’t set the INMEMORY_SIZE parameter to a non-zero value…(the default...

Read More

Beta1 of the UnifiedPush Server 1.0.0 released

Matthias Wessendorf - Tue, 2014-07-29 06:47

Today we are announcing the first beta release of our 1.0.0 version. After the big overhaul with the last release, including a brand new AdminUI, this release contains several enhancements:

  • iOS8 interactive notification support
  • increased APNs payload (2k)
  • Pagination for analytics
  • improved callback for details on actual push delivery
  • optimisations and improvements

The complete list of included items is available on our JIRA instance.

iOS8 interactive notifications

Besides the work on the server, we have updated our Java and Node.js sender libraries to support the new iOS8 interactive notification message format.

If you’re curious about iOS8 notifications, Corinne Krych has a detailed blog post on them and how to use them with the AeroGear UnifiedPush Server.

Swift support for iOS

On the iOS client side, Corinne Krych and Christos Vasilakis were also busy starting some Swift work: our iOS registration SDK supports Swift on this branch. To give you an idea of how it looks, here is some code:

func application(application: UIApplication!, didRegisterForRemoteNotificationsWithDeviceToken deviceToken: NSData!) {
    // setup registration
    let registration =
        AGDeviceRegistration(serverURL: NSURL(string: "<# URL of the running AeroGear UnifiedPush Server #>"))

    // attempt to register
    registration.registerWithClientInfo({ (clientInfo: AGClientDeviceInformation!) in
        // setup configuration
        clientInfo.deviceToken = deviceToken
        clientInfo.variantID = "<# Variant Id #>"
        clientInfo.variantSecret = "<# Variant Secret #>"

        // apply the token, to identify THIS device
        let currentDevice = UIDevice()

        // --optional config--
        // set some 'useful' hardware information params
        clientInfo.operatingSystem = currentDevice.systemName
        clientInfo.osVersion = currentDevice.systemVersion
        clientInfo.deviceType = currentDevice.model
    },
    success: {
        println("UnifiedPush Server registration succeeded")
    },
    failure: { (error: NSError!) in
        println("failed to register, error: \(error.description)")
    })
}
To get started easily with the UnifiedPush Server, we have a bunch of demos supporting various client platforms:

  • Android
  • Apache Cordova (with jQuery and Angular/Ionic)
  • iOS

The simple HelloWorld examples are located here. Some more advanced examples, including a Picketlink secured JAX-RS application, as well as a Fabric8 based Proxy, are available here.

For those of you who are into Swift, there are Swift branches for these demos as well:


We hope you enjoy the bits and we appreciate your feedback! Swing by our mailing list! We are looking forward to hearing from you!

SQL Saturday in Paris on 12-13 September

Yann Neuhaus - Mon, 2014-07-28 22:35

As you certainly know, SQL Saturday events are very popular in the SQL Server world community. This is the second time the event takes place in Paris (France), but this time we have a new format, with pre-conferences on Friday and classic sessions on Saturday. During a pre-conference, we will talk about a particular subject for a whole day.

This time, I have the opportunity to participate twice by giving two sessions (in French) with the following program:

  • Friday: Inside the SQL Server storage and backups

If you are interested in how the SQL Server storage works and how to deal with corruption as well as backups, this session might be interesting for you.

Be careful: the pre-conference sessions on Friday are fee-paying (but not that expensive). You can still register at this address.

  • Saturday: SQL Server AlwaysOn deep dive

SQL Server AlwaysOn is a great new high-availability and disaster recovery feature provided by Microsoft. Come take a look at this session if you are concerned with questions like:

  • How to configure my Windows failover cluster and quorum in my situation?
  • What exactly is a read-only secondary replica?
  • What are the built-in tools provided by Microsoft to monitor and troubleshoot this infrastructure?

Good news: the sessions on Saturday are free!

Take a look at the agenda if you want to attend other interesting sessions. I hope there will be many attendees!

Query with new plan

Bobby Durrett's DBA Blog - Mon, 2014-07-28 18:24

I came up with a simple query that shows a running SQL executing a different plan than what it had in the past.  Here is the query:

-- show currently executing sqls that have history
-- but who have never run with the current plan
-- joins v$session to v$sql to get plan_hash_value of 
-- executing sql.
-- queries dba_hist_sqlstat for previously used values 
-- of plan_hash_value.
-- only reports queries that have an older plan that is 
-- different from the new one.

select vs.sid,
vs.sql_id,
vs.SQL_CHILD_NUMBER,
sq.plan_hash_value
from
v$session vs,
v$sql sq
where
vs.sql_id=sq.sql_id and
vs.SQL_CHILD_NUMBER=sq.child_number and
sq.plan_hash_value not in 
(select ss.plan_hash_value
from dba_hist_sqlstat ss
where ss.sql_id=sq.sql_id) and 
0 < 
(select count(ss.plan_hash_value)
from dba_hist_sqlstat ss
where ss.sql_id=sq.sql_id);

Example output:

       SID SQL_ID        CHILD_NUMBER PLAN_HASH_VALUE
---------- ------------- ------------ ---------------
       229 cq8bhsxbbf9k7            0      3467505462

This was a test query.  I ran it a bunch of times with an index and then dropped the index after creating an AWR snapshot.  The query executed with a different plan when I ran it without the index.  The same type of plan change could happen in production if an index were accidentally dropped.

I’m hoping to use this query to show production queries that have run in the past but whose current plan differs from any that they have used before.  Of course, a new plan doesn’t necessarily mean you have a problem but it might be helpful to recognize those plans that are new and that differ from the plans used in the past.
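The rule the query encodes is easy to state: flag a sql_id only if it has AWR history and its current plan hash appears nowhere in that history. Here is a minimal Python sketch of that rule (illustrative only; the sample sql_ids and plan hashes are made up, apart from the one from the test output above):

```python
def new_plan_sqls(executing, history):
    """Return sql_ids currently running with a plan hash absent from their history.

    executing: dict mapping sql_id -> current plan_hash_value
    history:   dict mapping sql_id -> set of historical plan_hash_values (from AWR)
    """
    return sorted(
        sql_id
        for sql_id, plan in executing.items()
        # only report queries that have prior history (the "0 <" subquery) ...
        if history.get(sql_id)
        # ... and whose current plan is not among the historical ones (the "not in" subquery)
        and plan not in history[sql_id]
    )

# made-up sample data: the first query picked up a new plan, the second did not
executing = {"cq8bhsxbbf9k7": 3467505462, "0a1b2c3d4e5f6": 1111111111}
history = {"cq8bhsxbbf9k7": {2218588013}, "0a1b2c3d4e5f6": {1111111111}}
print(new_plan_sqls(executing, history))  # only the changed-plan sql_id is reported
```

Note that a query with no AWR history at all is deliberately not reported, matching the "0 <" count condition in the SQL.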

- Bobby


Categories: DBA Blogs

Impugn My Character Over Technical Points–But You Should Probably Be Correct When You Do So. Oracle 12c In-Memory Feature Snare? You Be The Judge ‘Cause Here’s The Proof.

Kevin Closson - Mon, 2014-07-28 18:04
Executive Summary

This blog post offers proof that you can trigger In-Memory Column Store feature usage with the default INMEMORY_* parameter settings. These parameters are documented as the approach to ensure In-Memory functionality is not used inadvertently–or at least they are documented as the “enabling” parameters.

Index of Related Posts

This is part 4 in a series: Part I, Part II, Part III, Part IV.

What Really Matters?

This is a post about enabling versus using the Oracle Database 12c Release In-Memory Column Store feature, which is part of the separately licensed Database In-Memory Option of 12c. While reading this, please be mindful that in this situation all that really matters is what actions on your part affect the internal tables that track feature usage.

Make Software, Not Enemies–And Certainly Not War

There is a huge kerfuffle regarding the separately licensed In-Memory Column Store feature in Oracle Database 12c Release–specifically how the feature is enabled and what triggers charges for usage of the feature.

I pointed out a) the fact that the feature is enabled by default and b) the feature is easily accidentally used. I did that in Part I and Part II in my series on the matter. In Part III I shared how the issue has led to industry journalists quoting–and then removing–said quotes. I’ve endured an ungodly amount of shameful backlash from friends on the Oaktable Network list as they asserted I was making a mole hill out of something that was a total lark (that was a euphemistic way of saying they all but accused me of misleading my readers). I even had friends suggesting this is a friendship-ending issue. Emotion and high technology mix like oil and water.

About the only thing that hasn’t happened is for anyone to apologize for being totally wrong in their blind-faith rooted feelings about this issue. What did he say? Please read on.

From the start I pointed out that the INMEMORY_QUERY feature is enabled by default–and that it is conceivable that someone could use it accidentally. The backlash from that was along the lines of how many parameters and what user actions are needed for that to be a reality. Maria Colgan–who is Oracle’s PM for the In-Memory Column Store feature–tweeted that I’m confusing people when announcing her blog post on the fact that In-Memory Column Store usage is controlled not by INMEMORY_QUERY but instead by INMEMORY_SIZE. Allow me to add special emphasis to this point. In a blog post, Oracle’s PM for this Oracle database feature explicitly states that INMEMORY_SIZE must be changed from the default to use the feature.

If I were to show you everyone else was wrong and I was right, would you think less of me? Please, don’t let it make you feel less of them. We’re just people trying to wade through the confusion.

The Truth On The Matter

Here is the truth and I’ll prove it in a screen shot to follow:

  1. INMEMORY_QUERY is enabled by default. If it is set, you can trigger feature usage–full stop.
  2. INMEMORY_SIZE is zero by default. Remember, this is the supposedly ueber-powerful setting that precludes usage of the feature–not, in fact, the top-level-sounding INMEMORY_QUERY parameter. As such, this should be the parameter that would prevent you from paying for usage of the feature.

In the following screenshot I’ll show that INMEMORY_QUERY is at the default setting of ENABLE and INMEMORY_SIZE is at the default setting of zero. I prove first there is no prior feature usage. I then issue a CREATE TABLE statement specifying INMEMORY. Remember, the feature-blocking INMEMORY_SIZE parameter is zero. If “they” are right, I shouldn’t be able to trigger In-Memory Column Store feature usage, right? Observe–or better yet, try this in your own lab:


So ENABLED Means ENABLED? Really? Imagine That.

So I proved my point, which is that any instance with the default initialization parameters can trigger feature usage. I also proved that the words in the following three screenshots are factually incorrect:


Screenshot of blog post on


Screenshot of email to Oracle-L Email list:






I didn’t want to make a mole hill of this one. It’s just a bug. I don’t expect apologies. That would be too human–almost as human as being completely wrong while wrongly clinging to one’s wrongness because others are equally, well, wrong on the matter.


Sundry References

 Print out of Maria’s post on and link to same: Getting started with Oracle Database In-Memory Part I



Filed under: oracle

Early Review of Google Classroom

Michael Feldstein - Mon, 2014-07-28 16:36

Meg Tufano is co-Founder of SynaptIQ+ (think tank for social era knowledge) and leader of McDermott MultiMedia Group (an education consulting group focused on Google Apps EDU). We have been checking out Google Classroom – with her as the teacher and me as the student. I include some of Meg’s bio here as it is worth noting her extensive experience designing and teaching online courses for more than a decade.

Meg posted a Google Slides review of her initial experiences using Google Classroom from a teacher’s perspective, which I am sharing below with minimal commentary. The review includes annotated slides showing the various features and Meg’s comments.

I have not done as much work to show the student view, but I will note the following:

  • The student view does not include the link to the Chrome Store that Meg finds to be too confusing.
  • The biggest challenge I’ve had so far is managing my multiple Google accounts (you have to be logged into your Google Apps for Edu account as your primary Google account to enter Classroom, which is not that intuitive for students).
  • I wonder if Google will continue to use Google tools so prominently in Classroom (primarily GDrive, YouTube, GDocs) or if the full release will make it easier to embed non-Google tools.
  • I have previously written “Why Google Classroom won’t affect institutional LMS market … yet”, and after initial testing, nothing has changed my opinion.
  • I have one other post linking to video-based reviews of Google Classroom here.

The post Early Review of Google Classroom appeared first on e-Literate.

auto-generate SQLAlchemy models

Catherine Devlin - Mon, 2014-07-28 15:30

PyOhio gave my lightning talk on ddlgenerator a warm reception, Brandon Lorenz got me thinking, and the PyOhio sprints filled me with py-drenaline, and now ddlgenerator can inspect your data and spit out SQLAlchemy model definitions for you:

$ cat merovingians.yaml
- name: Clovis I
  from: 486
  to: 511
- name: Childebert I
  from: 511
  to: 558
$ ddlgenerator --inserts sqlalchemy merovingians.yaml

from sqlalchemy import create_engine, Column, Integer, MetaData, Table, Unicode
engine = create_engine(r'sqlite:///:memory:')
metadata = MetaData(bind=engine)

merovingians = Table('merovingians', metadata,
Column('name', Unicode(length=12), nullable=False),
Column('reign_from', Integer(), nullable=False),
Column('reign_to', Integer(), nullable=False))
metadata.create_all()

conn = engine.connect()
inserter = merovingians.insert()
conn.execute(inserter, **{'name': 'Clovis I', 'reign_from': 486, 'reign_to': 511})
conn.execute(inserter, **{'name': 'Childebert I', 'reign_from': 511, 'reign_to': 558})
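If you want to try the generated model on a recent SQLAlchemy, note that the bind-style MetaData above targets the older API; here is a sketch of the equivalent model and inserts against the current (1.4/2.0-style) API, ending with a sanity-check query:

```python
from sqlalchemy import Column, Integer, MetaData, Table, Unicode, create_engine, select

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()  # newer MetaData no longer takes bind=

merovingians = Table(
    "merovingians", metadata,
    Column("name", Unicode(12), nullable=False),
    Column("reign_from", Integer, nullable=False),
    Column("reign_to", Integer, nullable=False),
)
metadata.create_all(engine)

# engine.begin() opens a transaction and commits it on exit
with engine.begin() as conn:
    conn.execute(merovingians.insert(), [
        {"name": "Clovis I", "reign_from": 486, "reign_to": 511},
        {"name": "Childebert I", "reign_from": 511, "reign_to": 558},
    ])

with engine.connect() as conn:
    rows = conn.execute(
        select(merovingians.c.name).order_by(merovingians.c.reign_from)
    ).scalars().all()
print(rows)
```

The executemany-style insert (a list of dicts) replaces the per-row `conn.execute(inserter, **{...})` calls in the generated code.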

Brandon's working on a pull request to provide similar functionality for Django models!