
Feed aggregator

Point-of-sale developers find themselves targets for cyberattacks [VIDEO]

Chris Foot - 5 hours 2 min ago

Transcript

Hi, welcome to RDX! Banks, retailers and other organizations that use point-of-sale and payment software developed by Charge Anywhere should take extra precautions to ensure their databases are protected.

The software developer recently announced that it sustained a breach that may have compromised data that was produced as far back as 2009. This event reaffirms cybersecurity experts’ assertions that cybercriminals are targeting companies that provide payment software, as opposed to simply attacking merchants.

While it’s up to Charge Anywhere and other such enterprises to patch any bugs in their software, those using these programs should ensure their point-of-sale databases containing payment card info are strictly monitored. Informing DBAs on how to better manage their access credentials is another necessary step.

Thanks for watching!

The post Point-of-sale developers find themselves targets for cyberattacks [VIDEO] appeared first on Remote DBA Experts.

OLTP type 64 compression and ‘enq: TX – allocate ITL entry’ on Exadata

Pythian Group - 5 hours 11 min ago

Recently we saw a strange problem with deadlocks at a client database on Exadata, Oracle version 11.2.0.4. Wait event analysis showed that sessions were waiting on the “enq: TX – allocate ITL entry” event. This was strange because at most two sessions were running DML and at least two ITL slots were available in the affected tables’ blocks. I made some block dumps and found that the affected blocks contained OLTP-compressed data, Compression Type = 64 (DBMS_COMPRESSION Constants – Compression Types). The table does have the “compress for query high” attribute, but direct path inserts have never been used, so I was not expecting any compressed data here. Compression Type 64 is a very specific type: Oracle migrates data out of HCC compression units into Type 64 compression blocks when HCC-compressed data is updated. We ran some tests and were able to reproduce Type 64 compression without direct path operations. Here is one of the test cases. An MSSM tablespace was used, but the problem is reproducible with ASSM too.

create table z_tst(num number, rn number, name varchar2(200)) compress for query high partition by list(num)
(
partition p1 values(1),
partition p2 values(2));

Table created.

insert into z_tst select mod(rownum , 2) + 1, rownum, lpad('1',20,'a') from dual connect by level <= 2000;

2000 rows created.

commit;

Commit complete.

select dbms_compression.get_compression_type(user, 'Z_TST', rowid) comp, count(*)  cnt from Z_tst
group by dbms_compression.get_compression_type(user, 'Z_TST', rowid);

      COMP        CNT
---------- ----------
        64       2000

select  dbms_rowid.rowid_block_number(rowid) blockno, count(*) cnt from z_tst a
group by dbms_rowid.rowid_block_number(rowid);

   BLOCKNO        CNT
---------- ----------
      3586        321
      2561        679
      3585        679
      2562        321

select name, value from v$mystat a, v$statname b where a.statistic# = b.statistic# and lower(name) like '%compress%' and value != 0;

NAME                                                    VALUE
-------------------------------------------------- ----------
heap block compress                                        14
HSC OLTP Compressed Blocks                                  4
HSC Compressed Segment Block Changes                     2014
HSC OLTP Non Compressible Blocks                            2
HSC OLTP positive compression                              14
HSC OLTP inline compression                                14
EHCC Block Compressions                                     4
EHCC Attempted Block Compressions                          14

alter system dump datafile 16 block min 2561 block max 2561;

We can see that all rows are compressed with compression type 64. From the session statistics we can see that HCC was in place before the data was migrated into OLTP compressed blocks. I think this is not expected behavior and there should not be any compression involved at all. Let’s take a look at the block dump:

Block header dump:  0x04000a01
 Object id on Block? Y
 seg/obj: 0x6bfdc  csc: 0x06.f5ff8a1  itc: 2  flg: -  typ: 1 - DATA
     fsl: 0  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0055.018.0002cd54  0x00007641.5117.2f  --U-  679  fsc 0x0000.0f5ffb9a
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
bdba: 0x04000a01
data_block_dump,data header at 0x7fbb48919a5c
===============
tsiz: 0x1fa0
hsiz: 0x578
pbl: 0x7fbb48919a5c
     76543210
flag=-0----X-
ntab=2
nrow=680
frre=-1
fsbo=0x578
fseo=0x5b0
avsp=0x6
tosp=0x6
        r0_9ir2=0x1
        mec_kdbh9ir2=0x1
                      76543210
        shcf_kdbh9ir2=----------
                  76543210
        flag_9ir2=--R-LNOC      Archive compression: N
                fcls_9ir2[3]={ 0 32768 32768 }
                perm_9ir2[3]={ 0 2 1 }

It’s a bit odd that avsp (available space) and tosp (total space) = 6 bytes. So there is no free space in the block at all, whereas I would expect to see the default 10% pctfree here since it’s OLTP compression.
Let’s try to update two different rows in the same type 64 compressed block:

select rn from z_tst where DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) = 3586 and rownum <= 4;

        RN
----------
      1360
      1362
      1364
      1366
From the first session:
update z_tst set name = 'a' where rn = 1360;
From the second:
update z_tst set name = 'a' where rn = 1362;
-- waiting here

Second session waits on “enq: TX – allocate ITL entry” event.

Summary

  • In some cases HCC and subsequent OLTP type 64 compression can take place even without direct path operations (probably a bug).

  • An OLTP type 64 compressed block, in contrast to a regular OLTP compressed block, can have no free space left after the data load.

  • During DML operations, the whole type 64 compressed block gets locked (probably a bug).

  • It is better not to set HCC attributes on segments until a real HCC compression operation is performed; a quick way to spot such segments is sketched below.
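
As a quick, hedged illustration (not part of the original test case), the dictionary can tell you which segments carry a compression attribute, and the attribute can be removed without touching rows that are already stored:

select table_name, partition_name, compress_for
from   user_tab_partitions
where  compress_for is not null;

-- Only affects future loads and partition maintenance; existing blocks are left as they are.
alter table z_tst modify default attributes nocompress;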

 

Categories: DBA Blogs

The Database Protection Series Continues – Evaluating the Most Common Threats and Vulnerabilities – Part 1

Chris Foot - 6 hours 32 min ago
Introduction

This is the second article of a series that focuses on securing your database data stores.  In the introduction, I provide an overview of the database protection process and what will be discussed in future installments.   Before we begin the activities required to secure our databases, we need to have a firm understanding of the most common database vulnerabilities and threats.

This two-part article is not intended to be an all-inclusive listing of database vulnerabilities and threat vectors.   We’ll take a look at some of the more popular tactics used by hackers as well as some common database vulnerabilities.   As we cover the topics, I’ll provide you with some helpful hints along the way to decrease your exposure.  The list is not in any particular order.   In future articles, I’ll refer back to this original listing from time-to-time to ensure that we continue to address them.

Separation of Duties (Or Lack Thereof)

Every major industry regulation is going to have separation of duties as a compliance objective.  If your organization complies with SOX, you should be well aware of the separation of duties requirements.  In order for my organization to satisfy our PCI compliance objectives, we need to constantly ensure that no single person, or group, is totally in control of a security function.    Since we don’t store or process PCI data, we focus mainly on securing the architecture.  So, I’ll use my organization’s compliance activities to provide two quick examples:

  • We assign the responsibility of security control design to an internal RDX team and security control review to a third-party auditing firm.
  • Personnel assigned the responsibility of administering our internal systems, which include customer access auditing components, do not have privileges to access our customer’s systems.

The intent is to prevent conflicts of interest, intentional fraud, collusion or unintentional errors from increasing the vulnerability of our systems.  For smaller organizations, this can be a challenge as it introduces additional complexity into the administration processes and can lead to an increase in staff requirements.   The key is to review all support functions related to the security and access of your systems and prioritize them according to the vulnerability created by misuse.  Once the list is complete, you decompose the support activities into more granular activities and divide the responsibilities accordingly.

Unidentified Data Stores Containing Sensitive Data

It’s a pretty simple premise – you can’t protect a database that you don’t know about.  The larger the organization, the greater the chance that sensitive data is being stored and not protected.    Most major database manufacturers provide scanners that allow you to identify all of their product’s installations.  These are most often used during the dreaded licensing audits.    As part of our database security service offering, RDX uses McAfee’s Vulnerability Scanner to identify all databases installed on the client’s network.

Once you identify these “rogue” data stores, your next goal is to find out what’s in them.  This can be accomplished by asking the data owner to provide you with that information.  A better strategy is to purchase one of the numerous data analyzers available on the market.  The data analyzer executes sophisticated pattern matching algorithms to identify potentially sensitive data elements.   Because of the complex matching process that has to occur, there aren’t a lot of free offerings on the web you can take advantage of.   In our case, McAfee’s database scanner also includes the data identification feature.   It helps us to uncover the sensitive data elements that are hidden in our customers’ database data stores.
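
If budget is an issue, a very crude first pass can be done with nothing but the data dictionary. The query below is an illustrative sketch only; it is not part of any scanner mentioned above, and the column-name patterns are assumptions you would tailor to your own naming conventions:

select owner, table_name, column_name
from   dba_tab_columns
where  upper(column_name) like '%CARD%'
   or  upper(column_name) like '%SSN%'
   or  upper(column_name) like '%SALARY%'
order  by owner, table_name;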

Clones from Production

Application developers have a particular affinity for wanting real world data to test with.  Can you blame them?    There’s a myriad of data variations they need to contend with.   Cloning live environments allows them to focus more on writing and testing their code and less on the mindless generation of information that attempts to mimic production data stores.

Cloning creates a whole host of vulnerabilities.   The cloning process creates a duplicate of the production environment and it needs to be secured accordingly.   In addition, application developers shouldn’t have access to sensitive production data.  They’ll need access to the cloned systems to perform their work.   The first step is to identify the sensitive elements and then create a strategy to secure them.

Data masking, also known as data scrambling, allows administrators to effectively secure cloned data stores.   After the cloning process is performed, the administrator restricts access to the system until the data scrambling process is complete.  The key to a successful masking process is to replace the original data with as realistic a replacement as possible.   Masking is not intended to be an alternative to encryption; its intent is to obscure the original values stored in the system.
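
To make the idea concrete, here is a hypothetical sketch of a post-clone masking pass; the clone_schema.payments table and card_no column are invented for the example, and real masking products do far more (format preservation, referential consistency, repeatability):

update clone_schema.payments
   set card_no = rpad('*', length(card_no) - 4, '*') || substr(card_no, -4)
 where card_no is not null;

commit;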

There are several types of data masking offerings available.  Your first step should be to check your database product’s feature list.  Oracle, for example, provides a data masking feature.   There’s also a wealth of third-party products available to you.  If you have limited funding, search the internet for database masking and you’ll find lots of free alternatives.   Use a strong level of due diligence if you have to use a free alternative and check the data before and after scrambling – no matter which option you choose.

If your cloning process creates any files that contain data, most often used to transfer the database from the source to target, wipe them out after the cloning process is complete.  Lastly, perform an in-depth account security review.   Remove all accounts that aren’t needed for testing and only create the new ones needed to provide the required functionality to your application developers.  In part 2 of this article, we’ll discuss backup, output and load files.   You will also need to secure your cloned systems’ files accordingly.

Default, Blank and Weak Passwords

Years ago, after my career as an Oracle instructor, I became an industry consultant.  One of my company’s offerings was the database assessment.  I reviewed customers’ environments to ensure they were optimized for performance, availability and security.     Those were the days when breaches, and the resulting attention paid to security, were far less prominent than they are today.   I’d bring up a logon screen to the database and attempt to log in using the default passwords that were available in Oracle.   At that time there were about a dozen or so accounts automatically available after database creation.   I always loved the reaction of the client as they watched me successfully access their environment.  At each login, I’d say “this one gives me DBA”, “this one gives me access to all of your data tables”….   You can’t believe how many times I successfully logged in using sys/change_on_install.

Although Oracle, like most major database vendors, has ratcheted down on default accounts, default and weak passwords are still a problem.   The database documentation will contain a listing of the accounts that are automatically created during installation.   Some advanced features will also require accounts to be activated.   After database creation and every time you install a new feature, do a quick scan of the documentation and then query the database’s user catalog to see if you have additional accounts to secure.
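
For Oracle specifically, a quick, hedged example of that post-install check (the DBA_USERS_WITH_DEFPWD view is available in 11g and later) would be:

select username, account_status, created
from   dba_users
order  by created;

-- Accounts still using their default passwords:
select username
from   dba_users_with_defpwd;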

All major database vendors including Oracle, SQL Server, MySQL and DB2 provide password complexity mechanisms.   Some of them are automatically active when the database is created while others must be manually implemented.    In addition, most allow you to increase the complexity by altering the code.

Once your complexity strategy is underway, you’ll need to use a password vault to store your credentials.  I’m particularly fond of vaults that also provide a password generator.  The password vault’s feature list will be important.   When you perform your analysis of password vaults, some of the more important features to focus on are: how the vault enforces its own security, the logging and auditing features available, encryption at rest and during transfer, backup encryption, early-warning systems, dual launch key capabilities (it takes two personnel to check out a password), automatic notification when a password is checked out, how it handles separation of duties, and whether it can record the actions taken on the targets after the credentials are accessed.

Unencrypted Data At Rest

Database encryption, if administered correctly, provides a strong defense against data theft.  Most major database vendors provide data encryption as part of the product’s feature set.  Microsoft SQL Server and Oracle call theirs Transparent Data Encryption (TDE); IBM provides a few alternatives, including InfoSphere Guardium; and MySQL provides a set of functions that perform data encryption.

You’ll also need to determine what data you want to encrypt as well as how you want to do it.  Most of the vendor offerings allow you to encrypt data at different levels including column, table, file and database.  Like most database features, you will need to balance security with processing overhead.  Most encryption alternatives will add enough overhead to impact database performance.  Identify the sensitive data elements and encrypt them.
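
As a hedged Oracle TDE sketch of the column-versus-tablespace choice (the table, column and tablespace names are invented, and a configured wallet/keystore is assumed):

-- Column-level TDE: only the sensitive column is encrypted.
create table customers
( customer_id number,
  name        varchar2(100),
  card_no     varchar2(19) encrypt using 'AES256'
);

-- Tablespace-level TDE: everything stored in the tablespace is encrypted at rest.
create tablespace secure_data
  datafile 'secure_data01.dbf' size 100m
  encryption using 'AES256'
  default storage (encrypt);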

Key management is crucial.   You can use all the encryption you want, but your environment will still be vulnerable if you don’t perform effective key management.    Here are a couple of RDX best practices for encryption key management:

  • Keep your encryption algorithms up to date. Vendors release updates for their encryption features on a fairly regular basis.   New database versions often contain significant security enhancements including new and improved encryption functionality.  There should be no debate with anyone in your organization.   If you are storing sensitive data, keep encryption functionality current.
  • DB2, Oracle, SQL Server and MySQL all have strong encryption features. If your database product doesn’t, you’ll need to rely on a robust, third-party product.   Encryption isn’t a feature you want to skimp on or provide with a homegrown solution.
  • Store the keys securely in a safe, centralized location. During our security analysis, we have seen a few shops store their keys in the same areas as the data they are encrypting.    Storing them in a centralized location allows you to lock that storage area down, provide separation of duties and activate access alerts.
  • Key rotation is the tech term for changing your encryption key values on a regular basis. Depending on the database, this can be a fairly complex process.  For other implementations, it’s fairly simple.    Complex or simple, come up with a plan to rotate your keys at least yearly.
Wrap-up

In part 2 of this article, we’ll cover unsecured data transmissions, securing input, output, report and backup files, SQL Injection and buffer overflow protection and a few other topics.  We’ll then continue our discussion on the process of securing our sensitive database data stores by outlining the key elements of a database security strategy.

The post The Database Protection Series Continues – Evaluating the Most Common Threats and Vulnerabilities – Part 1 appeared first on Remote DBA Experts.

A 2014 (Personal) Blogging Retrospective

Michael Feldstein - 6 hours 54 min ago

Unlike many of the bloggers who I enjoy reading the most, I don’t often let my blogging wander into the personal except as a route to making a larger point. For some reason, e-Literate never felt like the right outlet for that. But with the holidays upon us, with some life cycle events in my family causing me to be a bit more introspective than usual, and with the luxury of having discovered Phil’s top 20 posts of the year post showing up in my inbox, I’m in the mood to ruminate about my personal journey in blogging, where it’s taken me so far, and what it means to me. In the process, I’ll also reflect a bit on what we try to do at e-Literate.

When I started the blog 10 years ago, I honestly didn’t know what I was doing. OK, I guess that’s still true in some ways. What I mean is that I was looking for a purpose in my life. I had been a middle school and high school teacher for five years. It was by far the best job I had ever had, and in some ways is still the best job I ever had. I left for a few different reasons. One was financial. I had fallen in love with a woman who had two teenaged daughters and suddenly found myself having to support a family. Another was frustration with a lack of professional growth opportunities. I taught in a wonderful, tiny little private school that operated out of eight rooms in the back of the Hoboken public library. It was amazing. But I wanted to do more and there was really no place for me to grow at the small school. I was young and feeling my oats. Lacking teacher’s certification and having been spoiled by teaching in such an amazing environment, I despaired of finding the right opportunity that would be professionally exciting while also allowing me to support my family. Part of it, too, was that I was beginning to get drawn to larger, systemic and cultural questions. For example, in the United States we have strong local control over our school systems, and my experience was that the overwhelming majority of parents care deeply for their children and want what’s best for them. Theoretically, it should be simple for parents to demand and get better schools. But that rarely happens. Why not? Why was the wonderful place that I was working at so rare? So I went wandering. I tried a few different things, but none of them made me happy. I am a teacher from a family of teachers. I needed to be close to education. But I also needed to support my family. And I needed to spread my wings, intellectually. I kept getting drawn to the bigger, systemic issues.

I started e-Literate just before I got a job at the SUNY Learning Network, having wandered in the wilderness first of graduate school and then of corporate e-Learning and knowledge management for a number of years. I had hoped that writing in public would help me clarify for myself what I wanted to do next in education as well as find some fellow travelers who might help me identify some sort of a career path that made sense. Meanwhile, I made a few good friends at SUNY, but mostly I grew quickly frustrated with the many barriers to doing good educational work that, once again, just shouldn’t exist if we lived in any kind of a rational world. Blogging was an oasis for me. It was a place where I found the kind of community that I should have had in academia but mostly didn’t. As I learned from early ed tech bloggers like Stephen Downes, Alan Levine, D’Arcy Norman, Scott Leslie, Beth Harris, Steven Zucker, Joe Ugoretz, George Siemens, and Dave Cormier (who co-hosted a wonderful internet radio show in those pre-podcasting days), I felt like I had found a home. It’s hard to describe what those early times of edublogging felt like if you weren’t around then. It was much friendlier. Much cozier. Everybody was just trying to figure stuff out together. I was just another shmoe working in the coal mines at a public university system, but in the blogosphere, there were really smart, articulate, accomplished people who took what I had to say seriously and encouraged me to say more. We argued sometimes, but mostly it was the good kind of argument. Arguments over what matters and what is true, rather than over who matters and what is the correct thing to say. It was…magical. I owe a great debt of gratitude to the bloggers I have mentioned here as well as others. I am ashamed to realize that I probably haven’t expressed that publicly before now. Without the folks who were already here when I arrived, I wouldn’t be where I am and who I am.

That said, finding a community is not the same thing as finding a purpose. The blogging wasn’t part of a satisfying career doing good in education so much as it was an escape from an unsatisfying career of failing to do good in education.

Then Blackboard sued Desire2Learn over a patent.

Such a strange thing to change a person’s life. Like most people, I really didn’t know what to make of it at first. I have never been dogmatically anti-corporate, anti-patent, or even anti-Blackboard. That said, Blackboard had proven itself to be a nasty, hyper-competitive company in those days, and this sounded like more of the same at first blush. But really, what did it mean to assert a patent in ed tech? I decided to figure it out. I read up on patent law and studied the court documents from the case (which Desire2Learn was publishing). I got a lot of help from Jim Farmer and some folks in the law community. And what I learned horrified me. Blackboard’s patent, if it had been upheld, would have applied to every LMS on the market, both proprietary and open source. Much worse, though, was the precedent it would have set. The basic argument that Blackboard made in their patent application process was that their invention was novel because it applied specifically to education. It was a little bit like arguing that one could patent a car that was designed only to be driven to the grocery store. Even if you didn’t care about the LMS, a successful assertion of that patent would have opened up Pandora’s box for any educational software. And if companies perceived that they could gain competitive advantages over their rivals by asserting patents, it would be the end of creative experimentation in educational technology. The U.S. patent system is heavily tilted toward large companies with deep pockets. Blackboard was already in the process of assembling a patent portfolio that would have enabled them to engage in what’s known as “stacking.” This is when a company files a flurry of lawsuits over a bunch of patents against a rival. Even if most of those assertions are bogus, it doesn’t matter, because the vast majority of organizations simply can’t afford the protracted legal battle. It’s less expensive for them to fold and just pay the extortion money patent license fees, or to sell out to the patent holder (which is probably what Blackboard really wanted from Desire2Learn). All that’s left in the market is for the big companies to cut cross-licensing deals with each other. Whatever you may think about the current innovation or lack thereof in educational technology, whatever we have now would have been crushed had Blackboard succeeded. That includes open source innovation. If a college president was told by her legal counsel that running a campus installation of WordPress with some education-specific modifications might violate a patent, what do you think the institutional decision about running WordPress would be?

So I went to war. I may have been just some shmoe working in the coal mines of a public university system, but dammit, I was going to organize. I translated the legalese of the patent into plain English so that everybody could see how ridiculous it was. I started a Wikipedia page on the History of Virtual Learning Environments so that people could record potential prior art against the patent. Mostly, I wrote about what I was learning about patents in general and Blackboard’s patents in particular. I wrote a lot. If you look down at the tag cloud at the bottom of the blog page, you’ll see that “Blackboard-Inc.” and “edupatents” are, to this day, two of the most frequently used tags on e-Literate.

And then an amazing thing happened. People listened. Not just the handful of edubloggers who were my new community, but all kinds of people. The entries on the Wikipedia page exploded in a matter of days. Every time Blackboard’s Matt Small gave a statement to some news outlet, I was asked to respond. I began getting invited to speak at conferences and association meetings for organizations that I never even knew existed before. Before I knew it, my picture was in freakin’ USA Today. e-Literate‘s readership suddenly went off the charts. In a weird way, I owe the popularity of the blog and the trajectory of my career to Blackboard and Matt Small.

And with that, I finally found my purpose. I won’t pretend that the community outrage and eventual outcome of the patent fight were mostly due to me—there were many, many people fighting hard, not the least of which were John Baker and Desire2Learn—but I could tell that I was having an impact, in part because of the ferocity with which Matt Small attempted to get me into trouble with my employers. With the blog, I could make things happen. I could address systemic issues. It isn’t a good vehicle for everything, but it works for some things. That’s why, more often than not, the best question to ask yourself when reading one of my blog posts is not “What is Michael really trying to say?” but “What is Michael really trying to do?” A lot of the time, I write to try to influence people to take (or not take) a particular course of action. Sometimes it’s just one or a couple of particular people who I have in mind. Other times it may be several disparate groups. For me, the blog is a tool for improving education, first and foremost. Improvement only happens when people take action. Therefore, saying the right things isn’t enough. If my writing is to be worth anything, it has to catalyze people to do the right things.

Of course, it doesn’t always work. Once Blackboard gave up on their patent assertion, I tried to rally colleges and universities to take steps to protect against educational patent assertion in the future. There was very little interest. Why? For starters, it was easier for them to vilify Blackboard than it was to confront the much more complex reality that our patent system itself is deeply flawed. But also, a lot of it was that the universities that were in the best position to take affirmative steps harbored fantasies of being Stanford and owning a piece of the next Google. Addressing the edupatent problem in a meaningful way would have been deeply inconvenient for those ambitions and forced them to think hard about their intellectual property transfer policies. With the immediate threat over, there was no appetite for introspection on college campuses. The patent suit was dropped, Michael Chasen eventually left the company, Matt Small was moved into another role, and life has gone on. I suspect that somewhere in some university startup incubator is a student who was still in middle school when the edupatent war was going on and is filing patent applications for a “disruptive” education app today. Cue the teaser for the sequel, “Lawyers for the Planet of the Apes.”

Meanwhile, my blogging had raised my profile enough to get me out of SUNY and land me a couple of other jobs, both of which taught me a great deal about the larger systemic context and challenges of ed tech but neither of which turned out to be a long-term home for me (which I knew was likely to be the case at the time that I took them). But at the second job in particular, I got too busy with work to blog as regularly as I wanted to. It really bothered me that I had built up a platform that could make a difference and was largely unable to do anything with it. So I decided to try to turn it into a group blog. The blogosphere had changed by then. The power law had really taken hold. There were a handful of bloggers who got most of the attention, and it was getting harder for new voices to break in. So I decided to invite people who maybe didn’t (yet) have the same platform that I did but who regularly taught me important things through their writing to come and blog on e-Literate, writing whatever they liked, whenever they liked, however often they liked. No strings attached. I’m proud to have posts here from people like Audrey Watters, Bill Jerome, David White, Kim Thanos, and Laura Czerniewicz, among others. Most of the people I invited wrote one or a few posts and then moved on to other things. Which was fine. I wasn’t inviting them because I wanted to build up e-Literate. I was inviting them because I wanted to expose their good work to more people. That’s something that we still try to do whenever we can. For example, the analysis in which Mike Caulfield raised doubts about some of the Purdue Course Signals research was hugely important to the field of learning analytics. I’m proud to have had the opportunity to draw attention to it.

Like I said, most of the bloggers wrote a few pieces and then moved on. Most. One of them just kept hanging around, like a relative you invite to dinner who never gets the hint when it’s time to leave. As with many of the others, I had not really met Phil Hill before and mainly knew him through his writing. Before long, he was writing more blog posts on e-Literate than I was. And—please don’t tell him I told you this—I love his writing. Phil is more of a natural analyst than I am. He has a head for details that may seem terribly boring in and of themselves but often turn out to have important implications. Whether he is digging through discrepancies on employee numbers to call BS on D2L’s claims of hypergrowth (and therefore their rationale for all the investment money and debt they are taking on) or collaborating with WCET’s Russ Poulin on an analysis of how the Federal government’s IPEDS figures are massively misreporting the size of online learning programs, he puts that head for details to good use. At the same time, he shares my constitutional inability to restrain myself from saying something when I see something that I think is wrong. This is what, for example, led him to file a public records request for information that definitively showed how few students Cal State Online was reaching for all the money that was spent on the program. For the record, Cal State is a former consulting client of Phil’s. In fact, I’m pretty sure that he created the famous LMS market share squid diagram while consulting for Cal State. As consultants in our particular niche, any critical post that we write of just about anyone runs the risk of alienating a potential client. (More on that in a bit.) Anyway, Phil is now co-publisher of e-Literate. The blog is every bit as much his as it is mine.

And so e-Literate continues to evolve. When I look at Phil’s list of our top 20 posts from 2014, it strikes me that there are a few things we are trying to do with our writing that I think are fairly unusual in ed tech reporting and analysis at the moment:

  • We provide critical analysis and long-form reporting on ed tech companies. There are lots of good pieces written by academic bloggers on ed tech products or the behavior of ed tech companies, but many of them are essentially cultural studies-style critiques of either widely reported news items or personal experiences. There’s nothing wrong with that, but it doesn’t give us the whole picture without some supplementation. On the other hand, the education news outlets break stories but don’t do a lot of in-depth analysis. Because Phil and I have eclectic backgrounds, we have some insight into how these companies work that academics or even reporters often don’t. We’ve been doing this long enough that we have a lot of contacts who are willing to talk to us so, even though we’re not in the business of breaking stories, we sometimes get important details that others don’t. Also, as you can tell from this blog post (if you’ve made it this far), we’re not afraid of writing long pieces.
  • We provide critical analysis and long-form reporting on colleges’ and universities’ (mis)adventures in ed tech. One of things that really bugs me about the whole ed tech blogging and reporting world is that some of the most ferocious critics of corporate misbehavior are often strangely muted on the dysfunction of colleges and universities and completely silent on the dysfunction of faculty. I’m as proud of our work digging into the back room deals of school administrators that circumvent faculty governance or the ways in which faculty behavior impedes progress in areas like better learning platforms or OER as I am of our analysis of corporate malfeasance.
  • We demystify. I was particularly honored to be invited to write a piece on adaptive learning for the American Federation of Teachers. The AFT tends to take a skeptical view of ed tech, so I took their invitation as validation that at least some of the writing we do here is of as much value to the skeptics as to the enthusiasts. When I write a piece about Pearson and I get compliments on it from both people inside the company and people who despise the company, I know that I’ve managed to explain something clearly in a way that clarifies while letting readers make their own judgments. A lot of the coverage of ed tech tends to be either reflexively positive or reflexively negative, and in neither case do we get a lot of details about what the product is, how it actually works, and how people are using it in practice.

One other thing that I feel good about on e-Literate and that I am completely amazed by is our community of commenters. We frequently get 5 or 10 comments on a given post (either in the WordPress comments thread or on Google+), and it’s not terribly uncommon for us to get 50 or even 100 comments on a post. And yet, I can count on one hand the number of times that we’ve ever had personalized attacks or unproductive behaviors from our commenters. I have no idea why this is so and take no credit for it. Even after ten years, I can’t predict which blog posts will generate a lot of discussion and which ones will not. It’s just a magic thing that happens sometimes. I’m still surprised and grateful every time that it does.

But of all the astonishing, wonderful things that have happened to me because of the blog, one of the most astonishing and wonderful is the way that it turned into a fulfilling job. When Phil asked me to join him as a consultant two years ago, I frankly didn’t give high odds that we would be in business for very long. I thought the most likely scenario was that we would fail, hopefully in an interesting way, and have some fun in the process. (Please don’t tell Phil I said that either.) But we have been not only pretty consistently busy with work but also growing the business about as fast as we would want it to grow, despite an almost complete lack of sales or marketing effort on our part. The overwhelming majority of our work comes to us through people who read our blog, find something helpful in what we wrote, and contact us to see if we can help more. We’ve made it a policy to mostly not blog about our consulting except where we need to make conflict-of-interest disclosures, but sometimes I wonder if that’s the right thing to do. The tag line of e-Literate is “What We Are Learning About Online Learning…Online”, and a lot of what we are learning comes from our jobs. If anything, that is more true now than ever, given that so much of our work springs directly from our blogging. Our clients tend to hire us to help them with problems related to issues that we have cared about enough to write about. We also seem to gain more clients than we lose by writing honestly and critically, and our relationships with our clients are better because of it. People who come to us for help expect us to be blunt and are not surprised or offended when we offer them advice which is critical of the way they have been doing things.

Honestly, this is the most fulfilled that I have felt, professionally, since I left the classroom. I will go back to teaching at some point before I retire, but in the meantime I feel really good about what I’m doing for the first time in a long time. I get to work with schools, foundations, and companies on interesting and consequential education problems—and increasingly on systemic and cultural problems. I get to do a lot of it in an open way, with people who I like and respect. I get to speak my mind about the things I care about without fear that it will get me in (excessive) trouble. And I even get paid.

Who knew that such a thing is possible?

The post A 2014 (Personal) Blogging Retrospective appeared first on e-Literate.

Watch: HBase vs. Cassandra

Pythian Group - 6 hours 59 min ago

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky—Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

“When we look at HBase and Cassandra, they can look very similar,” Alex says. “They’re both part of the NoSQL ecosystem.” Although they’re capable of handling very similar workloads, Alex explains that there are also quite a few differences. “Cassandra is designed from the ground up to handle very high, concurrent, write-intensive workloads.” HBase, on the other hand, has its limitations in scalability, and may require a bit more thinking to achieve the same quality of service, Alex explains. Watch his video HBase vs. Cassandra for specific use cases.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

On Demand: Innovation in Managing the Chaos of Everyday Project Management

WebCenter Team - 7 hours 56 min ago

Controlled chaos - this phrase sums up most enterprise-wide projects as workers, documents, and materials move from task to task. To effectively manage this chaos, project-centric organizations need to consider a new set of tools to allow for speedy access to all project assets and to ensure accurate and up-to-date information is provided to the entire project team.

Fishbowl Solutions and Oracle would like to invite you to view an on-demand webinar on an exciting new solution for enterprise project management. This solution transforms how project-based tools like Oracle Primavera, and project assets, such as documents and diagrams, are accessed and shared.

If you missed Fishbowl’s recent webinar on their new Enterprise Information Portal for Project Management, you can now view a recording of it on YouTube or below. You can learn more about this solution and how to contact Fishbowl on their blog.


TIMESTAMPS and Presentation Variables

Rittman Mead Consulting - 9 hours 6 min ago

TIMESTAMPS and Presentation Variables can be some of the most useful tools a report creator can use to build robust, repeatable reports while maximizing user flexibility.  I intend to transform you into an expert with these functions, and by the end of this page you will certainly be able to impress your peers and managers; you may even impress Angus MacGyver.  In this example we will create a report that displays a year over year analysis for any rolling number of periods, by week or month, from any date in time, all determined by the user.  This entire document will only use values from a date and revenue field.

[Screenshot: Final Month]

The TIMESTAMP is an invaluable function that allows a user to define report limits based on a moving target. If the goal of your report is to display Month-to-Date, Year-to-Date, a rolling month or truly any non-static period in time, the TIMESTAMP function will allow you to get there.  Often users want to know what a report looked like at some previous point in time; to provide that level of flexibility, TIMESTAMPS can be used in conjunction with Presentation Variables.

To create robust TIMESTAMP functions you will first need to understand how the TIMESTAMP works. Take the following example:

[Screenshot: Filter Day -7]
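
Written out in text form (a reconstruction based on the description below rather than the original screenshot, with "Time"."Date" standing in for your own date column), the filter is:

"Time"."Date" >= TIMESTAMPADD(SQL_TSI_DAY, -7, CURRENT_DATE)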

Here we are saying we want to include all dates greater than or equal to 7 days ago, counting back from the current date.

  • The first argument, SQL_TSI_DAY, defines the TimeStamp Interval (TSI). This means that we will be working with days.
  • The second argument determines how many of that interval we will be moving, in this case -7 days.
  • The third argument defines the starting point in time, in this example, the current date.

So in the end we have created a functional filter making Date >= 1 week ago, using a TIMESTAMP that subtracts 7 days from today.

[Screenshot: Results -7 Days]

Note: it is always a good practice to include a second filter giving an upper limit like “Time”.”Date” < CURRENT_DATE. Depending on the data that you are working with you might bring in items you don’t want or put unnecessary strain on the system.

We will now start to build this basic filter into something much more robust and flexible.

To start, when we subtracted 7 days in the filter above, let’s imagine that the goal of the filter was to always include dates >= the first of the month. In this scenario, we can use the DAYOFMONTH() function, which returns the calendar day of any date. This is useful because we can get the first of the month from any date by simply subtracting that day number from the date and adding 1.

Our new filter would look like this:

[Screenshot: DayofMonth]
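
In text form the filter is roughly as follows (again a reconstruction of the screenshot, so verify it against your own column names):

"Time"."Date" >= TIMESTAMPADD(SQL_TSI_DAY, - DAYOFMONTH(CURRENT_DATE) + 1, CURRENT_DATE)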

For example if today is December 18th, DAYOFMONTH(CURRENT_DATE) would equal 18. Thus, we would subtract 18 days from CURRENT_DATE, which is December 18th, and add 1, giving us December 1st.

[Screenshot: MTD Dates]

(For a list of other similar functions like DAYOFYEAR, WEEKOFYEAR etc. click here.)

To make this even better, instead of using CURRENT_DATE you could use a prompted value with the use of a Presentation Variable (for more on Presentation Variables, click here). If we call this presentation variable pDate, for prompted date, our filter now looks like this:

[Screenshot: pDate]

A best practice is to use default values with your presentation variables so you can run the queries you are working on from within your analysis. To add a default value, all you do is add the value within braces at the end of your variable. We will use CURRENT_DATE as our default: @{pDate}{CURRENT_DATE}.  We will refer to this filter later as Filter 1.

{Filter 1}:

[Screenshot: pDateCurrentDate]
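
Spelled out (reconstructed from the description rather than taken from the screenshot), Filter 1 is approximately:

"Time"."Date" >= TIMESTAMPADD(SQL_TSI_DAY, - DAYOFMONTH(@{pDate}{CURRENT_DATE}) + 1, @{pDate}{CURRENT_DATE})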

As you can see, the filter is starting to take shape. Now let’s say we are always going to be looking at a date range of the most recent completed 6 months. All we would need to do is create a nested TIMESTAMP function. To do this, we will “wrap” our current TIMESTAMP with another one that subtracts 6 months. It will look like this:

[Screenshot: Month -6]
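
Reconstructed in text form (a sketch, not the original screenshot), the nested filter is approximately:

"Time"."Date" >= TIMESTAMPADD(SQL_TSI_MONTH, -6, TIMESTAMPADD(SQL_TSI_DAY, - DAYOFMONTH(@{pDate}{CURRENT_DATE}) + 1, @{pDate}{CURRENT_DATE}))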

Now we have a filter that is greater than or equal to the first day of the month of any given date (default of today) 6 months ago.

[Screenshot: Month -6 Result]

To take this one step further, you can even allow the users to determine the number of months to include in this analysis by making the value of 6 a presentation variable, which we will call “n”, with a default of 6: @{n}{6}.  We will refer to the following filter as Filter 2:

{Filter 2}:

[Screenshot: n]
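
Reconstructed in text form, Filter 2 is approximately as follows (the variable expands to its value, so -@{n}{6} becomes -6 by default):

"Time"."Date" >= TIMESTAMPADD(SQL_TSI_MONTH, -@{n}{6}, TIMESTAMPADD(SQL_TSI_DAY, - DAYOFMONTH(@{pDate}{CURRENT_DATE}) + 1, @{pDate}{CURRENT_DATE}))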

For more on how to create a prompt with a range of values by altering a current column, like we want to do to allow users to select a value for n, click here.

Our TIMESTAMP function is now fairly robust and will give us any date greater than or equal to the first day of the month from n months ago from any given date. Now we will see what we just created in action by creating date ranges to allow for a Year over Year analysis for any number of months.

Consider the following filter set:

[Screenshot: Robust1]
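
One plausible reconstruction of that filter set (treat it as a sketch; the original screenshot may differ in detail) combines Filter 2 as the lower bound and the Filter 1 expression as an upper bound, then repeats the pair shifted back one year:

(    "Time"."Date" >= TIMESTAMPADD(SQL_TSI_MONTH, -@{n}{6}, TIMESTAMPADD(SQL_TSI_DAY, - DAYOFMONTH(@{pDate}{CURRENT_DATE}) + 1, @{pDate}{CURRENT_DATE}))
 AND "Time"."Date" <  TIMESTAMPADD(SQL_TSI_DAY, - DAYOFMONTH(@{pDate}{CURRENT_DATE}) + 1, @{pDate}{CURRENT_DATE}) )
OR
(    "Time"."Date" >= TIMESTAMPADD(SQL_TSI_YEAR, -1, TIMESTAMPADD(SQL_TSI_MONTH, -@{n}{6}, TIMESTAMPADD(SQL_TSI_DAY, - DAYOFMONTH(@{pDate}{CURRENT_DATE}) + 1, @{pDate}{CURRENT_DATE})))
 AND "Time"."Date" <  TIMESTAMPADD(SQL_TSI_YEAR, -1, TIMESTAMPADD(SQL_TSI_DAY, - DAYOFMONTH(@{pDate}{CURRENT_DATE}) + 1, @{pDate}{CURRENT_DATE})) )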

This appears to be pretty intimidating but if we break it into parts we can start to understand its purpose.

Notice we are using the exact same filters from before (Filter 1 and Filter 2).  What we have done here is filtered on two time periods, separated by the OR statement.

The first date range defines the period as being the most recent complete n months from any given prompted date value, using a presentation variable with a default of today, which we created above.

The second time period, after the OR statement, is the exact same as the first only it has been wrapped in another TIMESTAMP function subtracting 1 year, giving you the exact same time frame for the year prior.

[Screenshot: YoY Result]

This allows us to create a report that can run a year over year analysis for a rolling n month time frame determined by the user.

A note on nested TIMESTAMPS:

You will always want to create nested TIMESTAMPS with the smallest interval first. Due to syntax, this will always be the furthest to the right. Then you will wrap intervals as necessary. In this case our smallest increment is day, wrapped by month, wrapped by year.

Now we will start with some more advanced tricks:

  • Instead of using CURRENT_DATE as your default value, use yesterday since most data are only as current as yesterday.  If you use real time or near real time reporting, using CURRENT_DATE may be how you want to proceed. Using yesterday will be valuable especially when pulling reports on the first day of the month or year, you generally want the entire previous time period rather than the empty beginning of a new one.  So, to implement, wherever you have @{pDate}{CURRENT_DATE} replace it with @{pDate}{TIMESTAMPADD(SQL_TSI_DAY,-1,CURRENT_DATE)}
  • Presentation Variables can also be used to determine whether you want to display year over year values by month or by week, by inserting a variable into your SQL_TSI_MONTH and DAYOFMONTH statements.  Change MONTH to a presentation variable (SQL_TSI_@{INT}{MONTH} and DAYOF@{INT}{MONTH}), where INT is the name of our variable.  This will require you to create a dummy column prompt to allow users to select either MONTH or WEEK.  You can try something like this: CASE MOD(DAY(“Time”.”Date”),2) WHEN 0 THEN ‘WEEK’ WHEN 1 THEN ‘MONTH’ END

[Screenshot: INT]

[Screenshot: MOD]

[Screenshot: DropDown]

In order for our interaction between Month and Week to run smoothly we have to make one more consideration.  If we take the date December 1st, 2014 and subtract one year we get December 1st, 2013; however, if we take the first day of this week, Sunday December 14, 2014, and subtract one year we get Saturday December 14, 2013.  In our analysis this will cause an extra partial week to show up for prior years.  To get around this we will add a case statement: if ‘@{INT}{MONTH}’ = ‘WEEK’ THEN subtract 52 weeks from the first of the week ELSE subtract 1 year from the first of the month.

Our final filter set will look like this:

[Screenshot: Final Filter]

With the use of these filters and some creative dashboarding you can end up with a report that easily allows you to view a year over year analysis from any date in time for any number of periods either by month or by week.

[Screenshot: Final Month Chart]

[Screenshot: Final Week Chart]

That really got out of hand in a hurry! Surely this will impress someone at your work, or even Angus MacGyver, if for nothing else than the fact that he or she won’t understand it. But hopefully, now you do!

Also, a colleague of mine, Spencer McGhin, just wrote a similar article on year over year analyses using a different approach. Feel free to review it and consider your options.

 

Calendar Date/Time Functions

These are functions you can use within OBIEE and within TIMESTAMPS to extract the information you need.

  • Current_Date
  • Current_Time
  • Current_TimeStamp
  • Day_Of_Quarter
  • DayName
  • DayOfMonth
  • DayOfWeek
  • DayOfYear
  • Hour
  • Minute
  • Month
  • Month_Of_Quarter
  • MonthName
  • Now
  • Quarter_Of_Year
  • Second
  • TimestampAdd
  • TimestampDiff
  • Week_Of_Quarter
  • Week_Of_Year
  • Year

Back to section

 

Presentation Variables

The only way you can create variables within the presentation side of OBIEE is with the use of presentation variables. They can only be defined by a report prompt. Any value selected by the prompt will then be sent to any reference to that variable throughout the dashboard page.

In the prompt:

[Screenshot: Pres Var]

From the “Set a variable” dropdown, select “Presentation Variable”. In the textbox below the dropdown, name your variable (named “n” above).

When calling this variable in your report, use the syntax @{n}{default}

If your variable is a string, make sure to surround the variable in single quotes: ‘@{CustomerName}{default}’

Also, when using your variable in your report, it is good practice to assign a default value so that you can work with your report before publishing it to a dashboard. For variable n, if we want a default of 6 it would look like this @{n}{6}

Presentation variables can be called in filters, formulas and even text boxes.

Back to section

 

Dummy Column Prompt

For situations where you would like users to select a numerical value for a presentation variable, like we do with @{n}{6} above, you can convert something like a date field into values up to 365 by using the function DAYOFYEAR(“Time”.”Date”).

As you can see in the screenshot below, we are returning the SQL choice list values of DAYOFYEAR(“Time”.”Date”) <= 52.  Make sure to include an ORDER BY statement to ensure your values are well sorted.

[Screenshot: Dummy Script]
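
The choice-list SQL in that screenshot would look something like the following; the subject area name "A - Sample Sales" is a placeholder, so substitute your own:

SELECT DAYOFYEAR("Time"."Date") FROM "A - Sample Sales" WHERE DAYOFYEAR("Time"."Date") <= 52 ORDER BY DAYOFYEAR("Time"."Date")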

Back to Section

Categories: BI & Warehousing

Will the Grinch steal Linux’s Christmas?

Chris Foot - 14 hours 44 min ago

For retailers, a vulnerability in their servers' operating systems could mean millions of dollars in losses, depending on how quick hackers are to react to newly discovered bugs. 

As Linux is affordable, efficient and versatile, many e-commerce and brick-and-mortar merchants use the OS as their go-to system, according to Alert Logic's Tyler Borland and Stephen Coty. The duo noted Linux also provides a solid platform on which e-commerce and point-of-sale software can run smoothly.

The Grinch targeting Linux? 
Due to Linux's popularity among retailers, it's imperative they assess a vulnerability that was recently discovered – a bug that has been given the nickname "Grinch" by researchers. Dark Reading's Kelly Jackson Higgins noted the fault hasn't been labeled as an "imminent threat," but it's possible that some malicious actors would be able to leverage Grinch to escalate permissions on Linux machines and then install malware. 

Coty and Borland noted that Alert Logic's personnel discovered the bug, asserting that it exploits the "su" command, which enables one user to masquerade as another. The su command is part of the wheel user group. When a Linux system is set up, the default user is made a member of the wheel group, providing them with administrative rights.

"Anyone who goes with a default configuration of Linux is susceptible to this bug," he told Jackson Higgins. "We haven't seen any active attacks on it as of yet, and that is why we wanted to get it patched before people started exploiting it." 

Where the flaw lies 
Jackson Higgins maintained the Grinch is "living" in the Polkit, a.k.a. PolicyKit for Linux. Polkit is a privilege management system that allows administrators to assign authorizations to general users. Coty and Borland outlined the two main concepts experts should glean from Polkit:

  1. One of Polkit's uses lies in the ability to determine whether the program should initiate privileged operations for a user who requested the action to take place. 
  2. Polkit access and task permission tools can identify multiple active sessions and seats, the latter of which is described as an "untrusted user's reboot request."

"Each piece of this ecosystem exposes possible vulnerabilities through backend D-Bus implementation, the front end Polkit daemon, or even userland tools that use Polkit for privilege authorization." wrote Coty and Borland.

Despite these concerns, Coty informed Jackson Higgins that this vulnerability won't have to be patched until after the holiday season, and only inexperienced Linux users are likely to encounter serious problems. 

The post Will the Grinch steal Linux’s Christmas? appeared first on Remote DBA Experts.

New Version Of XPLAN_ASH Utility

Randolf Geist - Sun, 2014-12-21 16:40
A new version 4.2 of the XPLAN_ASH utility is available for download.

As usual the latest version can be downloaded here.

There were no particularly significant changes in this release; mainly, some new sections related to I/O figures were added.

One thing to note is that some of the sections in recent releases may require a linesize larger than 700, so the script's settings have been changed to 800. If you use corresponding settings for CMD.EXE under Windows for example you might have to adjust accordingly to prevent ugly line wrapping.

Here are the notes from the change log:

- New sections "Concurrent activity I/O Summary based on ASH" and "Concurrent activity I/O Summary per Instance based on ASH" to see the I/O activity summary for concurrent activity

- Bug fixed: When using MONITOR as source for searching for the most recent SQL_ID executed by a given SID due to some filtering on date no SQL_ID was found. This is now fixed

- Bug fixed: In RAC GV$ASH_INFO should be used to determine available samples

- The "Parallel Execution Skew ASH" indicator is now weighted - so far any activity level per plan line and sample below the actual DOP counted as one, and the same if the activity level was above
The sum of the "ones" was then set relative to the total number of samples the plan line was active to determine the "skewness" indicator

Now the actual difference between the activity level and the actual DOP is calculated and compared to the number of total samples active times the actual DOP
This should give a better picture of the actual impact the skew has on the overall execution

- Most queries now use a NO_STATEMENT_QUEUING hint for environments where AUTO DOP is enabled and the XPLAN_ASH queries could get queued otherwise

- The physical I/O bytes on execution plan line level taken from "Real-Time SQL Monitoring" has now the more appropriate heading "ReadB" and "WriteB", I never liked the former misleading "Reads"/"Writes" heading

Installing VirtualBox on Mint with a CentOS Guest

The Anti-Kyte - Sun, 2014-12-21 12:48

Christmas is almost upon us. Black Friday has been followed by Small Business Saturday and Cyber Monday.
The rest of the month obviously started on Skint Tuesday.
Fortunately for all us geeks, Santa Claus is real. He’s currently posing as Richard Stallman.
I mean, look at the facts. He’s got the beard, he likes to give stuff away for free, and he most definitely has a “naughty” list.

Thanks to Santa Stallman and others like him, I can amuse myself in the Holidays without putting any more strain on my Credit Card.

My main machine is currently running Mint 17 with the Cinnamon desktop. Whilst I’m very happy with this arrangement, I would like to play with other Operating Systems, but without all the hassle of installing/uninstalling etc.
Now, I do have Virtualbox on a Windows partition, but I would rather indulge my OS promiscuity from the comfort of Linux… sorry Santa – GNU/Linux.

So what I’m going to cover here is :

  • Installing VirtualBox on a Debian-based distro
  • Installing CentOS as a Guest Operating System
  • Installing VirtualBox Guest Additions Drivers on CentOS

I’ve tried to stick to the command line for the VirtualBox installation steps, so they should be generic to any Debian-based host.

Terminology

Throughout this post I’ll be referring to the Host OS and the Guest OS, as well as Guest Additions. These terms can be defined as :

  • Host OS – the Operating System of the physical machine that Virtualbox is running on ( Mint in my case)
  • Guest OS – the Operating System of the virtual machine that is running in VirtualBox (CentOS here)
  • Guest Additions – drivers that are installed on the Guest OS to enable file sharing, viewport resizing etc
Options for getting VirtualBox

Before I get into the installation steps it’s probably worth explaining why I chose the method I did for getting VirtualBox in the first place.
You can get VirtualBox from a repository, instructions for which are on the VirtualBox site itself. However, the version currently available (4.3.12 at the time of writing) does not play nicely with Red Hat-based guests when it comes to Guest Additions. This issue is fixed in the latest version of VirtualBox (4.3.20), which can be downloaded directly from the site. Therefore, this is the approach I ended up taking.

Right, now that’s out of the way…

Installing VirtualBox Step 1 – Prepare the Host

Before we download VirtualBox, we need to ensure that the dkms package is installed and up to date. So, fire up good old terminal and type :

sudo apt-get install dkms

Running this, I got :

Reading package lists... Done
Building dependency tree       
Reading state information... Done
dkms is already the newest version.
0 to upgrade, 0 to newly install, 0 to remove and 37 not to upgrade.

One further step is to refresh your package lists so the latest package information is available (optionally follow it with sudo apt-get upgrade to bring the whole system up to date). For Debian based distros, this should do the job :

sudo apt-get update
Step 2 – Get the software

Now, head over to the VirtualBox Downloads Page and select the appropriate file.

NOTE – you will have the choice of downloading either the i386 or the AMD64 versions.
The difference is simply that i386 is 32-bit and AMD64 is 64-bit.

In my case, I’m running a 64-bit version of Mint (which is based on Ubuntu), so I selected :

Ubuntu 13.04 (“Raring Ringtail”) / 13.10 (“Saucy Salamander”) / 14.04 (“Trusty Tahr”) / 14.10 (“Utopic Unicorn”) – the AMD64 version.

NOTE – if you’re not sure whether you’re running on 32 or 64-bit, simply type the following in a terminal session :

uname -i

If this command returns x86_64 then you’re running a 64-bit version of your OS. If it returns i686, then you’re running a 32-bit version.

A short time later, you’ll find that Santa has descended the chimney that is your browser, and in the Downloads folder that is your living room you have a present. Run…

ls -lh $HOME/Downloads/virtualbox*

… and you’ll find the shiny new :

-rw-r--r-- 1 mike mike 63M Dec  5 16:22 /home/mike/Downloads/virtualbox-4.3_4.3.20-96996~Ubuntu~raring_amd64.deb
Step 3 – Installation

To virtually unwrap this virtual present….

cd $HOME/Downloads
sudo dpkg -i virtualbox-4.3_4.3.20-96996~Ubuntu~raring_amd64.deb

On running this the output should be similar to :

(Reading database ... 148385 files and directories currently installed.)
Preparing to unpack virtualbox-4.3_4.3.20-96996~Ubuntu~raring_amd64.deb ...
Stopping VirtualBox kernel modules ...done.
Unpacking virtualbox-4.3 (4.3.20-96996~Ubuntu~raring) over (4.3.12-93733~Ubuntu~raring) ...
Setting up virtualbox-4.3 (4.3.20-96996~Ubuntu~raring) ...
Installing new version of config file /etc/init.d/vboxdrv ...
addgroup: The group `vboxusers' already exists as a system group. Exiting.
Stopping VirtualBox kernel modules ...done.
Uninstalling old VirtualBox DKMS kernel modules ...done.
Trying to register the VirtualBox kernel modules using DKMS ...done.
Starting VirtualBox kernel modules ...done.
Processing triggers for ureadahead (0.100.0-16) ...
Processing triggers for hicolor-icon-theme (0.13-1) ...
Processing triggers for shared-mime-info (1.2-0ubuntu3) ...
Processing triggers for gnome-menus (3.10.1-0ubuntu2) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu1) ...
Processing triggers for mime-support (3.54ubuntu1) ...

Note As this was not my first attempt at installing VirtualBox, there are some feedback lines here that you probably won’t get.

Anyway, once completed, you should have a new VirtualBox icon somewhere in your menu.
In my case (Cinnamon desktop on Mint 17, remember), it’s appeared in the Administration Menu :

vbox_menu

As part of the installation, a group called vboxusers has now been created.
You’ll want to add yourself to this group so that you can access the shared folders, which is something I’ll come onto in a bit. For now though…

sudo usermod -a -G vboxusers username

… where username is your user.

Now, finally, we’ve set it up and can start playing. Click on the menu icon. Alternatively, if you can’t find the icon, or if you just prefer the terminal, the following command should have the same effect :

VirtualBox

Either way, you should now see this :

vbox_welcome

One present unwrapped, assembled and ready to play with…and you don’t even need to worry about cleaning up the discarded wrapping paper.

Installing the CentOS Guest

I fancy having a play with a Red Hat-based distro for a change. CentOS fits the bill perfectly.
Additionally, I happen to have an iso lying around on a cover disk.
If you’re not so lucky, you can get the latest version of CentOS (currently 7) from the website here.

I’ve created a directory called isos and put the CentOS iso there :

ls -lh CentOS*
-rw------- 1 mike mike 687M Jul  9 22:53 CentOS-7.0-1406-x86_64-livecd.iso

Once again, I’ve downloaded the 64-bit version, as can be seen from the x86_64 in the filename.

Now for the installation.

Open VirtualBox and click New :

In the Name and operating system window enter :

Name : CentOS7
Type : Linux
Version : Red Hat (64 bit)

vb_new

In the Memory Size Window :

Settings here depend on the resources available to the host machine and what you want to use the VM for.
In my case, my host machine has 8GB RAM.
Also, I want to install Oracle XE on this VM.
Given that, I’m going to allocate 2GB to this image :

vb_new3

In the Hard Drive Window :

I’ve got plenty of space available so I’ll just accept the default to Create a virtual hard drive of 8GB now.

Hard Drive File Type :

Accept the default ( VDI (VirtualBox Disk Image))

and hit Next…

Storage on physical hard drive :

I’ll leave this as the default – Dynamically allocated
Click Next…

File location and size :

I’ve left the size at the default…

vbnew_4

I now have a new VirtualBox image :
The vdi file created to act as the VM’s hard drive is in my home directory under VirtualBox VMs/CentOS7

summary

Now to point it at the iso file we want to use.

Hit Start and ….

fisrt_start

choose_iso

You should now see the chosen .iso file identified as the startup disk :

iso

Now hit start….
live_cd_desktop

Don’t worry too much about the small viewport for now. Guest Additions should resolve that issue once we get it installed.
You probably do need to be aware of the fact that you can transfer the mouse pointer between the Guest and Host by holding down the right CTRL key on your keyboard and left-clicking the mouse.
This may well take a bit of getting used to at first.

Anyway, once your guest knows where your mouse is, the first thing to do is actually install CentOS into the VDI. At the moment, remember, we’re just running a Live Image.

So, click the Install to Hard Drive icon on the CentOS desktop and follow the prompts as normal.

At the end of the installation, make sure that you’ve ejected your virtual CD from the drive.
To do this :

  1. Get the Host to recapture the mouse (Right CTRL + left-click)
  2. Go to the VirtualBox Menu on the VDI and select Devices/CD/DVD Devices/Remove disk from virtual drive

eject_cd

Now re-start CentOS.

Once it comes back, we’re ready to round things off by…

Installing Guest Additions

It’s worth noting that when CentOS starts, Networking is disconnected by default. To enable it, simply click the Network icon on the toolbar at the top of the screen and switch it on :

enable

We need to make sure that the packages are up to date on CentOS in the same way as we did for the Host at the start of all this so…

sudo yum update

Depending on how recent the iso file you used is, this could take a while !

We also need to install further packages for Guest Additions to work…

sudo yum install gcc
sudo yum install kernel-devel-3.10.0-123.9.3.el7.x86_64
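If you are unsure which kernel-devel version to pick, a safer pattern (my suggestion, not part of the original recipe) is to match the currently running kernel; if that exact package is no longer on the mirrors, update the kernel via yum and reboot first:

# Show the running kernel, then install the matching headers
uname -r
sudo yum install gcc "kernel-devel-$(uname -r)"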

Note It’s also recommended that dkms is installed on “Fedora” (i.e. Red Hat) based Guests. However when I ran …

sudo yum install dkms

I got an error saying “No package dkms available”.
So, I’ve decided to press on regardless…

In the VirtualBox Devices Menu, select Insert Guest Additions CD Image

You should then see a CD icon on your desktop :

guest_additions

The CD should autorun on load.

You’ll see a Virtual Box Guest Additions Installation Terminal Window come up that looks something like this :

Verifying archive integrity... All good.
Uncompressing VirtualBox 4.3.20 Guest Additions for Linux............
VirtualBox Guest Additions installer
Removing installed version 4.3.12 of VirtualBox Guest Additions...
Copying additional installer modules ...
Installing additional modules ...
Removing existing VirtualBox non-DKMS kernel modules       [  OK  ]
Building the VirtualBox Guest Additions kernel modules
Building the main Guest Additions module                   [  OK  ]
Building the shared folder support module                  [  OK  ]
Building the OpenGL support module                         [  OK  ]
Doing non-kernel setup of the Guest Additions              [  OK  ]
Starting the VirtualBox Guest Additions                    [  OK  ]
Installing the Window System drivers
Installing X.Org Server 1.15 modules                       [  OK  ]
Setting up the Window System to use the Guest Additions    [  OK  ]
You may need to restart the hal service and the Window System (or just restart
the guest system) to enable the Guest Additions.

Installing graphics libraries and desktop services componen[  OK  ]

Eject the CD and re-start the Guest.

Now, you should see CentOS in its full-screen glory.

Tweaks after installing Guest Additions

First off, let’s make things run a bit more smoothly on the Guest :

On the Host OS in VirtualBox Manager, highlight the CentOS7 image and click on Settings.
Go to Display.

Here, we can increase the amount of Video Memory from the default 12MB to 64MB.
We can also check Enable 3D Acceleration :

vbox_video

Next, in the General Section, click on the Advanced Tab and set the following :

Shared Clipboard : Bidirectional
Drag’n’Drop : Bidirectional

vbox_clipboard

You should now be able to cut-and-paste from Guest to host and vice-versa.

Shared Folders

At some point you’re likely to want to either put files onto or get files from your Guest OS.

To do this :

On the Host

I’ve created a folder to share on my Host system :

mkdir $HOME/Desktop/vbox_shares/centos

Now, in VirtualBox Manager, back in the Settings for CentOS, open the Shared Folders section.

Click the Add icon

add_share

Select the folder and make it Auto-mount

add_share2

On the Guest

In earlier versions of VirtualBox, getting the shared folders to mount was, well, a bit of messing about.
Happily, things are now quite a bit easier.

As we’ve set the shared folder to Auto-mount, it’s mounted on the Guest on

/media/sf_sharename

…where sharename is the name of the share we assigned to it on the Host. So, the shared folder I created exists as :

/media/sf_centos

In order to gain full access to this folder, we simply need to add our user to the vboxsf group that was created when Guest Additions was installed :

sudo usermod -a -G vboxsf username

…where username is your user on the Guest OS.

Note – you’ll need to logout and login again for this change to take effect, but once you do, you should have access to the shared folder.
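As a quick sanity check – assuming the share really is called centos as above – the following should confirm the group membership and the mount (newgrp spares you a full logout if you just want a quick test):

# Start a subshell with the vboxsf group active, then confirm access
newgrp vboxsf
id -nG | grep -w vboxsf
ls -l /media/sf_centos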

Right, that should keep me out of trouble (and debt) for a while, as well as offering a distraction from all the things I know I shouldn’t eat…but always do.
That reminds me, where did I leave my nutcracker ?


Filed under: Linux, VirtualBox Tagged: centos 7 guest, copy and paste from clipboard, guest additions, how to tell if your linux os is 32-bit or 64-bit, mint 17 host, shared folders, uname -i, VirtualBox

Digital Delivery "Badge"

Bradley Brown - Sun, 2014-12-21 00:44
At InteliVideo we have come to understand that we need to do everything we can to help our clients sell more digital content. It seems obvious that consumers want to watch videos on devices like their phones, tablets, laptops, and TVs, but it's not so obvious to everyone. They have been using DVDs for a number of years - and likely VHS tapes before that. We believe it’s important for your customers to understand why they would want to purchase a digital product rather than a physical product (i.e. a DVD).
Better buttons drive sales.  Across all our apps and clients we know we are going to need to really nail our asset delivery process with split tests and our button and banner catalog.  We've simplified the addition of a badge on a client's page. They effectively have to add 4 lines of HTML in order to add our digital delivery badge.
Our clients can use any of the images that InteliVideo provides or we’re happy to provide an editable image file (EPS format) so they can make their own image.  Here are some of our badges that we created:
Screenshot 2014-12-16 19.39.25.png
On our client's web page, it looks something like this:
Screenshot 2014-12-17 14.01.11.png
The image above (Watch Now on Any Device) is the important component.  This is the component that our clients are placing somewhere on their web page(s).  When this is clicked, the existing page will be dimmed and the lightbox will pop up and display the “Why Digital” message:
Screenshot 2014-12-17 16.31.54.png
What do your client's customers need to know about in order to help you sell more?

Log Buffer #402, A Carnival of the Vanities for DBAs

Pakistan's First Oracle Blog - Sat, 2014-12-20 18:39
This Log Buffer edition hits the ball out of park by smashing yet another record of surfacing with a unique collection of blog posts from various database technologies. Enjoy!!!

Oracle:

EM12c and the Optimizer Statistics Console.
SUCCESS and FAILURE Columns in DBA_STMT_AUDIT_OPTS.
OBIEE and ODI on Hadoop : Next-Generation Initiatives To Improve Hive Performance.
Oracle 12.1.0.2 Bundle Patching.
Performance Issues with the Sequence NEXTVAL Call.

SQL Server:

GUIDs GUIDs everywhere, but how is my data unique?
Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask.
Introduction to Advanced Transact SQL Stairway and Using the CROSS JOIN Operator.
Introduction to Azure SQL Database Scalability.
What To Do When the Import and Export Wizard Fails.

MySQL:

Orchestrator 1.2.9 GA released.
Making HAProxy 1.5 replication lag aware in MySQL.
Monitor MySQL Performance Interactively With VividCortex.
InnoDB’s multi-versioning handling can be Achilles’ heel.
Memory summary tables in Performance Schema in MySQL 5.7.

Also published here.
Categories: DBA Blogs

What an App Cost?

Bradley Brown - Sat, 2014-12-20 17:59
People will commonly ask me this question, which has a very wide range as the answer.  You can get an app built on oDesk for nearly free - i.e. $2,000 or less.  Will it provide the functionality you need?  It might!  Do you need a website that does the same thing?  Do you need a database (i.e. something beyond the app) to store your data for your customers?

Our first round of apps at InteliVideo cost us $2,000-$10,000 each to develop.  We spent a LOT of money on the backend server code.  Our first versions were pretty fragile (i.e. broke fairly easily) and weren't very sexy.  We decided that we needed to revamp our apps from stem to stern...APIs to easy branding to UI.

Here's a look at our prior version.  Our customers (people who buy videos) aren't typically buying from more than 1 of our clients - yet.  But in the prior version I saw a list of all of the products I had purchased.  It's not a very sexy UI - just a simple list of videos:


When I drilled into a specific product, again I see a list of videos within the product:

I can download or play a video in a product:


Here's what it looks like for The Dailey Method:



Here's the new version demonstrating the branding for Chris Burandt.  I've purchased a yearly subscription that currently includes 73 videos.  I scroll (right, not down) through those 73 videos here:


Or if I click on the title, I get to see a list of the videos in more detail:


Notice the colors (branding) is shown everywhere here.  I scrolled up to look through those videos:


Here's a specific video that talks about a technique for getting your sled unstuck:


Here's what the app looks like when I'm a customer of The Dailey Method.  Again, notice the branding everywhere:

Looking at a specific video and its details:


We built a native app for iOS (iPad, iPhone, iPod), Android, Windows and Mac that has all of the same look, feel, functionality, etc.  This was a MAJOR undertaking!

The good news is that if you want to start a business and build an MVP (Minimally Viable Product) to see if there is actually a market for your product, you don't have to spend hundreds of thousands to do so...but you might have to later!


e-Literate Top 20 Posts For 2014

Michael Feldstein - Sat, 2014-12-20 12:17

I typically don’t write year-end reviews or top 10 (or 20) lists, but I need to work on our consulting company finances. At this point, any distraction seems more enjoyable than working in QuickBooks.

We’ve had a fun year at e-Literate, and one recent change is that we are now more willing to break stories when appropriate. We typically comment on ed tech stories a few days after the release, providing analysis and commentary, but there are several cases where we felt a story needed to go public. In such cases (e.g. Unizin creation, Cal State Online demise, management changes at Instructure and Blackboard) we tend to break the news objectively, providing mostly descriptions and explanations, allowing others to provide commentary.

The following list is based on Jetpack stats on WordPress, which does not capture people who read posts through RSS feeds (we send out full articles through the feed). So the stats have a bias towards people who come to e-Literate for specific articles rather than our regular readers. We also tend to get longer-term readership of articles over many months, so this list also has a bias for articles posted a while ago.

With that in mind, here are the top 20 most read articles on e-Literate in terms of page views for the past 12 months along with publication date.

  1. Can Pearson Solve the Rubric’s Cube? (Dec 2013) – This article proves that people are willing to read a 7,000 word post published on New Year’s Eve.
  2. A response to USA Today article on Flipped Classroom research (Oct 2013) – This article is our most steady one, consistently getting around 100 views per day.
  3. Unizin: Indiana University’s Secret New “Learning Ecosystem” Coalition (May 2014) – This is the article where we broke the story about Unizin, based largely on a presentation at Colorado State University.
  4. Blackboard’s Big News that Nobody Noticed (Jul 2014) – This post commented on the Blackboard users’ conference and some significant changes that got buried in the keynote and much of the press coverage.
  5. Early Review of Google Classroom (Jul 2014) – Meg Tufano got pilot access to the new system and allowed me to join the testing; this article mostly shares Meg’s findings.
  6. Why Google Classroom won’t affect institutional LMS market … yet (Jun 2014) – Before we had pilot access to the system, this article described the likely market effects from Google’s new system.
  7. Competency-Based Education: An (Updated) Primer for Today’s Online Market (Dec 2013) – Given the sudden rise in interest in CBE, this article updated a 2012 post explaining the concept.
  8. The Resilient Higher Ed LMS: Canvas is the only fully-established recent market entry (Feb 2014) – Despite all the investment in ed tech and market entries, this article noted how stable the LMS market is.
  9. Why VCs Usually Get Ed Tech Wrong (Mar 2014) – This post combined references to “selling Timex knockoffs in Times Square” with a challenge to the application of disruptive innovation.
  10. New data available for higher education LMS market (Nov 2013) – This article called out the Edutechnica and ListEdTech sites with their use of straight data (not just sampling surveys) to clarify the LMS market.
  11. InstructureCon: Canvas LMS has different competition now (Jun 2014) – This was based on the Instructure users’ conference and the very different attitude from past years.
  12. Dammit, the LMS (Nov 2014) – This rant called out how the LMS market is largely following consumer demand from faculty and institutions.
  13. Why Unizin is a Threat to edX (May 2014) – This follow-on commentary tried to look at what market effects would result from Unizin introduction.
  14. State of the Anglosphere’s Higher Education LMS Market: 2013 Edition (Nov 2013) – This was last year’s update of the LMS squid graphic.
  15. Google Classroom: Early videos of their closest attempt at an LMS (Jun 2014) – This article shared early YouTube videos showing people what the new system actually looked like.
  16. State of the US Higher Education LMS Market: 2014 Edition (Oct 2014) – This was this year’s update of the LMS squid graphic.
  17. About Michael – How big is Michael’s fan club?
  18. What is a Learning Platform? (May 2012) – The old post called out and helped explain the general move from monolithic systems to platforms.
  19. What Faculty Should Know About Adaptive Learning (Dec 2013) – This was a reprint of invited article for American Federation of Teachers.
  20. Instructure’s CTO Joel Dehlin Abruptly Resigns (Jul 2014) – Shortly after the Instructure users’ conference, Joel resigned from the company.

Well, that was more fun than financial reporting!

The post e-Literate Top 20 Posts For 2014 appeared first on e-Literate.

Exadata Patching Introduction

The Oracle Instructor - Sat, 2014-12-20 10:24

These I consider the most important points about Exadata Patching:

Where is the most recent information?

MOS Note 888828.1 is your first read whenever you think about Exadata Patching

What is to patch with which utility?

Exadata Patching

Expect quarterly bundle patches for the storage servers and the compute nodes. The other components (InfiniBand switches, Cisco Ethernet switch, PDUs) are patched less frequently and are therefore not in the picture.

The storage servers have their software image (which includes Firmware, OS and Exadata Software) exchanged completely with the new one using patchmgr. The compute nodes get OS (and Firmware) updates with dbnodeupdate.sh, a tool that accesses an Exadata yum repository. Bundle patches for the Grid Infrastructure and for the Database Software are applied with opatch.
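To give a rough idea of what these tools look like in practice, here are representative invocations only; the exact syntax depends on the patch version, so always follow MOS Note 888828.1 and the patch README rather than this sketch:

# Storage servers (run from a driving node; cell_group lists the cells)
./patchmgr -cells cell_group -patch_check_prereq -rolling
./patchmgr -cells cell_group -patch -rolling

# Compute node OS and firmware, pointing at an Exadata yum repository or ISO
./dbnodeupdate.sh -u -l <repository-url-or-iso>

# Grid Infrastructure / Database bundle patches, per the patch README
opatch apply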

Rolling or non-rolling?

This is the sensitive part! Technically, you can always apply the patches for the storage servers and the patches for the compute node OS and Grid Infrastructure in a rolling fashion, taking down only one server at a time. The RAC databases running on the Database Machine will remain available during the patching. Should you do that?

Let’s focus on the storage servers first: Rolling patches are recommended only if you have ASM diskgroups with high redundancy or if you have a standby site to fail over to if needed. In other words: If you have a quarter rack without a standby site, don’t use rolling patches! That is because the DBFS_DG diskgroup that contains the voting disks cannot have high redundancy in a quarter rack with just three storage servers.

Okay, so you have a half rack or bigger. Expect one storage server patch to take about two hours. That adds up to 14 hours of patching time (for seven storage servers) with the rolling method. Make sure that management is aware of that before deciding on the strategy.

Now to the compute nodes: If the patch is RAC rolling applicable, you can apply it that way regardless of the ASM diskgroup redundancy. If a compute node gets damaged during the rolling upgrade, no data loss will happen. On a quarter rack without a standby site, however, you still put availability at risk, because there are only two compute nodes and one could fail while the other is down for patching.

Why you will want to have a Data Guard Standby Site

Apart from the obvious reason for Data Guard – Disaster Recovery – there are several benefits associated with the patching strategy:

  • You can afford to do rolling patches with ASM diskgroups using normal redundancy and with RAC clusters that have only two nodes.

  • You can apply the patches on the standby site first and test them there – using the snapshot standby database functionality (and Database Replay, if you have licensed Real Application Testing); see the sketch after this list.

  • A patch set can be applied on the standby first, so the downtime for end users can be reduced to the time it takes to do a switchover.

  • A release upgrade can be done with a (Transient) Logical Standby, again reducing the downtime to the time it takes to do a switchover.
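As a minimal sketch of the snapshot standby round trip (it assumes flashback logging and a fast recovery area are configured on the standby; Data Guard broker users would do the same via DGMGRL):

-- On the mounted physical standby: open it read-write for patch testing
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
-- ... test against the freshly patched stack, e.g. with Database Replay ...
-- Shut down and mount the database again, then discard the test changes
-- and resume life as a physical standby
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;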

I suppose this will be my last posting in 2014, so Happy Holidays and a Happy New Year to all of you :-)


Tagged: exadata
Categories: DBA Blogs

PeopleTools 8.54 Feature: Support for Oracle Database Materialized Views

Javier Delgado - Fri, 2014-12-19 17:04
One of the new features of PeopleTools 8.54 is support for Oracle Database Materialized Views. In a nutshell, a Materialized View can be seen as a snapshot of a given view. When you query a Materialized View, the data is not necessarily accessed online; instead it is retrieved from the latest snapshot. This can greatly improve query performance, particularly for complex SQL or Pivot Grids.

Materialized Views Features
Apart from the performance benefits associated with them, one of the most interesting features of Materialized Views is how the data refresh is handled. Oracle Database supports two ways of refreshing data:


  • On Commit: data is refreshed whenever a commit takes place in any of the underlying tables. In a way, this method is equivalent to maintaining a staging table (the Materialized View) through triggers whenever the source table changes, but all this complexity is hidden from the developer. Unfortunately, this method is only available for join-based or single-table aggregate views.

Although it has the benefit of providing near-online information, you would normally use On Commit only for views based on tables that do not change very often. Because the Materialized View is refreshed every time a commit is made, insert, update and delete performance on the source tables will be affected. Hint: You would normally use the On Commit method for views based on Control tables, not Transactional tables.
  • On Demand: data is refreshed on demand. This option is valid for all types of views, and implies that the Materialized View data is only refreshed when requested by the administrator. PeopleTools 8.54 includes a page named Materialized View Maintenance where the on-demand refreshes can be configured to run periodically.




If you choose the On Demand mode, the actual data refresh can be done using one of two methods (see the SQL sketch after this list):


  • Fast, which just refreshes the rows in the Materialized View affected by the changes made to the source records.


  • Full, which fully recalculates the Materialized View contents. This method is preferable when large-volume changes are usually performed against the source records between refreshes. It is also required after certain types of updates on the source records (i.e. INSERT statements using the APPEND hint), and when one of the source records is itself a Materialized View that has been refreshed using the Full method.
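Under the covers, the DDL generated for these options is ordinary Oracle Materialized View syntax. The following is just a hand-written sketch with made-up record names (PS_DEPT_TBL, PS_JOB and the columns are illustrative, not the output PeopleTools actually produces), showing the two refresh modes and the two refresh methods:

-- ON COMMIT with fast refresh: only for join-based or single-table aggregate
-- views, and the source table needs a materialized view log
CREATE MATERIALIZED VIEW LOG ON ps_dept_tbl
  WITH ROWID, SEQUENCE (setid) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW mv_dept_summary
  REFRESH FAST ON COMMIT
  AS SELECT setid, COUNT(*) dept_cnt
     FROM   ps_dept_tbl
     GROUP  BY setid;

-- ON DEMAND: refreshed only when requested, e.g. by the Materialized View
-- Maintenance process; 'F' = fast, 'C' = complete (full)
CREATE MATERIALIZED VIEW mv_job_history
  REFRESH ON DEMAND
  AS SELECT emplid, effdt, deptid FROM ps_job;

EXEC DBMS_MVIEW.REFRESH('MV_JOB_HISTORY', 'C');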


How can we use them in PeopleTools?
Before PeopleTools 8.54, Materialized Views could be used as an Oracle Database feature, but the DBA was responsible for editing the Application Designer build scripts to include the specific syntax for this kind of view. On top of that, the DBA would need to schedule the data refresh directly from the database.

PeopleTools 8.54 introduces support within the PeopleSoft tools. In the first place, Application Designer now shows new options for View records:



We have already seen what Refresh Mode and Refresh Method mean. The Build Options indicate to Application Designer whether the Materialized View data needs to be calculated when the build is executed or whether it can be delayed until the first refresh is requested from the Materialized View Maintenance page.

This page is used to determine when to refresh the Materialized Views. The refresh can be executed for multiple views at once and scheduled using the usual PeopleTools Process Scheduler recurrence features. Alternatively, the Refresh Interval [seconds] may be used to indicate to the database that the view needs to be refreshed every n seconds.

Limitations
The main disadvantage of using Materialized Views is that they are specific to Oracle Database. They will not work if you are using any other platform, in which case the record acts like a normal view, keeping similar functional behaviour but without the performance advantages of Materialized Views.

Conclusions
All in all, Materialized Views provide a very interesting way to improve system performance while keeping the information reasonably up to date. Personally, I wish I'd had this feature available for many of the reports I've built over all these years... :-)

Consumer Security for the season and Today's World

PeopleSoft Technology Blog - Fri, 2014-12-19 13:41

Just to go beyond my usual security sessions, I was asked recently to talk to a local business and consumer group about personal cyber security. Here is the document I used for the session; you might find some useful tips in it.

Protecting your online shopping experience

- check retailer returns policy

- use a credit card rather than debit card, or check the protection on the debit card

- use a temporary/disposable credit card e.g. ShopSafe from Bank of America

- use a low limit credit card - with protection, e.g. AMEX green card

- check your account for random small amount charges and charitable contributions

- set spending and "card not present" alerts

Protecting email

- don't use same passwords for business and personal accounts

- use a robust email service provider

- set junk/spam threshold in your email client

- only use web mail for low risk accounts (see Note below)

- don't click on links in the email, DON’T click on links in email – no matter who you think sent it

Protecting your computer

- if you depend on a computer/laptop/tablet for business, ONLY use it for business

- don't share your computer with anyone, including your children

- if you provide your children with a computer/laptop, refresh them from "recovery disks" on a periodic basis

- teach children value of backing up important data

- if possible have your children only use their laptops/devices in family rooms where the activity can be passively observed

- use commercial, paid subscription, antivirus/anti malware on all devices (see Note below)

- carry and use a security cable when traveling or away from your office

Protecting your smart phone/tablet

- don't share your device

- make sure you have a secure lock phrase/PIN and set the idle timeout

- don't recharge it using the USB port on someone else's laptop/computer

- ensure the public Wi-Fi which you use is a trusted Wi-Fi (also - see Note below)

- store your data in the cloud, preferably not (or not only) the phone/tablet

- don't have the device "remember" your password, especially for sensitive accounts

- exercise caution when downloading software e.g. games/apps, especially "free" software (see Note below)

Protect your social network

- don't mix business and personal information in your social media account

- use separate passwords for business and personal social media accounts

- ensure you protect personal information from the casual user

- check what information is being shared about you or photos tagged by your "friends"

- don't share phone numbers or personal/business contact details,
e.g. use the "ask me for my ..." feature

General protection and the “Internet of Things”

- be aware of cyber stalking

- be aware of surreptitious monitoring
e.g. “Google Glass” and smart phone cameras

- consider “nanny” software, especially for children’s devices

- be aware of “click bait” – e.g. apparently valid “news” stories which are really sponsored messages

- be aware of ATM “skimming”, including self serve gas pumps

- be aware of remotely enabled camera and microphone (laptop, smart phone, tablet)

Note: Remember, if you’re not paying for the product, you ARE the product

Important! PeopleTools Requirements for PeopleSoft Interaction Hub

PeopleSoft Technology Blog - Fri, 2014-12-19 12:04

The PeopleSoft Interaction Hub* follows a release model of continuous delivery.  With this release model, Oracle delivers new functionality and enhancements on the latest release throughout the year, without requiring application upgrades to new releases.

PeopleTools is a critical enabler for delivering new functionality and enhancements for releases on continuous delivery.  The PeopleTools adoption policy for applications on continuous release is designed to provide reasonable options for customers to stay current on PeopleTools while adopting newer PeopleTools features. The basic policy is as follows:

Interaction Hub customers must upgrade to a PeopleTools release no later than 24 months after that PeopleTools release becomes generally available.  

For example, PeopleTools 8.53 was released in February 2013. Therefore, customers who use Interaction Hub will be required to upgrade to PeopleTools 8.53 (or newer, such as PeopleTools 8.54) no later than February 2015 (24 months after the General Availability date of PeopleTools 8.53). As of February 2015, product maintenance and new features may require PeopleTools 8.53. 

Customers should start planning their upgrades if they are on PeopleTools releases that are more than 24 months old.  See the Lifetime Support Summary for PeopleSoft Releases (doc id 1348959.1) in My Oracle Support for details on PeopleTools support policies.

* The PeopleSoft Interaction Hub is the latest branding of the product.  These support guidelines also apply to the same product under the names PeopleSoft Enterprise Portal, PeopleSoft Community Portal, and PeopleSoft Applications Portal.