

Migration

DBMS2 - Sat, 2015-01-10 00:45

There is much confusion about migration, by which I mean applications or investment being moved from one “platform” technology — hardware, operating system, DBMS, Hadoop, appliance, cluster, cloud, etc. — to another. Let’s sort some of that out. For starters:

  • There are several fundamentally different kinds of “migration”.
    • You can re-host an existing application.
    • You can replace an existing application with another one that does similar (and hopefully also new) things. This new application may be on a different platform than the old one.
    • You can build or buy a wholly new application.
    • There’s also the in-between case in which you extend an old application with significant new capabilities — which may not be well-suited for the existing platform.
  • Motives for migration generally fall into a few buckets. The main ones are:
    • You want to use a new app, and it only runs on certain platforms.
    • The new platform may be cheaper to buy, rent or lease.
    • The new platform may have lower operating costs in other ways, such as administration.
    • Your employees may like the new platform’s “cool” aspect. (If the employee is sufficiently high-ranking, substitute “strategic” for “cool”.)
  • Different apps may be much easier or harder to re-host. At two extremes:
    • It can be forbiddingly difficult to re-host an OLTP (OnLine Transaction Processing) app that is heavily tuned, tightly integrated with your other apps, and built using your DBMS vendor’s proprietary stored-procedure language.
    • It might be trivial to migrate a few long-running SQL queries to a new engine, and pretty easy to handle the data connectivity part of the move as well.
  • Certain organizations, usually packaged software companies, design portability into their products from the get-go, with at least partial success.

I mixed together true migration and new-app platforms in a post last year about DBMS architecture choices, when I wrote:

  • Sometimes something isn’t broken, and doesn’t need fixing.
  • Sometimes something is broken, and still doesn’t need fixing. Legacy decisions that you now regret may not be worth the trouble to change.
  • Sometimes — especially but not only at smaller enterprises — choices are made for you. If you operate on SaaS, plus perhaps some generic web hosting technology, the whole DBMS discussion may be moot.

In particular, migration away from legacy DBMS raises many issues:

  • Feature incompatibility (especially in stored-procedure languages and/or other vendor-specific SQL).
  • Your staff’s programming and administrative skill-sets.
  • Your investment in DBMS-related tools.
  • Your supply of hockey tickets from the vendor’s salesman.

Except for the first, those concerns can apply to new applications as well. So if you’re going to use something other than your enterprise-standard RDBMS, you need a good reason.

I then argued that such reasons are likely to exist for NoSQL DBMS, but less commonly for NewSQL. My views on that haven’t changed in the interim.

More generally, my pro-con thoughts on migration start:

  • Pure application re-hosting is rarely worthwhile. Migration risks and costs outweigh the benefits, except in a few cases, one of which is the migration of ELT (Extract/Load/Transform) from expensive analytic RDBMS to Hadoop.
  • Moving from in-house to co-located data centers can offer straightforward cost savings, because it’s not accompanied by much in the way of programming costs, risks, or delays. Hence Rackspace’s refocus on colo at the expense of cloud. (But it can be hard on your data center employees.)
  • Moving to an in-house cluster can be straightforward, and is common. VMware is the most famous such example. Exadata consolidation is another.
  • Much of new application/new functionality development is in areas where application lifespans are short — e.g. analytics, or customer-facing internet. Platform changes are then more practical as well.
  • New apps or app functionality often should and do go where the data already is. This is especially true in the case of cloud/colo/on-premises decisions. Whether it’s important in a single location may depend upon the challenges of data integration.

I’m also often asked for predictions about migration. In light of the above, I’d say:

  • Successful DBMS aren’t going away.
    • OLTP workloads can usually be lost only as fast as applications are replaced, and that tends to be a slow process. Claims to the contrary are rarely persuasive.
    • Analytic DBMS can lose workloads more easily — but their remaining workloads often grow quickly, creating an offset.
  • A large fraction of new apps are up for grabs. Analytic applications go well on new data platforms. So do internet apps of many kinds. The underlying data for these apps often starts out in the cloud. SaaS (Software as a Service) is coming on strong. Etc.
  • I stand by my previous view that most computing will wind up on appliances, clusters or clouds.
  • New relational DBMS will be slow to capture old workloads, even if they are slathered with in-memory fairy dust.

And for a final prediction — discussion of migration isn’t going to go away either. :)

Categories: Other

Notes on machine-generated data, year-end 2014

DBMS2 - Wed, 2014-12-31 21:49

Most IT innovation these days is focused on machine-generated data (sometimes just called “machine data”), rather than human-generated. So as I find myself in the mood for another survey post, I can’t think of any better idea for a unifying theme.

1. There are many kinds of machine-generated data. Important categories include:

  • Web, network and other IT logs.
  • Game and mobile app event data.
  • CDRs (telecom Call Detail Records).
  • “Phone-home” data from large numbers of identical electronic products (for example set-top boxes).
  • Sensor network output (for example from a pipeline or other utility network).
  • Vehicle telemetry.
  • Health care data, in hospitals.
  • Digital health data from consumer devices.
  • Images from public-safety camera networks.
  • Stock tickers (if you regard them as being machine-generated, which I do).

That’s far from a complete list, but if you think about those categories you’ll probably capture most of the issues surrounding other kinds of machine-generated data as well.

2. Technology for better information and analysis is also technology for privacy intrusion. Public awareness of privacy issues is focused in a few areas, mainly:

  • Government snooping on the contents of communications.
  • Communication traffic analysis.
  • Photos and videos (airport scanners, public cameras, etc.)
  • Commercial ad targeting.
  • Traditional medical records.

Other areas, however, continue to be overlooked, with the two biggies in my opinion being:

  • The potential to apply marketing-like psychographic analysis in other areas, such as hiring decisions or criminal justice.
  • The ability to track people’s movements in great detail, which will increase greatly yet again as the market for consumer digital health matures (and some think that will happen soon).

My core arguments about privacy and surveillance seem as valid as ever.

3. The natural database structures for machine-generated data vary wildly. Weblog data structure is often remarkably complex. Log data from complex organizations (e.g. IT shops or hospitals) might comprise many streams, each with a different (even if individually simple) organization. But in the majority of my example categories, record structure is very simple and repeatable. Thus, there are many kinds of machine-generated data that can, at least in principle, be handled well by a relational DBMS …

4. … at least to some extent. In a further complication, much machine-generated data arrives as a kind of time series. Many (but not all) time series call for a strong commitment to event-series styles of analytics. Event-series analytics are a challenge for relational DBMS, but Vertica and others have tried to step up with various kinds of temporal predicates or datatypes. Event series are also a challenge for business intelligence vendors, and a potentially significant driver for competitive rebalancing in the BI market.
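
To make “event-series styles of analytics” a bit more concrete, here is a minimal Python sketch of sessionization: assigning events to sessions based on the gap since the same user’s previous event. Everything in it (the events, the 30-minute gap rule) is hypothetical, and it is meant only to show the row-to-adjacent-row logic that plain SQL expresses awkwardly and that temporal predicates or datatypes try to address.

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Hypothetical sample of machine-generated events: (user_id, timestamp).
    events = [
        ("u1", datetime(2014, 12, 31, 9, 0, 0)),
        ("u1", datetime(2014, 12, 31, 9, 1, 30)),
        ("u1", datetime(2014, 12, 31, 11, 5, 0)),   # long gap -> new session
        ("u2", datetime(2014, 12, 31, 9, 0, 45)),
    ]

    SESSION_GAP = timedelta(minutes=30)

    def sessionize(events):
        """Assign a session number per user based on inter-event gaps."""
        by_user = defaultdict(list)
        for user, ts in sorted(events, key=lambda e: (e[0], e[1])):
            by_user[user].append(ts)
        sessions = []
        for user, stamps in by_user.items():
            session_id = 0
            prev = None
            for ts in stamps:
                if prev is not None and ts - prev > SESSION_GAP:
                    session_id += 1          # gap too long: start a new session
                sessions.append((user, ts, session_id))
                prev = ts
        return sessions

    for row in sessionize(events):
        print(row)

The point is the ordering dependence: each row’s session assignment depends on the previous row for the same user, which is exactly the kind of logic event-series extensions try to express inside the database.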

5. Event series even aside, I wish I understood more about business intelligence for non-tabular data. I plan to fix that.

6. Streaming and memory-centric processing are closely related subjects. What I wrote recently about them for Hadoop still applies: Spark, Kafka, etc. are still the base streaming case going forward; Storm is still around as an alternative; Tachyon or something like it will change the game somewhat. But not all streaming machine-generated data needs to land in Hadoop. As noted above, relational data stores (especially memory-centric ones) can suffice. So can NoSQL. So can Splunk.
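
As a small illustration of the point that streaming machine-generated data can land in a memory-centric store rather than Hadoop, here is a hedged Python sketch that keeps a sliding one-minute count of error events per host entirely in memory. The event source, field names and window size are all invented; in practice the feed might come from Kafka, Spark Streaming, Storm or something else.

    import time
    from collections import Counter, deque

    WINDOW_SECONDS = 60

    class RollingErrorCounter:
        """Keep a sliding one-minute count of error events per host, in memory."""

        def __init__(self):
            self.events = deque()          # (timestamp, host)
            self.counts = Counter()

        def add(self, host, ts=None):
            ts = ts if ts is not None else time.time()
            self.events.append((ts, host))
            self.counts[host] += 1
            self._expire(ts)

        def _expire(self, now):
            while self.events and now - self.events[0][0] > WINDOW_SECONDS:
                _, host = self.events.popleft()
                self.counts[host] -= 1

        def top(self, n=5):
            return self.counts.most_common(n)

    # Hypothetical usage with a stream of (host, status) log events:
    counter = RollingErrorCounter()
    for host, status in [("web-1", 500), ("web-2", 200), ("web-1", 503)]:
        if status >= 500:
            counter.add(host)
    print(counter.top())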

Not all these considerations are important in all use cases. For one thing, latency requirements vary greatly. For example:

  • High-frequency trading is an extreme race; microseconds matter.
  • Internet interaction applications increasingly require data freshness to the last click or other user action. Computational latency requirements can go down to single-digit milliseconds. Real-time ad auctions have a race aspect that may drive latency lower yet.
  • Minute-plus response can be fine for individual remote systems. Sometimes they ping home more rarely than that.

There’s also still plenty of true batch mode, but — and I say this as part of a conversation that’s been underway for over 40 years — interactive computing is preferable whenever feasible.

7. My views about predictive analytics are still somewhat confused. For starters:

  • The math and technology of predictive modeling both still seem pretty simple …
  • … but sometimes achieve mind-blowing results even so.
  • There’s a lot of recent innovation in predictive modeling, but adoption of the innovative stuff is still fairly tepid.
  • Adoption of the simple stuff is strong in certain market sectors, especially ones connected to customer understanding, such as marketing or anti-fraud.

So I’ll mainly just link to some of my past posts on the subject, and otherwise leave discussion of predictive analytics to another day.

Finally, back in 2011 I tried to broadly categorize analytics use cases. Based on that and also on some points I just raised above, I’d say that a ripe area for breakthroughs is problem and anomaly detection and diagnosis, specifically for machines and physical installations, rather than in the marketing/fraud/credit score areas that are already going strong. That’s an old discipline; the concept of statistical process control dates back before World War II. Perhaps such breakthroughs are already underway; the Conviva retraining example listed above is certainly imaginative. But I’d like to see a lot more in the area.

Even more important, of course, could be some kind of revolution in predictive modeling for medicine.

Categories: Other

“Innovation in Managing the Chaos of Everyday Project Management” is now on YouTube

If you missed Fishbowl’s recent webinar on our new Enterprise Information Portal for Project Management, you can now view a recording of it on YouTube.

 

Innovation in Managing the Chaos of Everyday Project Management discusses our strategy for leveraging the content management and collaboration features of Oracle WebCenter to enable project-centric organizations to build and deploy a project management portal. This solution was designed especially for groups like E & C firms and oil and gas companies, which need multiple applications combined into one portal for simple access.

If you’d like to learn more about the Enterprise Information Portal for Project Management, visit our website or email our sales team at sales@fishbowlsolutions.com.

The post “Innovation in Managing the Chaos of Everyday Project Management” is now on YouTube appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

WibiData’s approach to predictive modeling and experimentation

DBMS2 - Tue, 2014-12-16 06:29

A conversation I have too often with vendors goes something like:

  • “That confidential thing you told me is interesting, and wouldn’t harm you if revealed; probably quite the contrary.”
  • “Well, I guess we could let you mention a small subset of it.”
  • “I’m sorry, that’s not enough to make for an interesting post.”

That was the genesis of some tidbits I recently dropped about WibiData and predictive modeling, especially but not only in the area of experimentation. However, Wibi just reversed course and said it would be OK for me to tell more or less the full story, as long as I note that we’re talking about something that’s still in beta test, with all the limitations (to the product and my information alike) that beta implies.

As you may recall:

With that as background, WibiData’s approach to predictive modeling as of its next release will go something like this:

  • There is still a strong element of classical modeling by data scientists/statisticians, with the models re-scored in batch, perhaps nightly.
  • But of course at least some scoring should be done as real-time as possible, to accommodate fresh data such as:
    • User interactions earlier in today’s session.
    • Technology for today’s session (device, connection speed, etc.)
    • Today’s weather.
  • WibiData Express is/incorporates a Scala-based language for modeling and query.
  • WibiData believes Express plus a small algorithm library gives better results than more mature modeling libraries.
    • There is some confirming evidence of this …
    • … but WibiData’s customers have by no means switched over yet to doing the bulk of their modeling in Wibi.
  • WibiData will allow line-of-business folks to experiment with augmentations to the base models.
  • Supporting technology for predictive experimentation in WibiData will include:
    • Automated multi-armed bandit testing (in previous versions even A/B testing was manual); a minimal bandit sketch appears after this list.
    • A facility for including fairly arbitrary code in otherwise conventional model-scoring algorithms, where the conventional scoring models can come:
      • Straight from WibiData Express.
      • Via PMML (Predictive Modeling Markup Language) generated by other modeling tools.
    • An appropriate user interface for the line-of-business folks to do certain kinds of injecting.
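
Since “automated multi-armed bandit testing” carries a lot of weight in the list above, here is a minimal epsilon-greedy bandit sketch in Python. It illustrates the general technique only (it is not WibiData’s implementation), and the variant names and conversion rates are made up.

    import random

    class EpsilonGreedyBandit:
        """Pick among variants, mostly exploiting the best-observed one."""

        def __init__(self, variants, epsilon=0.1):
            self.epsilon = epsilon
            self.pulls = {v: 0 for v in variants}
            self.rewards = {v: 0.0 for v in variants}

        def choose(self):
            if random.random() < self.epsilon:
                return random.choice(list(self.pulls))      # explore
            # Exploit: highest average reward so far (unseen variants go first).
            return max(self.pulls, key=lambda v:
                       self.rewards[v] / self.pulls[v] if self.pulls[v] else float("inf"))

        def record(self, variant, reward):
            self.pulls[variant] += 1
            self.rewards[variant] += reward

    # Hypothetical usage: three scoring variants, reward = 1 if the user converts.
    bandit = EpsilonGreedyBandit(["base_model", "tweak_a", "tweak_b"])
    true_rates = {"base_model": 0.05, "tweak_a": 0.07, "tweak_b": 0.04}
    for _ in range(1000):
        v = bandit.choose()
        bandit.record(v, 1 if random.random() < true_rates[v] else 0)
    print(bandit.pulls)

Unlike a fixed A/B split, the bandit shifts traffic toward whichever variant is performing better as evidence accumulates, which is what makes the testing “automated”.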

Let’s talk more about predictive experimentation. WibiData’s paradigm for that is:

  • Models are worked out in the usual way.
  • Businesspeople have reasons for tweaking the choices the models would otherwise dictate.
  • They enter those tweaks as rules.
  • The resulting combination — models plus rules — is executed and hence tested.

If those reasons for tweaking are in the form of hypotheses, then the experiment is a test of those hypotheses. However, WibiData has no provision at this time to automagically incorporate successful tweaks back into the base model.
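
Here is a minimal sketch of that models-plus-rules paradigm in Python. The feature names, rule conditions and adjustments are entirely hypothetical (this is not WibiData’s actual API); the point is only that rules wrap the model’s score rather than being folded back into the model.

    def base_model_score(features):
        """Stand-in for a score produced by the usual modeling process."""
        return (0.3 * features.get("past_purchases", 0)
                + 0.7 * features.get("session_clicks", 0))

    # Businesspeople enter tweaks as rules: (condition, adjustment) pairs.
    rules = [
        (lambda f: f.get("hour_of_day", 12) in range(7, 10),   # morning shoppers in a hurry
         lambda score: score * 0.8),                           # favor streamlined offers
        (lambda f: f.get("local_team_won", False),
         lambda score: score + 0.5),
    ]

    def score_with_rules(features):
        score = base_model_score(features)
        for condition, adjust in rules:
            if condition(features):
                score = adjust(score)
        return score

    print(score_with_rules({"past_purchases": 2, "session_clicks": 3,
                            "hour_of_day": 8}))

The experiment is then simply to route some traffic through score_with_rules and some through base_model_score alone, and compare outcomes.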

What might those hypotheses be like? It’s a little tough to say, because I don’t know in fine detail what is already captured in the usual modeling process. WibiData gave me only one real-life example, in which somebody hypothesized that shoppers would be in more of a hurry at some times of day than others, and hence would want more streamlined experiences when they could spare less time. Tests confirmed that was correct.

That said, I did grow up around retailing, and so I’ll add:

  • Way back in the 1970s, Wal-Mart figured out that in large college towns, clothing in the football team’s colors was wildly popular. I’d hypothesize such a rule at any vendor selling clothing suitable for being worn in stadiums.
  • A news event, blockbuster movie or whatever might trigger a sudden change in/addition to fashion. An alert merchant might guess that before the models pick it up. Even better, she might guess which psychographic groups among her customers were most likely to be paying attention.
  • Similarly, if a news event caused a sudden shift in buyers’ optimism/pessimism/fear of disaster, I’d test a response to that immediately.

Finally, data scientists seem to still be a few years away from neatly solving the problem of multiple shopping personas — are you shopping in your business capacity, or for yourself, or for a gift for somebody else (and what can we infer about that person)? Experimentation could help fill the gap.

Categories: Other

Notes and links, December 12, 2014

DBMS2 - Fri, 2014-12-12 05:05

1. A couple years ago I wrote skeptically about integrating predictive modeling and business intelligence. I’m less skeptical now.

For starters:

  • The predictive experimentation I wrote about over Thanksgiving calls naturally for some BI/dashboarding to monitor how it’s going.
  • If you think about Nutonian’s pitch, it can be approximated as “Root-cause analysis so easy a business analyst can do it.” That could be interesting to jump to after BI has turned up anomalies. And it should be pretty easy to whip up a UI for choosing a data set and objective function to model on, since those are both things that the BI tool would know how to get to anyway.

I’ve also heard a couple of ideas about how predictive modeling can support BI. One is via my client Omer Trajman, whose startup ScalingData is still semi-stealthy, but says they’re “working at the intersection of big data and IT operations”. The idea goes something like this:

  • Suppose we have lots of logs about lots of things.* Machine learning can help:
    • Notice what’s an anomaly.
    • Group* together things that seem to be experiencing similar anomalies.
  • That can inform a BI-plus interface for a human to figure out what is happening. (A rough sketch of the idea follows the footnote below.)

Makes sense to me.

* The word “cluster” could have been used here in a couple of different ways, so I decided to avoid it altogether.
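
To make the anomaly idea sketched in the list above slightly more concrete, here is a hedged Python example: per-host log metrics, a crude z-score test for “what’s an anomaly”, and hosts grouped by which metric looks anomalous. The metric names, values and threshold are invented for illustration.

    import statistics
    from collections import defaultdict

    # Hypothetical per-host log metrics gathered over the last hour.
    metrics = {
        "web-1": {"error_rate": 0.02, "latency_ms": 110},
        "web-2": {"error_rate": 0.02, "latency_ms": 620},   # latency looks off
        "web-3": {"error_rate": 0.19, "latency_ms": 105},   # error rate looks off
        "web-4": {"error_rate": 0.03, "latency_ms": 118},
    }

    def anomalies(metrics, threshold=1.5):
        """Flag (metric, hosts) where a host sits more than `threshold` stdevs
        from the mean; the threshold is loose because this sample is tiny."""
        flagged = defaultdict(list)
        for name in ("error_rate", "latency_ms"):
            values = [m[name] for m in metrics.values()]
            mean, stdev = statistics.mean(values), statistics.pstdev(values)
            if stdev == 0:
                continue
            for host, m in metrics.items():
                if abs(m[name] - mean) / stdev > threshold:
                    flagged[name].append(host)     # hosts grouped by shared anomaly
        return flagged

    print(dict(anomalies(metrics)))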

Finally, I’m hearing a variety of “smart ETL/data preparation” and “we recommend what columns you should join” stories. I don’t know how much machine learning there’s been in those to date, but it’s usually at least on the roadmap to make the systems (yet) smarter in the future. The end benefit is usually to facilitate BI.

2. Discussion of graph DBMS can get confusing. For example:

  • Use cases run the gamut from short-request to highly analytic; no graph DBMS is well-suited for all graph use cases.
  • Graph DBMS have huge problems scaling, because graphs are very hard to partition usefully; hence some of the more analytic use cases may not benefit from a graph DBMS at all.
  • The term “graph” has meanings in computer science that have little to do with the problems graph DBMS try to solve, notably directed acyclic graphs for program execution, which famously are at the heart of both Spark and Tez.
  • My clients at Neo Technology/Neo4j call one of their major use cases MDM (Master Data Management), without getting much acknowledgement of that from the mainstream MDM community.

I mention this in part because that “MDM” use case actually has some merit. The idea is that hierarchies such as organization charts, product hierarchies and so on often aren’t actually strict hierarchies. And even when they are, they’re usually strict only at specific points in time; if you care about their past state as well as their present one, a hierarchical model might have trouble describing them. Thus, LDAP (Lightweight Directory Access Protocol) engines may not be an ideal way to manage and reference such “hierarchies”; a graph DBMS might do better.
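
A small Python sketch of why such a “hierarchy” is really a graph: an employee can have more than one manager at once, and reporting edges can carry validity dates, so “who did this person report to on a given date” is a traversal over dated edges rather than a lookup in a strict tree. The names and dates are invented.

    from datetime import date

    # Reporting edges: (employee, manager, valid_from, valid_to). Note that
    # "pat" has two managers at once -- a dotted-line report -- so this is a
    # graph, not a strict tree.
    edges = [
        ("pat", "lee",   date(2013, 1, 1), date(2014, 6, 30)),
        ("pat", "kim",   date(2014, 1, 1), None),            # open-ended edge
        ("lee", "sasha", date(2012, 1, 1), None),
    ]

    def managers_of(employee, as_of):
        """All managers of `employee` whose edge was valid on `as_of`."""
        return [m for (e, m, start, end) in edges
                if e == employee and start <= as_of and (end is None or as_of <= end)]

    print(managers_of("pat", date(2014, 3, 1)))   # ['lee', 'kim'] -- both apply
    print(managers_of("pat", date(2014, 9, 1)))   # ['kim'] -- old edge has expired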

3. There is a surprising degree of controversy among predictive modelers as to whether more data yields better results. Besides, the most common predictive modeling stacks have difficulty scaling. And so it is common to model against samples of a data set rather than the whole thing.*

*Strictly speaking, almost the whole thing — you’ll often want to hold at least a sample of the data back for model testing.

Well, WibiData’s couple of Very Famous Department Store customers have tested WibiData’s ability to model against an entire database vs. their alternative predictive modeling stacks’ need to sample data. WibiData says that both report significantly better results from training over the whole data set than from using just samples.
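
For anyone who wants the sample-vs-whole-data-set point in code form, here is a small, hedged sketch using scikit-learn and synthetic data. The accuracy numbers it prints mean nothing in themselves; the structure (hold data back for testing per the footnote above, then compare a model trained on a small sample against one trained on everything else) is the point.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a large behavioral data set.
    X, y = make_classification(n_samples=50000, n_features=20, random_state=0)

    # Hold data back for testing, per the footnote above.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # Train on a 5% sample of the training data ...
    X_small, _, y_small, _ = train_test_split(
        X_train, y_train, train_size=0.05, random_state=0)
    sample_model = LogisticRegression(max_iter=1000).fit(X_small, y_small)

    # ... versus training on all of it.
    full_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    print("sample-trained accuracy:", round(sample_model.score(X_test, y_test), 3))
    print("fully-trained accuracy: ", round(full_model.score(X_test, y_test), 3))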

4. Scaling Data is on the bandwagon for Spark Streaming and Kafka.

5. Derrick Harris and Pivotal turn out to have been earlier than me in posting about Tachyon bullishness.

6. With the Hortonworks deal now officially priced, Derrick was also free to post more about/from Hortonworks’ pitch. Of course, Hortonworks is saying Hadoop will be Big Big Big, and suggesting we should thus not be dismayed by Hortonworks’ financial performance so far. However, Derrick did not cite Hortonworks actually giving any reasons why its competitive position among Hadoop distribution vendors should improve.

Beyond that, Hortonworks says YARN is a big deal, but doesn’t seem to like Spark Streaming.

Categories: Other