DBMS2

Choices in data management and analysis

“Real-time” is getting real

Tue, 2016-09-06 01:43

I’ve been an analyst for 35 years, and debates about “real-time” technology have run through my whole career. Some of those debates are by now pretty much settled. In particular:

  • Yes, interactive computer response is crucial.
    • Into the 1980s, many apps were batch-only. Demand for such apps dried up.
    • Business intelligence should occur at interactive speeds, which is a major reason that there’s a market for high-performance analytic RDBMS.
  • Theoretical arguments about “true” real-time vs. near-real-time are often pointless.
    • What matters in most cases is human users’ perceptions of speed.
    • Most of the exceptions to that rule occur when machines race other machines, for example in automated bidding (high frequency trading or otherwise) or in network security.

A big issue that does remain open is: How fresh does data need to be? My preferred summary answer is: As fresh as is needed to support the best decision-making. I think that formulation starts with several advantages:

  • It respects the obvious point that different use cases require different levels of data freshness.
  • It cautions against people who think they need fresh information but aren’t in a position to use it. (Such users have driven much bogus “real-time” demand in the past.)
  • It covers cases of both human and automated decision-making.

Straightforward applications of this principle include:

  • In “buying race” situations such as high-frequency trading, data needs to be as fresh as the other guy’s, and preferably even fresher.
  • Supply-chain systems generally need data that’s fresh to within a few hours; in some cases, sub-hour freshness is needed.
  • That’s a good standard for many desktop business intelligence scenarios as well.
  • Equipment-monitoring systems’ need for data freshness depends on how quickly catastrophic or cascading failures can occur or be averted.
    • Different specific cases call for wildly different levels of data freshness.
    • When equipment is well-instrumented with sensors, freshness requirements can be easy to meet.

E-commerce and other internet interaction scenarios can be more complicated, but it seems safe to say:

  • Recommenders/personalizers should take into account information from the current session.
  • Try very hard to give customers correct information about merchandise availability or pricing.

In meeting freshness requirements, multiple technical challenges can come into play.

  • Traditional batch aggregation is too slow for some analytic needs. That’s a core reason for having an analytic RDBMS.
  • Traditional data integration/movement pipelines can also be too slow. That’s a basis for short-request-capable data stores to also capture some analytic workloads. E.g., this is central to MemSQL’s pitch, and to some NoSQL applications as well.
  • Scoring models at interactive speeds is often easy (see the sketch after this list). Retraining them quickly is much harder, and at this point only rarely done.
  • OLTP (OnLine Transaction Processing) guarantees adequate data freshness …
  • … except in scenarios where the transactions themselves are too slow. Questionably-consistent systems — commonly NoSQL — can usually meet performance requirements, but might have issues with the freshness of accurate data.
  • Older generations of streaming technology disappointed. The current generation is still maturing.
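
To make the scoring-vs.-retraining asymmetry concrete, here is a minimal Python sketch with invented feature names and coefficients: scoring a pre-trained model amounts to a small dot product plus a squashing function, cheap enough for interactive response, while producing those coefficients in the first place is the slow, batch-ish part.

```python
import math

# Hypothetical coefficients from an expensive, offline training run.
MODEL_WEIGHTS = {"recency_days": -0.8, "purchases_90d": 0.6, "pages_this_session": 0.3}
MODEL_BIAS = -1.2

def score(features):
    """Score one visitor at request time: a dot product plus a sigmoid."""
    z = MODEL_BIAS + sum(MODEL_WEIGHTS[k] * features.get(k, 0.0) for k in MODEL_WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))      # probability-like score in (0, 1)

# Interactive-speed scoring: microseconds of arithmetic per request.
print(score({"recency_days": 2, "purchases_90d": 3, "pages_this_session": 5}))
```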

Based on all that, what technology investments should you be making, in order to meet “real-time” needs? My answers start:

  • Customer communications, online or telephonic as the case may be, should be based on accurate data. In particular:
    • If your OLTP data is somehow siloed away from your phone support data, fix that immediately, if not sooner. (Fixing it 5-15 years ago would be ideal.)
    • If your eventual consistency is so eventual that customers notice, fix it ASAP.
  • If you invest in predictive analytics/machine learning to support your recommenders/personalizers, then your models should at least be scored on fresh data.
    • If your models don’t support that, reformulate them.
    • If your data pipeline doesn’t support that, rebuild it.
    • Actual high-speed retraining of models isn’t an immediate need. But if you’re going to have to transition to that anyway, consider doing so early and getting it over with.
  • Your BI should have great drilldown and exploration. Find the most active users of such functionality in your enterprise, even if — especially if! — they built some kind of departmental analytic system outside the enterprise mainstream. Ask them what, if anything, they need that they don’t have. Respond accordingly.
  • Whatever expensive and complex equipment you have, slather it with sensors. Spend a bit of research effort on seeing whether the resulting sensor logs can be made useful.
    • Please note that this applies to vehicles and fixed objects (e.g. buildings, pipelines), as well as to traditional industrial machinery.
    • It also applies to any products you make which draw electric power.

So yes — I think “real-time” has finally become pretty real.


Are analytic RDBMS and data warehouse appliances obsolete?

Sun, 2016-08-28 20:28

I used to spend most of my time — blogging and consulting alike — on data warehouse appliances and analytic DBMS. Now I’m barely involved with them. The most obvious reason is that there have been drastic changes in industry structure:

Simply reciting all that, however, begs the question of whether one should still care about analytic RDBMS at all.

My answer, in a nutshell, is:

Analytic RDBMS — whether on premises in software, in the form of data warehouse appliances, or in the cloud – are still great for hard-core business intelligence, where “hard-core” can refer to ad-hoc query complexity, reporting/dashboard concurrency, or both. But they aren’t good for much else.

To see why, let’s start by asking: “With what do you want to integrate your analytic SQL processing?”

  • If you want to integrate with relational OLTP (OnLine Transaction Processing), your OLTP RDBMS vendor surely has a story worth listening to. Memory-centric offerings MemSQL and SAP HANA are also pitched that way.
  • If you want to integrate with your SAP apps in particular, HANA is the obvious choice.
  • If you want to integrate with other work you do in the Amazon cloud, Redshift is worth a look.

Beyond those cases, a big issue is integration with … well, with data integration. Analytic RDBMS got a lot of their workloads from ELT or ETLT, which stand for Extract/(Transform)/Load/Transform. I.e., you’d load data into an efficient analytic RDBMS and then do your transformations, vs. the “traditional” (for about 10-15 years of tradition) approach of doing your transformations in your ETL (Extract/Transform/Load) engine. But in bigger installations, Hadoop often snatches away that part of the workload, even if the rest of the processing remains on a dedicated analytic RDBMS platform such as Teradata’s.
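
As a generic, vendor-neutral illustration of the ELT pattern, here is a small Python sketch using SQLite from the standard library: the raw records are loaded first, and the transformation then runs as SQL inside the database. The table names, columns and cleanup rules are all invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Extract/Load: land the raw, untransformed records first.
conn.execute("CREATE TABLE raw_orders (order_id INTEGER, amount_text TEXT, country TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, "19.99", "us"), (2, "5.00", "US"), (3, "n/a", "de")],
)

# Transform: do the cleanup in SQL, inside the database, after loading.
conn.execute("""
    CREATE TABLE orders AS
    SELECT order_id,
           CAST(amount_text AS REAL) AS amount,
           UPPER(country)            AS country
    FROM raw_orders
    WHERE amount_text GLOB '[0-9]*'   -- drop rows that are not numeric
""")

print(conn.execute("SELECT * FROM orders").fetchall())
# [(1, 19.99, 'US'), (2, 5.0, 'US')]
```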

And suppose you want to integrate with more advanced analytics — e.g. statistics, other predictive modeling/machine learning, or graph analytics? Well — and this both surprised and disappointed me — analytic platforms in the RDBMS sense didn’t work out very well. Early Hadoop had its own problems too. But Spark is doing just fine, and seems poised to win.

My technical observations around these trends include:

  • Advanced analytics commonly require flexible, iterative processing.
  • Spark is much better at such processing than earlier Hadoop …
  • … which in turn is better than anything that’s been built into an analytic RDBMS.
  • Open source/open standards and the associated skill sets come into play too. Highly vendor-proprietary DBMS-tied analytic stacks don’t have enough advantages over open ones.
  • Notwithstanding the foregoing, RDBMS-based platforms can still win if a big part of the task lies in fancy SQL.

And finally, if a task is “partly relational”, then Hadoop or Spark often fit both parts.

  • They don’t force you into using SQL for everything, nor into putting all your data into relational schemas, and that flexibility can be a huge relief.
  • Even so, almost everybody who uses those systems uses some SQL, at least for initial data extraction. Those systems are also plenty good enough at SQL for joining data to reference tables, and all that other SQL stuff you’d never want to give up. (A small example follows this list.)
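
For instance, here is a hedged PySpark sketch of that bread-and-butter SQL: joining event data to a small reference table and aggregating. The data and names are invented; only standard Spark SQL calls are used.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("reference-join").getOrCreate()

# Invented sample data standing in for extracted events and a reference table.
events = spark.createDataFrame(
    [("u1", "US", 3), ("u2", "DE", 5), ("u3", "US", 1)],
    ["user_id", "country_code", "clicks"],
)
countries = spark.createDataFrame(
    [("US", "United States"), ("DE", "Germany")],
    ["country_code", "country_name"],
)

events.createOrReplaceTempView("events")
countries.createOrReplaceTempView("countries")

# Plain SQL for the join-and-aggregate step.
spark.sql("""
    SELECT c.country_name, SUM(e.clicks) AS clicks
    FROM events e
    JOIN countries c ON e.country_code = c.country_code
    GROUP BY c.country_name
""").show()
```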

But suppose you just want to do business intelligence, which is still almost always done over relational data structures? Analytic RDBMS offer the trade-offs:

  • They generally still provide the best performance or performance/concurrency combination, for the cost, although YMMV (Your Mileage May Vary).
  • One has to load the data in and immediately structure it relationally, which can be an annoying contrast to Hadoop alternatives (where database administration can be just-in-time) or to OLTP integration (less or no re-loading).
  • Other integrations, as noted above, can also be weak.

Suppose all that is a good match for your situation. Then you should surely continue using an analytic RDBMS, if you already have one, and perhaps even acquire one if you don’t. But for many other use cases, analytic RDBMS are no longer the best way to go.

Finally, how does the cloud affect all this? Mainly, it brings one more analytic RDBMS competitor into the mix, namely Amazon Redshift. Redshift is a simple system for doing analytic SQL over data that was in or headed to the Amazon cloud anyway. It seems to be quite successful.

Bottom line: Analytic RDBMS are no longer in their youthful prime, but they are healthy contributors in middle age. Mainly, they’re still best-of-breed for supporting demanding BI.


Introduction to data Artisans and Flink

Sun, 2016-08-21 16:15

data Artisans and Flink basics start:

  • Flink is an Apache project sponsored by the Berlin-based company data Artisans.
  • Flink has been viewed in a few different ways, all of which are similar to how Spark is seen. In particular, per co-founder Kostas Tzoumas:
    • Flink’s original goal was “Hadoop done right”.
    • Now Flink is focused on streaming analytics, as an alternative to Spark Streaming, Samza, et al.
  • Kostas seems to see Flink as a batch-plus-streaming engine that’s streaming-first.

Like many open source projects, Flink seems to have been partly inspired by a Google paper.

To this point, data Artisans and Flink have less maturity and traction than Databricks and Spark. For example: 

  • The first line of Flink code dates back to 2010.
  • data Artisans and the Flink open source project both started in 2014.
  • When I met him in late June, Kostas told me that data Artisans had raised $7 million and had 15 employees.
  • Flink’s current SQL support is very minor.

Per Kostas, about half of Flink committers are at data Artisans; others are at Cloudera, Hortonworks, Confluent, Intel, at least one production user, and some universities. Kostas provided about 5 examples of production Flink users, plus a couple of very big names that were sort-of-users (one was using a forked version of Flink, while another is becoming a user “soon”).

The technical story at data Artisans/Flink revolves around the assertion “We have the right architecture for streaming.” If I understood data Artisans co-founder Stephan Ewen correctly on a later call, the two key principles in support of that seem to be:

  • The key is to keep data “transport” running smoothly without interruptions, delays or bottlenecks, where the relevant sense of “transport” is movement from one operator/operation to the next.
  • In this case, the Flink folks feel that modularity supports efficiency.

In particular:

  • Anything that relates to consistency/recovery is kept almost entirely separate from basic processing, with minimal overhead and nothing that resembles a lock.
  • Windowing and so on operate separately from basic “transport” as well.
  • The core idea is that special markers — currently in the ~20 byte range in size — are injected into the streams. When the marker gets to an operator, the operator snapshots the then-current state of its part of the stream.
  • Should recovery ever be needed, consistency is achieved by assembling all the snapshots corresponding to a single marker, and replaying any processing that happened after those snapshots were taken.
    • Actually, this is oversimplified, in that it assumes there’s only a single input stream.
    • A lot of Flink’s cleverness, I gather, is involved in assembling a consistent snapshot despite the realities of multiple input streams. (A toy sketch of the single-stream case follows this list.)
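
Here is a deliberately tiny, non-Flink Python toy of the single-stream case, just to make the marker idea concrete: a lightweight barrier object flows through the stream with the data, and each operator snapshots its state when the barrier reaches it. Real Flink adds multi-input barrier alignment, durable state backends and much more.

```python
# Toy illustration of barrier-based snapshotting in a single operator chain.
BARRIER = object()   # stands in for the small marker injected into the stream

class CountingOperator:
    """Keeps a running count per key; snapshots its state when a barrier arrives."""
    def __init__(self, name):
        self.name = name
        self.state = {}
        self.snapshots = []                  # (checkpoint_id, copy of state)

    def process(self, element, checkpoint_id):
        if element is BARRIER:
            self.snapshots.append((checkpoint_id, dict(self.state)))
        else:
            self.state[element] = self.state.get(element, 0) + 1
        return element                       # forward records and barriers downstream

def run(stream, operators):
    checkpoint_id = 0
    for element in stream:
        if element is BARRIER:
            checkpoint_id += 1               # the source assigns checkpoint ids
        for op in operators:
            element = op.process(element, checkpoint_id)

ops = [CountingOperator("op1"), CountingOperator("op2")]
run(["a", "b", BARRIER, "a", BARRIER, "c"], ops)
# Recovery would assemble, per checkpoint id, each operator's matching snapshot.
print(ops[0].snapshots)   # [(1, {'a': 1, 'b': 1}), (2, {'a': 2, 'b': 1})]
```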

The upshot, Flink partisans believe, is to match the high throughput of Spark Streaming while also matching the low latency of Storm.

The Flink folks naturally have a rich set of opinions about streaming. Besides the points already noted, these include:

  • “Exactly once” semantics are best in almost all use cases, as opposed to “at least once”, or to turning off fault tolerance altogether. (Exceptions might arise in extreme performance scenarios, or because of legacy systems’ expectations.)
  • Repetitive, scheduled batch jobs are often “streaming processes in disguise”. Besides any latency benefits, reimplementing them using streaming technology might simplify certain issues that can occur around the boundaries of batch windows. (The phrase “continuous processing” could reasonably be used here.)

We discussed joins quite a bit, but this was before I realized that Flink didn’t have much SQL support. Let’s just say they sounded rather primitive even when I assumed they were done via SQL.

Our discussion of windowing was more upbeat. Flink supports windows based either on timestamps or data arrival time, and these can be combined as needed. Stephan thinks this flexibility is important.

As for Flink use cases, they’re about what you’d expect:

  • Plenty of data transformation, because that’s how all these systems start out. Indeed, the earliest Flink adoption was for batch transformation.
  • Plenty of stream processing.

But Flink doesn’t have all the capabilities one would want for the kinds of investigative analytics commonly done on Spark.


More about Databricks and Spark

Sun, 2016-08-21 15:36

Databricks CEO Ali Ghodsi checked in because he disagreed with part of my recent post about Databricks. Ali’s take on Databricks’ position in the Spark world includes:

  • What I called Databricks’ “secondary business” of “licensing stuff to Spark distributors” was really about second/third tier support. Fair enough. But distributors of stacks including Spark, for whatever combination of on-premise and cloud as the case may be, may in many cases be viewed as competitors to Databricks’ cloud-only service. So why should Databricks help them?
  • Databricks’ investment in Spark Summit and similar evangelism is larger than I realized.
  • Ali suggests that the fraction of Databricks’ engineering devoted to open source Spark is greater than I understood during my recent visit.

Ali also walked me through customer use cases and adoption in wonderful detail. In general:

  • A large majority of Databricks customers have machine learning use cases.
  • Predicting and preventing user/customer churn is a huge issue across multiple market sectors.

The story on those sectors, per Ali, is: 

  • First, Databricks penetrated ad-tech, for use cases such as ad selection.
  • Databricks’ second market was “mass media”.
    • Disclosed examples include Viacom and NBC/Universal.
    • There are “many” specific use cases. Personalization is a big one.
    • Conviva-style video operations optimization is a use case for several customers, naturally including Conviva. (Reminder: Conviva was Ion Stoica’s previous company.)
  • Health care came third.
    • Use cases here seem to be concentrated on a variety of approaches to predict patient outcomes.
    • Analytic techniques often combine machine learning with traditional statistics.
    • Security is a major requirement in this sector; fortunately, Databricks believes it excels at that.
  • Next came what he calls “industrial IT”. This group includes cool examples such as:
    • Finding oil.
    • Predictive maintenance of wind turbines.
    • Predicting weather based on sensor data.
  • Finally (for now), there’s financial services. Of course, “financial services” comprises a variety of quite different business segments. Example use cases include:
    • Credit card marketing.
    • Investment analysis (based on expensive third-party data sets that are already in the cloud).
    • Anti-fraud.

At an unspecified place in the timeline is national security, for a use case very similar to anti-fraud — identifying communities of bad people. Graph analytics plays a big role here.

And finally, of course we discussed some technical stuff, spanning philosophy, futures and usage as the case may be. In particular, Ali stressed that Spark 2.0 is the first release that “breaks”/changes the APIs; hence the release number. It is now the case that:

  • There’s a single API for batch and streaming alike, and for machine learning “too”. This is DataFrames/DataSets. In this API …
  • … everything is a table. That said:
    • Tables can be nested.
    • Tables can be infinitely large, in which case you’re doing streaming.
  • Based on this, Ali thinks Spark 2.0 is now really a streaming engine.
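
A hedged PySpark sketch of the everything-is-a-table idea: the same aggregation logic is applied both to a finite table read from files and to the conceptually infinite table produced by reading the same kind of files as a stream. The path, schema and field names are invented.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.appName("tables-everywhere").getOrCreate()
schema = StructType([StructField("user", StringType()), StructField("page", StringType())])

def page_counts(df):
    """Identical logic whether the input table is finite or unbounded."""
    return df.groupBy("page").count()

# Batch: a finite table.
batch = spark.read.schema(schema).json("/data/events/")
page_counts(batch).show()

# Streaming: conceptually the same table, just unbounded (note df.isStreaming).
stream = spark.readStream.schema(schema).json("/data/events/")
query = (page_counts(stream)
         .writeStream.outputMode("complete")
         .format("console")
         .start())
```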

Other tidbits included:

  • Ali said that every Databricks customer uses SQL. No exceptions.
    • Indeed, a “number” of customers are using business intelligence tools. Therefore …
    • … Databricks is licensing connector technology from Simba.
  • They’re working on model serving, with a REST API, rather than just model building. This was demoed at the recent Spark Summit, but is still in the “nascent” stage.
  • Ali insists that every streaming system with good performance does some kind of micro-batching under the hood. But the Spark programmers no longer need to take that directly into account. (In earlier versions, programmatic window sizes needed to be integer multiples of the low-level system’s chosen interval.)
  • In the future, when Databricks runs on more than just the Amazon cloud, Databricks customers will of course have cloud-to-cloud portability.

Notes on DataStax and Cassandra

Sun, 2016-08-07 20:19

I visited DataStax on my recent trip. That was a tipping point leading to my recent discussions of NoSQL DBAs and misplaced fear of vendor lock-in. But of course I also learned some things about DataStax and Cassandra themselves.

On the customer side:

  • DataStax customers still overwhelmingly use Cassandra for internet back-ends — web, mobile or otherwise as the case might be.
  • This includes — and “includes” might be understating the point — traditional enterprises worried about competition from internet-only ventures.

Customers in large numbers want cloud capabilities, as a potential future if not a current need.

One customer example was a large retailer, who in the past was awful at providing accurate inventory information online, but now uses Cassandra for that. DataStax brags that its queries come back in 20 milliseconds, but that strikes me as a bit beside the point; what really matters is that data accuracy has gone from “batch” to some version of real-time. Also, Microsoft is a DataStax customer, using Cassandra (and Spark) for the Office 365 backend, or at least for the associated analytics.

Per Patrick McFadin, the four biggest things in DataStax Enterprise 5 are:

  • Graph capabilities.
  • Cassandra 3.0, which includes a complete storage engine rewrite.
  • Tiered storage/ILM (Information Lifecycle Management).
  • Policy-based replication.

Some of that terminology is mine, but perhaps my clients at DataStax will adopt it too. :)

We didn’t go into as much technical detail as I ordinarily might, but a few notes on that tiered storage/ILM bit are:

  • It’s a way to have some storage that’s more expensive (e.g. flash) and some that’s cheaper (e.g. spinning disk). Duh.
  • Since Cassandra has a strong time-series orientation, it’s easy to imagine how those policies might be specified.
  • Technologically, this is tightly integrated with Cassandra’s compaction strategy.
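
To illustrate that time-series orientation, and where a compaction strategy hooks in, here is a hedged sketch using the DataStax Python driver. The keyspace, table, TTL and compaction choices are invented, and the exact options available vary by Cassandra/DSE version.

```python
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("sensors")   # hypothetical keyspace

# A typical Cassandra time-series table: one partition per sensor per day,
# with rows clustered by timestamp so recent data sits together on disk.
session.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        sensor_id text,
        day       date,
        ts        timestamp,
        value     double,
        PRIMARY KEY ((sensor_id, day), ts)
    ) WITH CLUSTERING ORDER BY (ts DESC)
      AND default_time_to_live = 2592000
      AND compaction = {'class': 'DateTieredCompactionStrategy'}
""")
```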

DataStax Enterprise 5 also introduced policy-based replication features, not all of which are in open source Cassandra. Data sovereignty/geo-compliance is improved, which is of particular importance in financial services. There’s also hub/spoke replication now, which seems to be of particular value in intermittently-connected use cases. DataStax said the motivating use case in that area was oilfield operations, where presumably there are Cassandra-capable servers at all ends of the wide-area network.

Related links

  • I wrote in detail about Cassandra architecture in December, 2013.
  • I wrote about intermittently-connected data management via the relational gold standard SQL Anywhere in July, 2010.

Notes on Spark and Databricks — technology

Sun, 2016-07-31 09:30

During my recent visit to Databricks, I of course talked a lot about technology — largely with Reynold Xin, but a bit with Ion Stoica as well. Spark 2.0 is just coming out now, and of course has a lot of enhancements. At a high level:

  • Using the new terminology, Spark originally assumed users had data engineering skills, but Spark 2.0 is designed to be friendly to data scientists.
  • A lot of this is via a focus on simplified APIs, based on DataFrames/Datasets. In particular:
    • Unlike similarly named APIs in R and Python, Spark DataFrames work with nested data. (See the sketch after this list.)
    • Machine learning and Spark Streaming both work with Spark DataFrames.
  • There are lots of performance improvements as well, some substantial. Spark is still young enough that Bottleneck Whack-A-Mole yields huge benefits, especially in the SparkSQL area.
  • SQL coverage is of course improved. For example, SparkSQL can now perform all TPC-DS queries.
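
Picking up the nested-data point above, here is a hedged PySpark sketch: a DataFrame whose rows contain structs and arrays, with nested fields addressed by dotted paths. The data is invented.

```python
from pyspark.sql import SparkSession, Row

spark = SparkSession.builder.appName("nested-demo").getOrCreate()

# Rows containing nested structures and arrays (invented data).
df = spark.createDataFrame([
    Row(user="u1", address=Row(city="Boston", zip="02134"), tags=["bi", "ml"]),
    Row(user="u2", address=Row(city="Berlin", zip="10115"), tags=["etl"]),
])

# Nested fields are addressable with dotted paths, unlike flat R/pandas frames.
df.select("user", "address.city").show()
df.printSchema()   # shows address as a struct and tags as an array
```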

The majority of Databricks’ development efforts, however, are specific to its cloud service, rather than being donated to Apache for the Spark project. Some of the details are NDA, but it seems fair to mention at least:

  • Databricks’ notebooks feature for organizing and launching machine learning processes and so on is a biggie. Jupyter is an open source analog.
  • Databricks has been working on security, and even on the associated certifications.

Two of the technical initiatives Reynold told me about seemed particularly cool. One, on the machine learning side, was a focus on training models online as new data streams in. In most cases this seems to require new algorithms for old model types, with a core idea being that the algorithm does a mini gradient descent for each new data point.
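
A minimal sketch of that idea, assuming logistic regression trained by plain stochastic gradient descent: each newly arriving labeled example triggers one small gradient step, so the model keeps adapting without a separate batch retraining job. The learning rate, features and data are invented.

```python
import math

weights = [0.0, 0.0]          # model state, updated continuously
LEARNING_RATE = 0.05

def predict(x):
    z = sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

def update(x, y):
    """One mini gradient-descent step for a single new (features, label) pair."""
    error = y - predict(x)
    for i, xi in enumerate(x):
        weights[i] += LEARNING_RATE * error * xi

# As labeled examples stream in, the model is nudged one point at a time.
for x, y in [([1.0, 0.2], 1), ([0.1, 1.5], 0), ([0.9, 0.3], 1)]:
    update(x, y)
print(weights)
```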

The other cool idea fits the trend of alternatives to the “lambda architecture”. Under the name “structured streaming”, which seems to be a replacement for “DStreams”, the idea is to do set-based SQL processing even though membership of the set changes over time. Result sets are extracted on a snapshot basis; you can keep either all the results from each snapshot query or just the deltas.
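
A tiny, non-Spark Python toy to make the snapshot-versus-deltas distinction concrete: a running aggregate is maintained as micro-batches of rows arrive, and after each batch either the full current result set or only the changed rows is emitted. Everything here is invented for illustration.

```python
from collections import Counter

running = Counter()              # the continuously maintained result table

def process_batch(rows, emit_deltas=False):
    """Fold one micro-batch into the running aggregate and emit results."""
    before = dict(running)
    running.update(rows)
    if emit_deltas:
        # Only the rows whose values changed in this batch.
        return {k: running[k] for k in running if running[k] != before.get(k)}
    return dict(running)         # otherwise, a full snapshot of current results

print(process_batch(["a", "b", "a"]))               # {'a': 2, 'b': 1}
print(process_batch(["b", "c"], emit_deltas=True))  # {'b': 2, 'c': 1}
```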

Despite all this, there’s some non-trivial dissatisfaction with Spark, fair or otherwise.

  • Some of the reason is that SparkSQL is too immature to be great.
  • Some is annoyance that Databricks isn’t putting everything it has into open source.
  • Some is that everything has its architectural trade-offs.

To the last point, I raised one of the biggest specifics with Reynold, namely Spark’s lack of a strong built-in data persistence capability. Reynold’s answer was that they’re always working to speed up reading and writing from other forms of persistent storage. E.g., he cited a figure of ~100 million rows/core/second decoded from Parquet.


Notes on Spark and Databricks — generalities

Sun, 2016-07-31 09:29

I visited Databricks in early July to chat with Ion Stoica and Reynold Xin. Spark also comes up in a large fraction of the conversations I have. So let’s do some catch-up on Databricks and Spark. In a nutshell:

  • Spark is indeed the replacement for Hadoop MapReduce.
  • Spark is becoming the default platform for machine learning.
  • SparkSQL (née Shark) is puttering along predictably.
  • Databricks reports good success in its core business of cloud-based machine learning support.
  • Spark Streaming has strong adoption, but its position is at risk.
  • Databricks, the original authority on Spark, is not keeping a tight grip on that role.

I shall explain below. I also am posting separately about Spark evolution, especially Spark 2.0. I’ll also talk a bit in that post about Databricks’ proprietary/closed-source technology.

Spark is the replacement for Hadoop MapReduce.

This point is so obvious that I don’t know what to say in its support. The trend is happening, as originally decreed by Cloudera (and me), among others. People are rightly fed up with the limitations of MapReduce, and — niches perhaps aside — there are no serious alternatives other than Spark.

The greatest use for Spark seems to be the same as the canonical first use for MapReduce: data transformation. Also in line with the Spark/MapReduce analogy: 

  • Data-transformation-only use cases are important, but they don’t dominate.
  • Most other use cases typically have a data transformation element as well …
  • … which has to be started before any other work can be done.

And so it seems likely that, at least for as long as Spark is growing rapidly, data transformation will appear to be the biggest Spark use case.

Spark is becoming the default platform for machine learning.

Largely, this is a corollary of:

  • The previous point.
  • The fact that Spark was originally designed with machine learning as its principal use case.

To do machine learning you need two things in your software:

  • A collection of algorithms. Spark, I gather, is one of numerous good alternatives there.
  • Support for machine learning workflows. That’s where Spark evidently stands alone.

And thus I have conversations like:

  • “Are you doing anything with Spark?”
  • “We’ve gotten more serious about machine learning, so yes.”

SparkSQL (née Shark) is puttering along.

SparkSQL is pretty much following the Hive trajectory.

  • Useful from Day One as an adjunct to other kinds of processing.
  • A tease and occasionally useful as a SQL engine for its own sake, but really not very good, pending years of further maturation.

Databricks reports good success in its core business of cloud-based machine learning support.

Databricks, to an even greater extent than I previously realized, is focused on its cloud business, for which there are well over 200 paying customers. Notes on that include:

  • As you might expect based on my comments above, the majority of usage is for data transformation, but a lot of that is in anticipation of doing machine learning/predictive modeling in the near future.
  • Databricks customers typically already have their data in the Amazon cloud.
  • Naturally, a lot of Databricks customers are internet companies — ad tech startups and the like. Databricks also reports “strong” traction in the segments:
    • Media
    • Financial services (especially but not only insurance)
    • Health care/pharma
  • The main languages Databricks customers use are R and Python. Ion said that Python was used more on the West Coast, while R was used more in the East.

Databricks’ core marketing concept seems to be “just-in-time data platform”. I don’t know why they picked that, as opposed to something that emphasizes Spark’s flexibility and functionality.

Spark Streaming’s long-term success is not assured.

To a first approximation, things look good for Spark Streaming.

  • Spark Streaming is definitely the leading companion to Kafka, and perhaps also to cloud equivalents (e.g. Amazon Kinesis).
  • The “traditional” alternatives of Storm and Samza are pretty much done.
  • Newer alternatives from Twitter, Confluent and Flink aren’t yet established.
  • Cloudera is a big fan of Spark Streaming.
  • Even if Spark Streaming were to generally decline, it might keep substantial “good enough” usage, analogously to Hive and SparkSQL.
  • Cool new Spark Streaming technology is coming out.

But I’m also hearing rumbles and grumbles about Spark Streaming. What’s more, we know that Spark Streaming wasn’t a core part of Spark’s design; the use case just happened to emerge. Demanding streaming use cases typically involve a lot of short-request inserts (or updates/upserts/whatever). And if you were designing a system to handle those … would it really be based on Spark?

Databricks is not keeping a tight grip on Spark leadership.

For starters:

  • Databricks’ main business, as noted above, is its cloud service. That seems to be going well.
  • Databricks’ secondary business is licensing stuff to Spark distributors. That doesn’t seem to amount to much; it’s too easy to go straight to the Apache distribution and bypass Databricks. No worries; this never seemed like it would be a big revenue opportunity for Databricks.

At the moment, Databricks is pretty clearly the general leader of Spark. Indeed:

  • If you want the story on where Spark is going, you do what I did — you ask Databricks.
  • Similarly, if you’re thinking of pushing the boundaries on Spark use, and you have access to the Databricks folks, that’s who you’ll probably talk to.
  • Databricks employs ~1/3 of Spark committers.
  • Databricks organizes the Spark Summit.

But overall, Databricks doesn’t seem to care much about keeping Spark leadership. Its marketing efforts in that respect are minimal. Word-of-mouth buzz paints a similar picture. My direct relationship with the company gives the same impression. Oh, I’m sure Databricks would like to remain the Spark leader. But it doesn’t seem to devote much energy toward keeping the role.

Related links

Starting with my introduction to Spark, previous overview posts include those in:


Terminology: Data scientists vs. data engineers

Sun, 2016-07-31 09:10

I learned some newish terms on my recent trip. They’re meant to solve the problem that “data scientists” used to be folks with remarkably broad skill sets, few of whom actually existed in ideal form. So instead now it is increasingly said that:

  • “Data engineers” can code, run clusters, and so on, in support of what’s always been called “data science”. Their knowledge of the math of machine learning/predictive modeling and so on may, however, be limited.
  • “Data scientists” can write and run scripts on single nodes; anything more on the engineering side might strain them. But they have no-apologies skills in the areas of modeling/machine learning.


Notes on vendor lock-in

Tue, 2016-07-19 20:35

Vendor lock-in is an important subject. Everybody knows that. But few of us realize just how complicated the subject is, nor how riddled it is with paradoxes. Truth be told, I wasn’t fully aware either. But when I set out to write this post, I found that it just kept growing longer.

1. The most basic form of lock-in is:

  • You do application development for a target set of platform technologies.
  • Your applications can’t run without those platforms underneath.
  • Hence, you’re locked into those platforms.

2. Enterprise vendor standardization is closely associated with lock-in. The basic idea is that you have a mandate or strong bias toward having different apps run over the same platforms, because:

  • That simplifies your environment, requiring less integration and interoperability.
  • That simplifies your staffing; the same skill sets apply to multiple needs and projects.
  • That simplifies your vendor support relationships; there’s “one throat to choke”.
  • That simplifies your price negotiation.

3. That last point is double-edged; you have more power over suppliers to whom you give more business, but they also have more power over you. The upshot is often an ELA (Enterprise License Agreement), which commonly works:

  • For a fixed period of time, the enterprise may use as much of a given product set as they want, with costs fixed in advance.
  • A few years later, the price is negotiated, based on current levels of usage.

Thus, doing an additional project using ELAed products may appear low-cost.

  • Incremental license and maintenance fees may be zero in the short-term.
  • Incremental personnel costs may be controlled because the needed skills are already in-house.

Often those appearances are substantially correct. That’s a big reason why incumbent software is difficult to supplant unless the upstart substitute is superior in fundamental and important ways.

4. Subscriptions are closely associated with lock-in.

  • Most obviously, the traditional software industry gets its profits from high-margin support/maintenance services.
  • Cloud lock-in has rapidly become a big deal.
  • The open source vendors meeting lock-in resistance, noted below, have subscription business models.

Much of why customers care about lock-in is the subscription costs it’s likely to commit them to.

5. Also related to lock-in are thick single-vendor technology stacks. If you run Oracle applications, you’re going to run the Oracle DBMS too. And if you run that, you’re likely to run other Oracle software, and perhaps use Exadata hardware as well. The cloud ==> lock-in truism is an example of this point as well.

6. There’s a lot of truth to the generality that central IT cares about overall technology architecture, while line-of-business departments just want to get the job done. This causes departments to both:

  • Oppose standardization.
  • Like thick technology stacks.

Thus, departmental influence on IT both encourages and discourages lock-in.

7. IBM is all about lock-in. IBM’s support for Linux, Eclipse and so on doesn’t really contradict that. IBM’s business model is to serve (some would say squeeze) its still-large number of strongly loyal customers as well as it can.

8. Microsoft’s business model over the decades has also greatly depended on lock-in.

  • Indeed, it exploited Windows/Office lock-in so vigorously as to incur substantial anti-trust difficulties.
  • Server-side Windows tends to be involved in thick stacks — DBMS, middleware, business intelligence, SharePoint and more. Many customers (smaller enterprises or in some cases departments) are firmly locked into these stacks.
  • Microsoft is making a strong cloud push with Azure, which inherently involves lock-in.

Yet sometimes, Microsoft is more free and open.

  • Office for Macintosh allowed the Mac to be a viable Windows competitor. (And Microsoft was well-paid for that, generating comparable revenue per Mac to what it got for each Windows PC.)
  • Visual Studio is useful for writing apps to run against multiple DBMS.
  • Just recently, Microsoft SQL Server was ported to Linux.

9. SAP applications run over several different DBMS, including its own cheap MaxDB. That counteracts potential DBMS lock-in. But some of its newer apps are HANA-specific. That, of course, has the opposite effect.

10. And with that as background, we can finally get what led me to finally write this post. Multiple clients have complaints that may be paraphrased as:

  • Customers are locked into expensive traditional DBMS such as Oracle.
  • Yet they’re so afraid of lock-in now that they don’t want to pay for our vendor-supplied versions of open source database technologies; they prefer to roll their own.
  • Further confusing matters, they also are happy to use cloud technologies, including the associated database technologies (e.g. Redshift or other Amazon offerings), creating whole new stacks of lock-in.

So open source vendors of NoSQL data managers and similar technologies felt like they were the only kind of vendor suffering from fear of lock-in.

I agree with them that enterprises who feel this way are getting it wrong. Indeed:

This is the value proposition that propelled Cloudera. It’s also a strong reason to give money to whichever MongoDB, DataStax, Neo Technology et al. sponsors open source technology that you use.

General disclosure: My fingerprints have been on this industry strategy since before the term “NoSQL” was coined. It’s been an aspect of many different consulting relationships.

Some enterprises push back, logically or emotionally as the case may be, by observing that the best internet companies — e.g., Facebook — are allergic to paying for software, even open source. My refutations of that argument include:

  • Facebook has more and better engineers than you do.
  • Facebook has a lot more servers than you do, and would presumably face much higher prices than you would if you each chose to forgo the in-house alternative.
  • Facebook pays for open source software in a different way than through subscription fees — it invents and enhances it. Multiple important projects have originated at Facebook, and it contributes to many others. Are you in a position to do the same thing?

And finally — most of Facebook’s users get its service for free. (Advertisers are the ones who pay cash; all others just pay in attention to the ads.) So if getting its software for free actually does screw up its SLAs (Service Level Agreements) — well, free generally comes with poorer SLAs than paid. But if you’re in the business of serving paying customers, then you might want to have paying-customer kinds of SLAs, even on the parts of your technology — e.g. websites urging people to do business with you — that you provide for free yourself.


Notes from a long trip, July 19, 2016

Tue, 2016-07-19 20:34

For starters:

  • I spent three weeks in California on a hybrid personal/business trip. I had a bunch of meetings, but not three weeks’ worth.
  • The timing was awkward for most companies I wanted to see. No blame accrues to those who didn’t make themselves available.
  • I came back with a nasty cough. Follow-up phone calls aren’t an option until next week.
  • I’m impatient to start writing. Hence tonight’s posts. But it’s difficult for a man and his cough to be productive at the same time.

A running list of recent posts is:

  • As a companion to this post, I’m publishing a very long one on vendor lock-in.

Subjects I’d like to add to that list include:

  • Spark (it’s prospering).
  • Databricks (ditto, appearances to the contrary notwithstanding).
  • Flink (it’s interesting as the streaming technology it’s now positioned to be, rather than the overall Spark alternative it used to be positioned as but which the world didn’t need).
  • DataStax, MemSQL, Zoomdata, and Neo Technology (also prospering).
  • Cloudera (multiple topics, as usual).
  • Analytic SQL engines (“traditional” analytic RDBMS aren’t doing well).
  • Enterprises’ inconsistent views about vendor lock-in.
  • Microsoft’s reinvention (it feels real).
  • Metadata (it’s ever more of a thing).
  • Machine learning (it’s going to be a big portion of my research going forward).
  • Transitions to the cloud — this subject affects almost everything else.

I’ll edit these lists as appropriate when further posts go up.

Let’s cover some other subjects right here.

1. While Kafka is widely agreed to be the universal delivery mechanism for streams, the landscape for companion technologies is confused.

  • Back in January I wrote that the leaders were mainly Spark Streaming, followed by Storm.
  • I overlooked the fact that Storm creator Twitter was replacing Storm with something called Heron.*
  • If there’s any buzz about Confluent’s replacement for distant-third-place contender Samza, I missed it.
  • Opinions about Spark Streaming are mixed. Some folks want to get away from it; others like it just fine.

And of course Flink is hoping to blow everybody else in the space away.

*But that kind of thing is not necessarily a death knell. Cassandra inventor Facebook soon replaced Cassandra with HBase, yet Cassandra is doing just fine.

As for the “lambda architecture” — that has always felt like a kludge, and various outfits are trying to obsolete it in various ways. As just one example, Cloudera described that to me during my visit as one of the main points of Kudu.

2. The idea that NoSQL does away with DBAs (DataBase Administrators) is common. It also turns out to be wrong. DBAs basically do two things.

  • Handle the database design part of application development. In NoSQL environments, this part of the job is indeed largely refactored away. More precisely, it is integrated into the general app developer/architect role.
  • Manage production databases. This part of the DBA job is, if anything, a bigger deal in the NoSQL world than in more mature and automated relational environments. It’s likely to be called part of “devops” rather than “DBA”, but by whatever name it’s very much a thing.

I had a moment of clarity on this point while visiting my clients at DataStax, and discussing their goal — shared by numerous companies — of being properly appreciated for the management tools they provide. In the room with me were CEO Billy Bosworth and chief evangelist Patrick McFadin — both of whom are former DBAs themselves.

3. I visited ClearStory, and Sharmila Mulligan showed me her actual sales database, as well as telling me some things about funding. The details are all confidential, but ClearStory is clearly doing better than rumor might suggest.

4. Platfora insisted on meeting circumstances in which it was inconvenient for me to take notes. So I have no details to share. But they sounded happy.

5. Pneubotics — with a cool new video on its home page — has found its first excellent product/market fit. Traditional heavy metallic robots are great at painting and related tasks when they can remain stationary, or move on rigid metal rails. Neither of those options works well, however, for large curved or irregular surfaces as might be found in the aerospace industry. Customer success for the leading soft robot company has ensued.

This all seems pretty close to the inspection/maintenance/repair area that I previously suggested could be a good soft robotics fit.


Challenges in anomaly management

Sun, 2016-06-05 12:35

As I observed yet again last week, much of analytics is concerned with anomaly detection, analysis and response. I don’t think anybody understands the full consequences of that fact,* but let’s start with some basics.

*me included

An anomaly, for our purposes, is a data point or more likely a data aggregate that is notably different from the trend or norm. If I may oversimplify, there are three kinds of anomalies:

  • Important signals. Something is going on, and it matters. Somebody — or perhaps just an automated system — needs to know about it. Time may be of the essence.
  • Unimportant signals. Something is going on, but so what?
  • Pure noise. Even a fair coin flip can have long streaks of coming up “heads”.
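
The coin-flip point is easy to check for yourself; the short simulation below shows that long streaks of heads routinely show up in pure noise.

```python
import random
from itertools import groupby

flips = [random.choice("HT") for _ in range(1000)]          # a fair coin, 1,000 flips
longest = max(len(list(run)) for _, run in groupby(flips))  # longest run of H or T
print("Longest streak in 1,000 fair flips:", longest)       # typically around 10
```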

Two major considerations are:

  • Whether the recipient of a signal can do something valuable with the information.
  • How “costly” it is for the recipient to receive an unimportant signal or other false positive.

What I mean by the latter point is:

  • Something that sets a cell phone buzzing had better be important, to the phone’s owner personally.
  • But it may be OK if something unimportant changes one small part of a busy screen display.

Anyhow, the Holy Grail* of anomaly management is a system that sends the right alerts to the right people, and never sends them wrong ones. And the quest seems about as hard as that for the Holy Grail, although this one uses more venture capital and fewer horses.

*The Holy Grail, in legend, was found by 1-3 knights: Sir Galahad (in most stories), Sir Percival (in many), and Sir Bors (in some). Leading vendors right now are perhaps around the level of Sir Kay.

Difficulties in anomaly management technology include:

  • Performance is a major challenge. Ideally, you’re running statistical tests on all data — at least on all fresh data — at all times. (A toy example of such a test appears a little further below.)
  • User experiences are held to high standards.
    • False negatives are very bad.
    • False positives can be very annoying.
    • Robust role-based alert selection is often needed.
    • So are robust visualization and drilldown.
  • Data quality problems can look like anomalies. In some cases, bad data screws up anomaly detection, by causing false positives. In others, it’s just another kind of anomaly to detect.
  • Anomalies are inherently surprising. We don’t know in advance what they’ll be.

Consequences of the last point include:

  • It’s hard to tune performance when one doesn’t know exactly how the system will be used.
  • It’s hard to set up role-based alerting if one doesn’t know exactly what kinds of alerts there will be.
  • It’s hard to choose models for the machine learning part of the system.
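
As promised above, here is a toy Python version of running a statistical test on fresh data: each new reading is compared to the mean and standard deviation of a trailing window and flagged if it sits several standard deviations out. The window size, threshold and data are invented, and real products are of course far more sophisticated.

```python
from collections import deque
import statistics

WINDOW, THRESHOLD = 30, 3.0
history = deque(maxlen=WINDOW)

def check(value):
    """Flag a value more than THRESHOLD standard deviations from the recent mean."""
    anomalous = False
    if len(history) >= 10:                        # wait for a little history first
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1e-9
        anomalous = abs(value - mean) / stdev > THRESHOLD
    history.append(value)
    return anomalous

readings = [10.1, 10.3, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.2, 55.0, 10.1]
print([v for v in readings if check(v)])          # [55.0]
```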

Donald Rumsfeld’s distinction between “known unknowns” and “unknown unknowns” is relevant here, although it feels wrong to mention Rumsfeld and Sir Galahad in the same post.

And so a reasonable summary of my views might be:

Anomaly management is an important and difficult problem. So far, vendors have done a questionable job of solving it.

But there’s a lot of activity, which I look forward to writing about in considerable detail.

Related link

  • The most directly relevant companies I’ve written about are probably Rocana and Splunk.

Adversarial analytics and other topics

Mon, 2016-05-30 05:15

Five years ago, in a taxonomy of analytic business benefits, I wrote:

A large fraction of all analytic efforts ultimately serve one or more of three purposes:

  • Marketing
  • Problem and anomaly detection and diagnosis
  • Planning and optimization

That continues to be true today. Now let’s add a bit of spin.

1. A large fraction of analytics is adversarial. In particular:

  • Many of the analytics companies I talk with tell me that they have important use cases in security, anti-fraud or both.
  • Click fraud steals a large fraction of the revenue in online advertising and other promotion. Combating it is a major application need.
  • Spam is another huge, ongoing fight.
    • When Google et al. fight web spammers — which is of course a great part of what web search engine developers do — they’re engaged in adversarial information retrieval.
    • Blog comment spam is still a problem, even though the vast majority of instances can now be caught.
    • Ditto for email.
  • There’s an adversarial aspect to algorithmic trading. You’re trying to beat other investors. What’s more, they’re trying to identify your trading activity, so you’re trying to obscure it. Etc.
  • Unfortunately, unfree countries can deploy analytics to identify attempts to evade censorship. I plan to post much more on that point soon.
  • Similarly, de-anonymization can be adversarial.
  • Analytics supporting national security often have an adversarial aspect.
  • Banks deploy analytics to combat money-laundering.

Adversarial analytics are inherently difficult, because your adversary actively wants you to get the wrong answer. Approaches to overcome the difficulties include:

  • Deploying lots of data. Email spam was only defeated by large providers who processed lots of email and hence could see when substantially the same email was sent to many victims at once. (By the way, that’s why “spear-phishing” still works. Malicious email sent to only one or a few victims still can’t be stopped.)
  • Using unusual analytic approaches. For example, graph analytics are used heavily in adversarial situations, even though they have lighter adoption otherwise.
  • Using many analytic tests. For example, Google famously has 100s (at least) of sub-algorithms contributing to its search rankings. The idea here is that even the cleverest adversary might find it hard to perfectly simulate innocent behavior.
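
As a toy version of the many-tests idea, the sketch below combines several weak, individually unconvincing heuristics into one suspicion score, so that an adversary has to fool all of them at once. The heuristics and weights are invented, and vastly simpler than anything a real anti-spam or anti-fraud system would use.

```python
# Each test returns a small suspicion score; none is decisive on its own.
TESTS = [
    ("shouting",   lambda msg: 0.3 if msg.isupper() else 0.0),
    ("link_heavy", lambda msg: 0.4 if msg.count("http") >= 2 else 0.0),
    ("spam_words", lambda msg: 0.5 if "free money" in msg.lower() else 0.0),
    ("very_short", lambda msg: 0.2 if len(msg) < 15 else 0.0),
]

def suspicion(msg):
    """Sum many weak signals; only messages that trip several tests score high."""
    return sum(test(msg) for _, test in TESTS)

for m in ["Lunch tomorrow?", "FREE MONEY http://x http://y", "Quarterly report attached."]:
    print(m, "->", round(suspicion(m), 2))
```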

2. I was long a skeptic of “real-time” analytics, although I always made exceptions for a few use cases. (Indeed, I actually used a form of real-time business intelligence when I entered the private sector in 1981, namely stock quote machines.) Recently, however, the stuff has gotten more-or-less real. And so, in a post focused on data models, I highlighted some use cases, including:

  • It is increasingly common for predictive decisions to be made at [real-timeish] speeds. (That’s what recommenders and personalizers do.) Ideally, such decisions can be based on fresh and historical data alike.
  • The long-standing desire for business intelligence to operate on super-fresh data is, increasingly, making sense, as we get ever more stuff to monitor. However …
  • … most such analysis should look at historical data as well.
  • Streaming technology is supplying ever more fresh data.

Let’s now tie those comments into the analytic use case trichotomy above. From the standpoint of mainstream (or early-life/future-mainstream) analytic technologies, I think much of the low-latency action is in two areas:

  • Recommenders/personalizers.
  • Monitoring and troubleshooting networked equipment. This is generally an exercise in anomaly detection and interpretation.

Beyond that:

  • At sufficiently large online companies, there’s a role for low-latency marketing decision support.
  • Low-latency marketing-oriented BI can also help highlight system malfunctions.
  • Investments/trading has a huge low-latency aspect, but that’s somewhat apart from the analytic mainstream. (And it doesn’t fit well into my trichotomy anyway.)
  • Also not in the analytic mainstream are the use cases for low-latency (re)planning and optimization.

Related links

My April, 2015 post Which analytic technology problems are important to solve for whom? has a round-up of possibly relevant links.


Surveillance data in ordinary law enforcement

Wed, 2016-05-18 22:45

One of the most important issues in privacy and surveillance is also one of the least-discussed — the use of new surveillance technologies in ordinary law enforcement. Reasons for this neglect surely include:

  • Governments, including in the US, lie about this subject a lot. Indeed, most of the reporting we do have is exposure of the lies.
  • There’s no obvious technology industry ox being gored. What I wrote in another post about Apple, Microsoft et al. upholding their customers’ rights doesn’t have a close analogue here.

One major thread in the United States is:

  • The NSA (National Security Agency) collects information on US citizens. It turns a bunch of this over to the “Special Operations Division” (SOD) of the Drug Enforcement Administration (DEA).
  • The SOD has also long collected its own clandestine intelligence.
  • The SOD turns over information to the DEA, FBI (Federal Bureau of Investigation), IRS (Internal Revenue Service) and perhaps also other law enforcement agencies.
  • The SOD mandates that the recipient agencies lie about the source of the information, even in trials and court filings. This is called “parallel construction”, in that the nature of the lie is to create another supposed source for the original information, which has the dual virtues of:
    • Making it look like the information was obtained by allowable means.
    • Protecting confidentiality of the information’s true source.
  • There is a new initiative to allow the NSA to share more surveillance information on US citizens with other agencies openly, thus reducing the “need” to lie, and hopefully gaining efficiency/effectiveness in information-sharing as well.

Similarly, StingRay devices that intercept cell phone calls (and thus potentially degrade service) are used by local police departments, who then engage in “parallel construction” for several reasons, one simply being an NDA with manufacturer Harris Corporation.

Links about these and other surveillance practices are below.

At this point we should note the distinction between intelligence/leads and admissible evidence.

  • Intelligence (or leads) is any information that can be used to point law enforcement or security forces at people who either plan to do or already have done unlawful and/or very harmful things.
  • Admissible evidence is information that can legally be used to convict people of crimes or otherwise bring down penalties and sanctions upon them.

I won’t get into the minutiae of warrants, subpoenas, probable cause and all that, but let’s just say:

  • In theory there’s a semi-bright line between intelligence and admissible evidence; i.e., there’s some blurring, but in most cases the line can be pretty easily seen.
  • In practice there’s a lot of blurring. Parallel construction is only one of the ways the semi-bright line gets scuffed over.
  • Even so, this distinction has great value. The number of people who have been badly harmed in the US by inappropriate use of inadmissible intelligence isn’t very high …
  • … yet.

“Yet” is the key word. My core message in this post is that — despite the lack of catastrophe to date — the blurring of the intelligence/evidence line needs to be greatly reversed:

Going forward, the line between intelligence and admissible evidence needs to be established and maintained in a super-bright state.

As you may recall, I’ve said that for years, in a variety of different phrasings. Still, it’s a big enough deal that I feel I should pound the table about it from time to time — especially now, when public policy in other aspects of surveillance is going pretty well, but this area is headed for disaster. My argument for this view can be summarized in two bullet points:

  • Massive surveillance is inevitable.
  • Unless the uses of the resulting information are VERY limited, freedoms will be chilled into oblivion.

I recapitulate the chilling effects argument frequently, so for the rest of this post let’s focus on the first bullet point. Massive surveillance will be a fact of life for reasons including:

  • As a practical political matter, domestic surveillance will be used at least for anti-terrorism. If you doubt that — please just consider the number of people who support Donald Trump.
  • Actually, the constituency for anti-terrorism surveillance is much more than just the paranoid idiots. Indeed — and notwithstanding the great excesses of anti-terrorism propaganda around the world — that constituency includes me. :) My reasons start:
    • In a country of well over 300 million people, there probably are a few who are both crazy and smart enough to launch Really Bad Attacks. Stopping them before they act is a Very Good Idea.
    • The alternative is security — or more likely security theater — measures that are intrusive across the board. I like unfettered freedom of movement, for example. But I can barely stand the TSA (Transportation Security Administration).
  • Commercial “surveillance” is intense. And it’s essential to the internet economy.

And so I return to the point I’ve been making for years: Surveillance WILL happen. So the use of surveillance information needs to be tightly limited.

Related links:

  • Reason’s recent rant about parallel construction contains a huge number of links. Ditto a calmer Radley Balko blog for the Washington Post. (March, 2016).
  • Reuters gave details of the SOD’s thou-shalt-lie mandates in August, 2013.
  • If you have a clearance and work in the civilian sector, you may be subject to 24/7 surveillance, aka continuous evaluation, for fear that you might be the next Ed Snowden. (March, 2016)
  • License plate scanning databases are already a big deal in law enforcement. (October, 2015)
  • StingRay-type devices are powerful, and have been for quite a few years. They’re really powerful. Procedures related to StingRay surveillance are in flux. (2015)
  • Chilling effects are real. (April, 2016)
  • At least one federal court has decided that tracking URLs visited without a warrant is an illegal wiretap. Other courts think your URL visits, shopping history, etc. are fair game. (November, 2015)
  • Pakistan in effect bugged citizens’ cell phones to track their movements and force polio vaccines on them. (November, 2015)
  • This is not totally on-topic, but it does support worries about what the government can do with surveillance-based analytics — law enforcement can wildly exaggerate the significance of its “scientific” evidence, and gain bogus convictions as a result. (2015-2016).
  • The Electronic Frontier Foundation offers a dated but fact-filled overview of NSA domestic spying (2012-2013).

Governments vs. tech companies — it’s complicated

Wed, 2016-05-18 22:42

Numerous tussles fit the template:

  • A government wants access to data contained in one or more devices (mobile/personal or server as the case may be).
  • The computer’s manufacturer or operator doesn’t want to provide it, for reasons including:
    • That’s what customers prefer.
    • That’s what other governments require.
    • Being pro-liberty is the right and moral choice. (Yes, right and wrong do sometimes actually come into play. :) )

As a general rule, what’s best for any kind of company is — pricing and so on aside — whatever is best or most pleasing for its customers or users. This would suggest that it is in tech companies’ best interest to favor privacy, but there are two important quasi-exceptions:

  • Recommendation/personalization. E-commerce and related businesses rely heavily on customer analysis and tracking.
  • When the customer is the surveiller. Governments pay well for technology that is used to watch over their citizens.

I used the “quasi-” prefix because screwing the public is risky, especially in the long term.

Something that is not even a quasi-exception to the tech industry’s actual or potential pro-privacy bias is governmental mandates to let their users be watched. In many cases, governments compel privacy violations, by threat of severe commercial or criminal penalties. Tech companies should and often do resist these mandates as vigorously as they can, in the courts and/or via lobbying as the case may be. Yes, companies have to comply with the law. However, it’s against their interests for the law to compel privacy violations, because those make their products and services less appealing.

The most visible example of all this right now is the FBI/Apple kerfuffle. To borrow a phrase — it’s complicated. Among other aspects:

  • Syed Rizwan Farook, one of the San Bernardino terrorist murderers, had 3 cell phones. He carefully destroyed his 2 personal phones before his attack, but didn’t bother with his iPhone from work.
  • Notwithstanding this clue that the surviving phone contained nothing of interest, the FBI wanted to unlock it. It needed technical help to do so.
  • The FBI got a court order commanding Apple’s help. Apple refused and appealed the order.
  • The FBI eventually hired a third party to unlock Farook’s phone, for a price that was undisclosed but >$1.3 million.
  • Nothing of interest was found on the phone.
  • Stories popped up of the FBI asking for Apple’s help unlocking numerous other iPhones. The courts backed Apple or not depending on how they interpreted the All Writs Act. The All Writs Act was passed in the first-ever session of the US Congress, in 1789, and can reasonably be assumed to reflect all the knowledge that the Founders possessed about mobile telephony.
  • It’s widely assumed that the NSA could have unlocked the phones for the FBI — but it didn’t.

Russell Brandom of The Verge collected links explaining most of the points above.

With that as illustration, let’s go to some vendor examples:

All of these cases seem consistent with my comments about vendors’ privacy interests above.

Bottom line: The technology industry is correct to resist government anti-privacy mandates by all means possible.


Privacy and surveillance require our attention

Wed, 2016-05-18 22:41

This year, privacy and surveillance issues have been all over the news. The most important, in my opinion, deal with the tension among:

  • Personal privacy.
  • Anti-terrorism.
  • General law enforcement.

More precisely, I’d say that those are the most important in Western democracies. The biggest deal worldwide may be China’s movement towards an ever-more-Orwellian surveillance state.

The main examples on my mind — each covered in a companion post — are:

Legislators’ thinking about these issues, at least in the US, seems to be confused but relatively nonpartisan. Support for these assertions includes:

I do think we are in for a spate of law- and rule-making, especially in the US. Bounds on the possible outcomes likely include:

  • Governments will retain broad powers for anti-terrorism. If there was any remaining doubt, ISIS/ISIL/Daesh-inspired threats guarantee that surveillance will be intense.
  • Little will happen in the US to clip the wings of internet personalization/recommendation. To a lesser extent, that’s probably true in other developed countries as well.
  • Non-English-speaking countries will maintain data sovereignty safeguards, both out of genuine fear of (especially) US snooping and as a pretext to support their local internet/cloud service providers.

As always, I think that the eventual success or failure of surveillance regulation will depend greatly on the extent to which it accounts for chilling effects. The gravity of surveillance’s longer-term dangers is hard to overstate, yet those dangers still seem broadly overlooked. So please allow me to reiterate what I wrote in 2013 — surveillance + analytics can lead to very chilling effects.

When government — or an organization such as your employer, your insurer, etc. — watches you closely, it can be dangerous to deviate from the norm. Even the slightest non-conformity could have serious consequences.

And that would be a horrific outcome.

So I stand by my privacy policy observations and prescriptions from the same year:

… direct controls on surveillance … are very weak; government has access to all kinds of information. … And they’re going to stay weak. … Consequently, the indirect controls on surveillance need to be very strong, for they are what stands between us and a grim authoritarian future. In particular:

  • Governmental use of private information needs to be carefully circumscribed, including in most aspects of law enforcement.
  • Business discrimination based on private information needs in most cases to be proscribed as well.

The politics of all this is hard to predict. But I’ll note that in the US:

  • There’s an emerging consensus that the criminal justice system is seriously flawed, on the side of harshness. However …
  • … criminal justice reform is typically very slow.
  • The libertarian movement (Ron Paul, Rand Paul, aspects of the Tea Party folks, etc.) seems to have lost steam.
  • The courts cannot be relied upon to be consistent. Questions about Supreme Court appointments even aside, Fourth Amendment jurisprudence in the US has long been confusing and confused.
  • Few legislators understand technology.

Realistically, then, the main plausible path to a good outcome is that the technology industry successfully pushes for one. That’s why I keep writing about this subject in what is otherwise a pretty pure technology blog.

Bottom line: The technology industry needs to drive privacy/surveillance public policy in directions that protect individual liberties. If it doesn’t, we’re all screwed.


I’m having issues with comment spam

Wed, 2016-05-18 15:12

My blogs are having a bad time with comment spam. While Akismet and other safeguards are intercepting almost all of the ~5000 attempted spam comments per day, the small fraction that get through still adds up to a large absolute number to deal with.

There’s some danger I’ll need to restrict comments here to combat it. (At the moment they’ve been turned off almost entirely on Text Technologies, which may be awkward if I want to put a post up there rather than here.) If I do, I’ll say so in a separate post. I apologize in advance for any inconvenience.


Some checklists for making technical choices

Mon, 2016-02-15 10:27

Whenever somebody asks for my help on application technology strategy, I start by trying to ascertain three things. The first is actually a prerequisite to almost any kind of useful conversation: figuring out, in general terms, what the hell it is we’re talking about. :)

My second goal is to ascertain technology constraints. Three common types are:

  • Compatible with legacy systems and/or enterprise standards.
  • Cheap, free and/or open source.
  • Proven, vetted by sufficiently many references, and/or generally having an “enterprise-y” reputation.

That’s often a short and straightforward discussion, except in those awkward situations when all three of my bullet points above are applicable at once.

The third item is usually more interesting. I try to figure out what is to be accomplished. That’s usually not a simple matter, because the initial list of goals and requirements is almost never accurate. It’s actually more common that I have to tell somebody to be more ambitious than that I need to rein them in.

Commonly overlooked needs include:

  • If you want to sell something and have happy users, you need a good UI.
  • You will also soon need tools and a UI for administration.
  • Customers demand low-latency/fresh data. Your explanation of why they don’t really need it doesn’t contradict the fact that they want it.
  • Providing data access and saying “You can hook up any BI tool you want and build charts” is not generally regarded as offering a good UI.
  • When “adding analytics” to something previously focused on short-request processing, it is common to underestimate the variety of things users will soon want to do. (One common reason for this under-estimate is that after years of being told it can’t be done, they’ve learned not to ask.)

And if you take one thing away from this post, then take this:

  • If you “know” exactly which features are or aren’t helpful to users, …
  • … and if you supply only what you “know” they should use, …
  • … then you will discover that what you “knew” wasn’t really accurate.

I guarantee it.

So far what I’ve said can be summarized as “Figure out what you’re trying to do, and what constraints there are on your choices for doing it.” The natural next step is to list the better-thought-of choices that meet your constraints, and — voila! — you have a short list. That’s basically correct, but there’s one significant complication.

Speaking of complications, what I’m portraying as a kind of linear/waterfall decision process of course usually involves lots of iteration, meandering around and general wheel-spinning. Real life is messy.

Simply put, there are many different kinds of application project. Other folks’ experience may not be as applicable to your case as you hope, because your case is different. So the rest of this post contains a checklist of distinctions among various different kinds of application project.

For starters, there are at least two major kinds of software development.

  • Many projects fit the traditional development model, elements of which are:
    • You — and this is very much a plural “you” — code something up more or less from scratch, using whatever language(s) and/or framework(s) you think make sense.
    • You break the main project into pieces in obvious ways (e.g. server back end vs. mobile front end), and then into further pieces for manageability.
    • There may also be database designs, test harnesses, connectors to other apps and so on.
  • But there are many other projects in which smaller bits of configuration and/or scripting are the essence of what you do.
    • This is particularly common in analytics, where there might be business intelligence tools, ETL tools, scripts running against Hadoop and so on. The original building of a data warehouse/hub/lake/reservoir may also fit this model.
    • It’s also what you do to get a major purchased packaged application into actual production.
    • It also is often what happens for websites that serve “content”.

Other significant distinctions include:

  • In-house vs. software-for-resale. If the developing organization is handing code to somebody else, then we’re probably talking about a more traditional kind of project. But if the whole thing is growing organically in-house, the script-spaghetti alternative may well be viable (in those projects for which it seems appropriate). Important subsidiary distinctions start with:
    • (If in-house) Truly in-house vs. out-sourced.
    • (If for resale) On-premises vs. SaaS. Or maybe not.
  • Kind(s) of analytics, if any. Technologies and development processes used can be very different depending upon whether the application features:
    • Business intelligence (not particularly real-time) as its essence.
    • Reporting or other BI as added functionality to an essentially operational app.
    • Low-latency BI, perhaps supported by (other) short-request processing.
    • Predictive model scoring.
  • The role(s) of the user(s). This influences how appealing and easy the UI needs to be.* Requirements are very different, for example, among:
    • Classic consumer-facing websites, with recommenders and so on.
    • Marketing websites targeted at a small group of business-to-business customers.
    • Data-sharing websites for existing consumer stakeholders.
    • Cheery benefits-information websites that the HR department wants employees to look at.
    • Purely internal apps meant to be used by (self-)important executives.
    • Internal apps meant to be used by line workers who will be given substantial training on them.
  • Certain kinds of application project stand almost separately from the rest of these considerations, because their starting point is legacy apps. Examples may be found among:
    • Migration/consolidation projects.
    • Refactoring projects.
    • Addition of incremental functionality.

*It also influences security, all good practices for securing internal apps notwithstanding.

Much also depends on the size and sophistication of the organization. What the “organization” is depends a bit on context:

  • In the case of software products, SaaS (Software as a Service) or other internet services, it is primarily the vendor. However …
  • … in B2B cases the sophistication of the customer organizations can also matter.
  • In the case of in-house enterprise development, there’s only one enterprise involved (duh). However …
  • … the “department” vs. “IT” distinction may be very important.

Specific considerations of this kind start:

  • Is me-too functionality enough, or does the enterprise seek competitive advantage through technology?
  • What kinds of technical risk does it seem prudent and desirable to take?

And that, in a nutshell, is why strategizing about application technology is often more complicated than it first appears.

Related links


Kafka and more

Mon, 2016-01-25 05:28

In a companion introduction to Kafka post, I observed that Kafka at its core is remarkably simple. Confluent offers a marchitecture diagram that illustrates what else is on offer, about which I’ll note:

  • The red boxes — “Ops Dashboard” and “Data Flow Audit” — are the initial closed-source part. No surprise that they sound like management tools; that’s the traditional place for closed source add-ons to start.
  • “Schema Management”
    • Is used to define fields and so on.
    • Is not equivalent to what is ordinarily meant by schema validation, in that …
    • … it allows schemas to change, but puts constraints on which changes are allowed.
    • Is done in plug-ins that live with the producer or consumer of data.
    • Is based on Avro, the Hadoop-oriented data serialization format (see the sketch just after this list).
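
To make the schema-evolution point concrete, here is a minimal sketch in Java that checks compatibility with the open-source Avro library directly (not Confluent’s own registry API); the PageView record and its fields are hypothetical. Adding a field with a default value is the kind of change such schema management permits; dropping a field or changing a type incompatibly would be rejected.

  import org.apache.avro.Schema;
  import org.apache.avro.SchemaCompatibility;

  public class SchemaEvolutionCheck {
      // v1: a hypothetical event record with a single required field.
      static final Schema V1 = new Schema.Parser().parse(
          "{\"type\":\"record\",\"name\":\"PageView\",\"fields\":["
        + "{\"name\":\"url\",\"type\":\"string\"}]}");

      // v2: adds a field *with a default*, so data written under v1
      // can still be read -- the sort of change that gets allowed.
      static final Schema V2 = new Schema.Parser().parse(
          "{\"type\":\"record\",\"name\":\"PageView\",\"fields\":["
        + "{\"name\":\"url\",\"type\":\"string\"},"
        + "{\"name\":\"referrer\",\"type\":\"string\",\"default\":\"\"}]}");

      public static void main(String[] args) {
          // Can a reader expecting v2 decode data written with v1? Yes: COMPATIBLE.
          SchemaCompatibility.SchemaPairCompatibility result =
              SchemaCompatibility.checkReaderWriterCompatibility(V2, V1);
          System.out.println(result.getType());
      }
  }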

Kafka offers little in the way of analytic data transformation and the like. Hence, it’s commonly used with companion products. 

  • Per Confluent/Kafka honcho Jay Kreps, the companion is generally Spark Streaming, Storm or Samza, in declining order of popularity, with Samza running a distant third.
  • Jay estimates that there’s such a companion product at around 50% of Kafka installations.
  • Conversely, Jay estimates that around 80% of Spark Streaming, Storm or Samza users also use Kafka. On the one hand, that sounds high to me; on the other, I can’t quickly name a counterexample, unless Storm originator Twitter is one such.
  • Jay’s views on the Storm/Spark comparison include:
    • Storm is more mature than Spark Streaming, which makes sense given their histories.
    • Storm’s distributed processing capabilities are more questionable than Spark Streaming’s.
    • Spark Streaming is generally used by folks in the heavily overlapping categories of:
      • Spark users.
      • Analytics types.
      • People who need to share stuff between the batch and stream processing worlds.
    • Storm is generally used by people coding up more operational apps.

If we recognize that Jay’s interests are obviously streaming-centric, this distinction maps pretty well to the three use cases Cloudera recently called out.

Complicating this discussion further is Confluent 2.1, which is expected late this quarter. Confluent 2.1 will include, among other things, a stream processing layer that works differently from any of the alternatives I cited, in that:

  • It’s a library running in client applications that can interrogate the core Kafka server, rather than …
  • … a separate thing running on a separate cluster.

The library will do joins, aggregations and so on, while relying on core Kafka for information about process health and the like. Jay sees this as more of a competitor to Storm in operational use cases than to Spark Streaming in analytic ones.
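
For flavor, here is a minimal sketch of that library-based approach, written against the API that eventually shipped as Kafka Streams (so the class names postdate this post, and the topic names are hypothetical). It maintains a continuously updated count of clicks per user inside an ordinary Java application, with no separate processing cluster.

  import java.util.Properties;
  import org.apache.kafka.common.serialization.Serdes;
  import org.apache.kafka.streams.KafkaStreams;
  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.StreamsConfig;
  import org.apache.kafka.streams.kstream.KStream;
  import org.apache.kafka.streams.kstream.KTable;
  import org.apache.kafka.streams.kstream.Produced;

  public class ClickCounter {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(StreamsConfig.APPLICATION_ID_CONFIG, "click-counter");
          props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // core Kafka cluster
          props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
          props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

          StreamsBuilder builder = new StreamsBuilder();
          KStream<String, String> clicks = builder.stream("clicks");       // hypothetical input topic, keyed by user id
          KTable<String, Long> countsByUser = clicks.groupByKey().count(); // continuously maintained aggregate
          countsByUser.toStream().to("clicks-per-user",                    // hypothetical output topic
              Produced.with(Serdes.String(), Serdes.Long()));

          // No separate processing cluster: this runs wherever the application runs,
          // while core Kafka handles partitioning, offsets and fault tolerance.
          new KafkaStreams(builder.build(), props).start();
      }
  }

That “run it inside your own app” model is exactly the contrast Jay draws with Storm and Spark Streaming above.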

We didn’t discuss other Confluent 2.1 features much, and frankly they all sounded to me like items from the “You mean you didn’t have that already??” list any young product has.

Related links
