Feed aggregator

The Adventures of the Trickster developer - Aliases

Gary Myers - Sun, 2013-05-26 21:02
The Trickster is a mythological being who enjoys potentially dangerous counter-intuitive behaviour. You'll often find him deep within the source code of large systems.

The Alias Trap

Generally an alias in a query is there to make it easier to understand, either for the developer or the database. However, the Trickster can reuse aliases within a query to make things more confusing.


desc scott.emp
       Name       Null?    Type
       ---------- -------- --------------------
1      EMPNO      NOT NULL NUMBER(4)
2      ENAME               VARCHAR2(10)
3      JOB                 VARCHAR2(9)
4      MGR                 NUMBER(4)
5      HIREDATE            DATE
6      SAL                 NUMBER(7,2)
7      COMM                NUMBER(7,2)
8      DEPTNO              NUMBER(2)

desc scott.salgrade
       Name      Null?    Type
       --------- -------- --------------------
1      GRADE              NUMBER
2      LOSAL              NUMBER
3      HISAL              NUMBER


desc scott.dept
       Name      Null?    Type
       --------- -------- --------------------
1      DEPTNO    NOT NULL NUMBER(2)
2      DNAME              VARCHAR2(14)
3      LOC                VARCHAR2(13)


The Trickster can try queries such as

SELECT e.ename, e.grade
FROM scott.emp e 
       JOIN scott.salgrade e ON e.sal BETWEEN e.losal AND e.hisal;

and


SELECT x.ename, x.dname
from scott.emp x join scott.dept x using (deptno);



As long as any prefixed column names involved are unique to a table, the database can work out what to do. 

If you find this in a 'live' query, it is normally one with at least half a dozen tables, where an extra join has been added without anyone noticing that the alias is already in use. And you'll discover it when a column is added to one of those tables, causing the database to throw up its hands in surrender. Sometimes you will find it in a dynamically constructed query, where it will fail seemingly at random. 
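
Here is a hypothetical illustration of that surrender (the ALTER is purely for demonstration): if a SAL column were ever added to SCOTT.SALGRADE, the first Trickster query above could no longer resolve e.sal.

ALTER TABLE scott.salgrade ADD (sal NUMBER);

SELECT e.ename, e.grade
FROM scott.emp e 
       JOIN scott.salgrade e ON e.sal BETWEEN e.losal AND e.hisal;
-- now fails, typically with ORA-00918: column ambiguously defined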

Once discovered, it isn't a mentally difficult bug to resolve. But first you have to get past the mental roadblock of "But all the columns already have an alias pointing to the table they come from".
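
The fix itself is just a matter of giving each table its own alias, for example for the first query:

SELECT e.ename, s.grade
FROM scott.emp e 
       JOIN scott.salgrade s ON e.sal BETWEEN s.losal AND s.hisal;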

Path To Understand Oracle Fusion Application

Oracle e-Business Suite - Sun, 2013-05-26 07:45

Path To Fusion


Categories: APPS Blogs

Review of Learning IPython for Interactive Computing and Data Visualization

Catherine Devlin - Sat, 2013-05-25 21:27
valuable but traditional – May 25, 2013 by Catherine Devlin – 4 stars (of 5)

Packt Publishing recently asked if I could review their new title, Learning IPython for Interactive Computing and Data Visualization. (I got the e-book free for doing the review, but they don't put any conditions on what I say about it.) I don't often do reviews like that, but I couldn't pass this one up because I'm so excited about the IPython Notebook.

It's a mini title, but it does contain a lot of information I was very pleased to see. First and foremost, this is the first book to focus on the IPython Notebook. That's huge. Also:

  • The installation section is thorough and goes well beyond the obvious, discussing options like using prepackaged all-in-one Python distributions like Anaconda.
  • Some of the improvements IPython can make to a programming workflow are nicely introduced, like the ease of debugging, source code inspection, and profiling with the appropriate magics.
  • The section on writing new IPython extensions is extremely valuable - it contains more complete examples than the official documentation does and would have saved me lots of time and excess code if I'd had it when I was writing ipython-sql.
  • There are introductions to all the classic uses that scientists doing numerical simulations value IPython for: convenience in array handling, Pandas integration, plotting, parallel computing, image processing, Cython for faster CPU-bound operations, etc. The book makes no claim to go deeply into any of these, but it gives introductory examples that at least give an idea of how the problems are approached and why IPython excels at them.

So what don't I like? Well, I wish for more. It's not fair to ask for more bulk in a small book that was brought to market swiftly, but I can wish for a more forward-looking, imaginative treatment. The IPython Notebook is ready to go far beyond IPython's traditional core usership in the SciPy community, but this book doesn't really make that pitch. It only touches lightly on how easily and beautifully IPython can replace shell scripting. It doesn't get much into the unexplored possibilities that IPython Notebook's rich display capabilities open up. (I'm thinking of IPython Blocks as a great example of things we can do with IPython Notebook that we never imagined at first glance). This book is a good introduction to IPython's uses as traditionally understood, but it's not the manifesto for the upcoming IPython Notebook Revolution.

The power of hybrid documentation/programs for learning and individual and group productivity is one more of IPython Notebook's emerging possibilities that this book only mentions in passing, and passes up a great chance to demonstrate. The sample code is downloadable as IPython Notebook .ipynb files, but the bare code is alone in the cells, with no use of Markdown cells to annotate or clarify. Perhaps this is just because Packt was afraid that more complete Notebook files would be pirated, but it's a shame.

Overall, this is a short book that achieves its modest goal: a technical introduction to IPython in its traditional uses. You should get it, because IPython Notebook is too important to sit around waiting for the ultimate book - you should be using the Notebook today. But save space on your bookshelf for future books, because there's much more to be said on the topic, some of which hasn't even been imagined yet.


PDF Reports with APEX at NYOUG

Marc Sewtz - Thu, 2013-05-23 16:25

The New York Oracle User Group Summer Meeting takes place on June 5th at St. John's University, right next to the World Trade Center in Downtown Manhattan. This looks like it’s going to be a very interesting meeting for Oracle DBAs and Oracle Developers. There are two sessions scheduled covering Oracle Database 12c and also two APEX sessions. There’s going to be a session on building mobile apps with APEX, which looks to be very hands-on. And I’ll be showing how to build custom PDF reports with APEX. I’ll walk you step by step through creating the report in APEX, downloading the XML, creating a report layout with tools like Stylevision, loading the layout back into APEX and associating the layout with your report. If you have a laptop running APEX 4.2.2 and the APEX Listener 2.0.2, you’re welcome to bring it and follow along – you will need at least a trial version of either Altova Stylevision or Stylus Studio.

Details about this event can be found on the NYOUG website:


DOAG 2013 Development in Bonn on June 19

Dietmar Aust - Wed, 2013-05-22 14:41

On June 19, 2013, the next DOAG Development conference takes place in Bonn, this time with the theme:
"Agile and Beyond – Projektmanagement in der Oracle-Softwareentwicklung" (Agile and Beyond – project management in Oracle software development). The motto of the second edition of this community conference is set: the effective execution of software projects is the thematic focus of the conference for developers, software architects and project leads.

Oracle APEX in particular is an excellent fit for running software projects in an agile way; I have achieved excellent results with it for years. With Oracle APEX you can implement quite substantial functionality in extremely short iteration cycles of one to four weeks. Delivering functionality at short intervals is exactly what helps win the trust of the business side. Still, there are a few pitfalls to watch out for so that everything runs smoothly ;).

Further information about the conference can be found here.

Please note: the early-bird discount ends today!

Best regards,
~Dietmar.

Improving data move on EXADATA II

Mathias Magnusson - Wed, 2013-05-22 07:00

Writing log records

The last post in this series introduced the problem briefly. You find that post here.

In this post I’ll talk about the changes made to make the writing of log records fast enough. There were 50 million records written per day, each of them pretty much in its own transaction. Of course the commit activity caused problems, as did log buffer issues. Some of this could be somewhat remedied with configuration.

The big issue though was that the writes themselves took too much time, and too often many sessions ended up in long contention chains. Yes, it would have been great to have the luxury of redesigning the whole logging setup from the ground up. But, as is often the case, the systems connecting to it were implemented in such a way that redesigning was not an option. Fixing the performance had to be done without requiring code changes in the systems performing the logging. Oh, joy.

So what caused the problem then? For the inserts it was pretty straightforward: too many transactions, each making an insert and a commit. This caused the indexes to become hotspots where all processes wanted to write at the same spot. Hash partitioning had been introduced, and that had led to less contention but slower performance. As the partitions existed on different parts of the disks, the write head had to be constantly moved, and that caused slower service times.

What could we do to make a big improvement while not affecting the code? We’re not talking about just a 10-20% improvement in any one area in this case, and even more important was to make the performance stable. That is, the most important thing was to ensure that there were no spikes where an insert suddenly took 20 times longer than usual. The contention chains that were occurring made performance spike such that the whole system became unusable.

The solution here turned out to be something so far from advanced technologies as questioning assumptions. The first time I asked “why do we have these indexes”, most people in the room thought I was just joking around. Eventually they realised that I was serious. After an amusing period of silence where I could see them thinking “Do we need to inform him that indexes are needed to enforce uniqueness and to support referential integrity?”, someone went ahead and did just that. OK, now we were on to a productive discussion, as of course that wasn’t what I meant. The followup discussion about why we needed referential integrity and uniqueness for this set of data was very enlightening for everyone. To make a long story short, it was not needed at all. It was there because it had always been there and nobody had questioned the need before.

How come we didn’t need the data to be unique? Well, this is log data. That is, it tells us what actions have been performed by the system. If some activity were reported twice, it really wouldn’t be the end of the world. The possible problem that some activity isn’t logged at all cannot be handled by defining unique constraints. That is pure system design and nothing I could improve or worsen by removing some indexes.

Thus, the indexes were removed together with the foreign keys (referential integrity).
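
In practice that amounts to statements along these lines (a minimal sketch; the table and constraint names here are invented, the real ones would come from the data dictionary):

ALTER TABLE log_entries DROP CONSTRAINT log_entries_header_fk;  -- drop the referential integrity first
ALTER TABLE log_headers DROP PRIMARY KEY DROP INDEX;            -- then the unique index behind each primary key
ALTER TABLE log_entries DROP PRIMARY KEY DROP INDEX;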

Sounds simple enough, but did it help? Did it ever! In the month since making the change, there has not been one report of a transaction that came anywhere close to taking too long. This simple solution made the logging so fast that it is no longer a concern.

The next post in this series will discuss the solution for moving data to the history tables. This process took around 16 hours and it had to become at least three times as fast. As you’ll see, moving all these rows can be done much faster than that.


How to locate all interactive reports in my application?

Dietmar Aust - Wed, 2013-05-22 06:07
Today my client decided to remove the email functionality from all interactive reports in their APEX application.

I could step through all the pages of my application and check whether there are any interactive reports used. This very manual approach can take some time, and I could make mistakes by overlooking some of the interactive reports that need to be modified.

A different approach would be to use the (officially supported) data dictionary views in APEX. Using this simple SQL query I can find all interactive reports on all pages of my application (which has the id 20120618):
select   workspace
, application_id
, application_name
, page_id
, region_name
, download_formats
from apex_application_page_ir
where application_id=20120618
order by page_id;
This will show all interactive reports which are used in my application:


We can even look for all interactive reports that need to be fixed by specifically looking for all reports where the download format email is enabled:
select   workspace
, application_id
, application_name
, page_id
, region_name
, download_formats
from apex_application_page_ir
where application_id=20120618
and instr(download_formats, 'EMAIL') > 0
order by page_id;
These data dictionary views contain a wealth of information about your application. This is a real strength of Oracle Application Express, due to its nature being driven by a metadata repository.

Thus, from a quality management perspective you could use the following query to make sure all interactive reports are configured the same way, i.e. having flashback disabled, using the same display position for pagination (Top and Bottom - Left) and showing the dropdown list for selecting the rows per page:
select   workspace
, application_id
, application_name
, page_id
, region_name
, pagination_display_position
, show_rows_per_page
, show_flashback
from apex_application_page_ir
where application_id=20120618
and ( pagination_display_position <> 'Top and Bottom - Left'
or show_rows_per_page = 'No'
or show_flashback = 'Yes' )
order by page_id;
Here is the result:


But how do we find the appropriate views? The APEX data dictionary contains information about all available data dictionary views and all of their columns.

Typically I use the following query to find all information about interactive reports; all related views contain IR in the view name:
select *
from apex_dictionary
where apex_view_name like '%IR%'
This will show us all views including their respective columns:


If we want to display only the views without their columns, we can add a filter on column_id 0; there the APEX team added the overall description of the view itself:
select *
from apex_dictionary
where apex_view_name like '%IR%'
and column_id=0;
Now, we get only the view names including their description:


Regards,
~Dietmar.

Target Verification

Antony Reynolds - Tue, 2013-05-21 13:02
Verifying the Target

I just built a combined OSB, SOA/BPM, BAM clustered domain.  The biggest hassle is validating that the resource targeting is correct.  There is a great appendix in the documentation that lists all the modules and resources with their associated targets.  The only problem is that the appendix is six pages of small print.  I manually went through the first page, verifying my targeting, until I thought ‘there must be a better way of doing this’.  So this blog post is the better way.

WLST to the Rescue

WebLogic Scripting Tool allows us to query the MBeans and discover what resources are deployed and where they are targeted.  So I built a script that iterates over each of the following resource types and verifies that they are correctly targeted:

  • Applications
  • Libraries
  • Startup Classes
  • Shutdown Classes
  • JMS System Resources
  • WLDF System Resources
Source Data

To get the data to verify my domain against, I copied the tables from the documentation into a text file.  The copy ended up putting the resource on the first line and the targets on the second line.  Rather than reformat the data I just read the lines in pairs, storing the resource as a string and splitting apart the targets into a list of strings.  I then stored the data in a dictionary with the resource string as the key and the target list as the value.  The code to do this is shown below:

# Load resource and target data from file created from documentation
# File format is a one line with resource name followed by
# one line with comma separated list of targets
# fileIn - Resource & Target File
# accum - Dictionary containing mappings of expected Resource to Target
# returns - Dictionary mapping expected Resource to expected Target
def parseFile(fileIn, accum) :
  # Load resource name
  line1 = fileIn.readline().strip('\n')
  if line1 == '':
    # Done if no more resources
    return accum
  else:
    # Load list of targets
    line2 = fileIn.readline().strip('\n')
    # Convert string to list of targets
    targetList = map(fixTargetName, line2.split(','))
    # Associate resource with list of targets in dictionary
    accum[line1] = targetList
    # Parse remainder of file
    return parseFile(fileIn, accum)

This makes it very easy to update the lists by just copying and pasting from the documentation.

Each table in the documentation has a corresponding file that is used by the script.

The data read from the file has the target names mapped to the actual domain target names which are provided in a properties file.

Listing & Verifying the Resources & Targets

Within the script I move to the domain configuration MBean and then iterate over the resources deployed and for each resource iterate over the targets, validating them against the corresponding targets read from the file as shown below:

# Validate that resources are correctly targeted
# name - Name of Resource Type
# filename - Filename to validate against
# items - List of Resources to be validated
def validateDeployments(name, filename, items) :
  print name+' Check'
  print "====================================================="
  fList = loadFile(filename)
  # Iterate over resources
  for item in items:
    try:
      # Get expected targets for resource
      itemCheckList = fList[item.getName()]
      # Iterate over actual targets
      for target in item.getTargets() :
        try:
          # Remove actual target from expected targets
          itemCheckList.remove(target.getName())
        except ValueError:
          # Target not found in expected targets
          print 'Extra target: '+item.getName()+': '+target.getName()
      # Iterate over remaining expected targets, if any
      for refTarget in itemCheckList:
        print 'Missing target: '+item.getName()+': '+refTarget
    except KeyError:
      # Resource not found in expected resource dictionary
      print 'Extra '+name+' Deployed: '+item.getName()
  print

Obtaining the Script
I have uploaded the script here.  It is a zip file containing all the required files together with a PDF explaining how to use the script.

To install just unzip VerifyTargets.zip. It will create the following files

  • verifyTargets.sh
  • verifyTargets.properties
  • VerifyTargetsScriptInstructions.pdf
  • scripts/verifyTargets.py
  • scripts/verifyApps.txt
  • scripts/verifyLibs.txt
  • scripts/verifyStartup.txt
  • scripts/verifyShutdown.txt
  • scripts/verifyJMS.txt
  • scripts/verifyWLDF.txt
Sample Output

The following is sample output from running the script:

Application Check
=====================================================
Extra Application Deployed: frevvo
Missing target: usermessagingdriver-xmpp: optional
Missing target: usermessagingdriver-smpp: optional
Missing target: usermessagingdriver-voicexml: optional
Missing target: usermessagingdriver-extension: optional
Extra target: Healthcare UI: soa_cluster
Missing target: Healthcare UI: SOA_Cluster ??
Extra Application Deployed: OWSM Policy Support in OSB Initializer Aplication

Library Check
=====================================================
Extra Library Deployed: oracle.bi.adf.model.slib#1.0-AT-11.1-DOT-1.2-DOT-0
Extra target: oracle.bpm.mgmt#11.1-DOT-1-AT-11.1-DOT-1: AdminServer
Missing target: oracle.bpm.mgmt#11.1.1-AT-11.1.1: soa_cluster
Extra target: oracle.sdp.messaging#11.1.1-AT-11.1.1: bam_cluster

StartupClass Check
=====================================================

ShutdownClass Check
=====================================================

JMS Resource Check
=====================================================
Missing target: configwiz-jms: bam_cluster

WLDF Resource Check
=====================================================

IMPORTANT UPDATE

Since posting this I have discovered a number of issues.  I have updated the configuration files to correct these problems.  The changes made are as follows:

  • Added WLS_OSB1 server mapping to the script properties file (verifyTargets.properties) to accommodate OSB singletons and modified script (verifyTargets.py) to use the new property.
  • Changes to verifyApplications.txt
    • Changed target from OSB_Cluster to WLS_OSB1 for the following applications:
      • ALSB Cluster Singleton Marker Application
      • ALSB Domain Singleton Marker Application
      • Message Reporting Purger
    • Added following application and targeted at SOA_Cluster
      • frevvo
    • Adding following application and targeted at OSB_Cluster & Admin Server
      • OWSM Policy Support in OSB Initializer Aplication
  • Changes to verifyLibraries.txt
    • Adding following library and targeted at OSB_Cluster, SOA_Cluster, BAM_Cluster & Admin Server
      • oracle.bi.adf.model.slib#1.0-AT-11.1.1.2-DOT-0
    • Modified targeting of following library to include BAM_Cluster
      • oracle.sdp.messaging#11.1.1-AT-11.1.1

Make sure that you download the latest version.  It is at the same location but now includes a version file (version.txt).  The contents of the version file should be:

FMW_VERSION=11.1.1.7

SCRIPT_VERSION=1.1

Conditional Unique Indexes

Karl Reitschuster - Fri, 2013-05-17 04:25

Matrix : What you must learn is that these rules are no different than rules of a computer system. Some of them can be bent, others can be broken. Understand?
 
Usually a unique index guarantees the uniqueness of all rows in a specific table that have non-null values in the indexed columns. But what if, depending on a given business type stored in the row, some rows need to be unique and some do not?
Combining Oracle's function-based index feature with a unique index makes this possible.

--
-- SCOPE : OASIS
--
-- DDL:CALLITEMS_UK01           :INDEX.MOD              - TEST
--

DROP   INDEX CALLITEMS_UK01;
CREATE UNIQUE INDEX CALLITEMS_UK01
       ON    CALLITEMS
             ( CASE UNIQUE_REF WHEN 1 THEN UPPER(TRIM(ITEMCODE)) ELSE NULL END
             , CASE UNIQUE_REF WHEN 1 THEN ITEMTYPE ELSE NULL END
             , CASE UNIQUE_REF WHEN 1 THEN ACTIVESINCE ELSE NULL END  )
  TABLESPACE OASISIXM
;

 

Depending on the uniqueness flag UNIQUE_REF, a row may need to be unique relative to other rows or not. This makes sense for specific call item types. In our project a CALLITEMTYPES table controlled the uniqueness of specific call item types, populating the UNIQUE_REF flag to the CALLITEMS table.
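
A minimal sketch of the effect, assuming CALLITEMS has no other mandatory columns and that ITEMCODE/ITEMTYPE are character columns:

-- Two rows flagged as unique: the second insert fails with ORA-00001
INSERT INTO CALLITEMS (ITEMCODE, ITEMTYPE, ACTIVESINCE, UNIQUE_REF)
VALUES ('IC-1', 'STD', DATE '2013-05-01', 1);
INSERT INTO CALLITEMS (ITEMCODE, ITEMTYPE, ACTIVESINCE, UNIQUE_REF)
VALUES (' ic-1 ', 'STD', DATE '2013-05-01', 1);   -- same key after UPPER(TRIM(...))

-- Rows not flagged map to an all-NULL index entry and may repeat freely
INSERT INTO CALLITEMS (ITEMCODE, ITEMTYPE, ACTIVESINCE, UNIQUE_REF)
VALUES ('IC-2', 'STD', DATE '2013-05-01', 0);
INSERT INTO CALLITEMS (ITEMCODE, ITEMTYPE, ACTIVESINCE, UNIQUE_REF)
VALUES ('IC-2', 'STD', DATE '2013-05-01', 0);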

cheers

/K

 

 




Handling Service Calls from Oracle BPM

Jan Kettenis - Fri, 2013-05-17 03:52
In this posting I explain why in Oracle BPM Suite 11g you should consider using reusable subprocesses to handle service calls. The example is based on a synchronous service, but the same holds for asynchronous and fire & forget services.

If you have never experienced during process design with BPMN that the final process model became twice, if not three times, more complex than you thought it would be, then you haven't been doing process design for real. You might, for example, have experienced how what initially looked like a simple service call finally exploded in your face. Let me tell you how it did in mine. I made up the example I'm using, but it is not far from reality.

It all started with a simple call to a synchronous service to store an order using some Insert and Update Order Service, or Upsert Order for short. The initial process model looked as simple as this.



This process is kicked off as a service with some initial customer data, after which a SalesRep enters an order that then gets stored using the Upsert Order service. But then I found that to use this service, I had to instantiate some custom request header. So I had to add a Script activity, as you cannot do that in the Service activity itself. The result was this:


But now I was exposing technical aspects in a business process, so to prevent the business audience from getting distracted by that, I hid the complexity inside an embedded Store Order subprocess, like this:



And inside, the embedded subprocess looked like this:

 
So far so good. But then I found that I also had to create a message id, log the request and response messages, and catch exceptions to forward them to a generic exception handling process, because the business might be involved in solving data issues. So before I knew it, my embedded subprocess looked something like this:


Hurray for the embedded subprocess, which hid all this complexity! But then I had to call the same Upsert Order service a second time, requiring me to do all the wiring again. Arghh!! Which brought me to the "brilliant" idea of pushing all this logic to a reusable subprocess, like this:


So in the business process itself, I only have to map the (context-specific) request and response messages to and from what has now become the generic, reusable wiring of the service call, like this:


Now you might ask yourself why we did not do the service wiring in BPEL. After all, implementing logic around service calls (like exception handling, or more complex data transformations) is easier in BPEL. But not every BPM developer is also good in BPEL, making BPMN the better choice from a development process point of view. However, the next evolution could be to create an even more generic Service Request/Response Handler that is capable of calling any synchronous service, taking an anyType input and output argument.

ElasticSearch Server

Marcelo Ochoa - Thu, 2013-05-16 06:59

Continuing my previous post, the second book I worked on for PacktPub as a technical reviewer was "Elasticsearch Server".
It is a very good book on this Lucene-based search server, with many examples and concepts for exploiting the full potential of free-text search using ElasticSearch.
As with the book described in the previous post (Apache Solr 4 Cookbook), this book was a perfect resource during the development of Scotas's Push Connector, because it allowed me to integrate ElasticSearch and exploit its full potential.
Overview

  • Learn the basics of ElasticSearch like data indexing, analysis, and dynamic mapping
  • Query and filter ElasticSearch for more accurate and precise search results
  • Learn how to monitor and manage ElasticSearch clusters and troubleshoot any problems that arise
Table of Contents


  • Chapter 1: Getting Started with ElasticSearch Cluster
  • Chapter 2: Searching Your Data
  • Chapter 3: Extending Your Structure and Search
  • Chapter 4: Make Your Search Better
  • Chapter 5: Combining Indexing, Analysis, and Search
  • Chapter 6: Beyond Searching
  • Chapter 7: Administrating Your Cluster
  • Chapter 8: Dealing with Problems

Authors
Rafał Kuć
Rafał Kuć is a born team leader and software developer. Currently working as a Consultant and a Software Engineer at Sematext Inc, where he concentrates on open source technologies such as Apache Lucene and Solr, ElasticSearch, and Hadoop stack. He has more than 10 years of experience in various software branches, from banking software to e-commerce products. He is mainly focused on Java, but open to every tool and programming language that will make the achievement of his goal easier and faster. Rafał is also one of the founders of the solr.pl site, where he tries to share his knowledge and help people with their problems with Solr and Lucene. He is also a speaker for various conferences around the world such as Lucene Eurocon, Berlin Buzzwords, and ApacheCon. Rafał began his journey with Lucene in 2002 and it wasn't love at first sight. When he came back to Lucene later in 2003, he revised his thoughts about the framework and saw the potential in search technologies. Then Solr came and that was it. From then on, Rafał has concentrated on search technologies and data analysis. Right now Lucene, Solr, and ElasticSearch are his main points of interest. Rafał is also the author of Apache Solr 3.1 Cookbook and the update to it—Apache Solr 4 Cookbook—published by Packt Publishing.
Marek Rogoziński
Marek Rogoziński is a software architect and consultant with more than 10 years of experience. His specialization concerns solutions based on open source projects such as Solr and ElasticSearch. He is also a co-founder of the solr.pl site, publishing information and tutorials about the Solr and Lucene library. He currently holds the position of Chief Technology Officer at Smartupz, the vendor of the Discourse™ social collaboration software.
Conclusion
Many developers know Lucene but do not know ElasticSearch as a product; whether you are getting to know it or want to know it better, this book will take you into the product and its potential.




Apache Solr 4 Cookbook

Marcelo Ochoa - Thu, 2013-05-16 06:34

During my summer I had the chance to work for Packtpub as a technical reviewer of two great books.
The first, "Apache Solr 4 Cookbook", is a very good book for entering the world of free-text search, integrated into any development or portal, with real and practical examples using the latest version of Apache Solr.
Overview

  • Learn how to make Apache Solr search faster, more complete, and comprehensively scalable
  • Solve performance, setup, configuration, analysis, and query problems in no time
  • Get to grips with, and master, the new exciting features of Apache Solr 4


Table of Contents
  • Chapter 1: Apache Solr Configuration
  • Chapter 2: Indexing Your Data
  • Chapter 3: Analyzing Your Text Data
  • Chapter 4: Querying Solr
  • Chapter 5: Using the Faceting Mechanism
  • Chapter 6: Improving Solr Performance
  • Chapter 7: In the Cloud
  • Chapter 8: Using Additional Solr Functionalities
  • Chapter 9: Dealing with Problems
  • Appendix: Real-life Situations
Author
Rafał Kuć is a born team leader and software developer. Currently working as a Consultant and a Software Engineer at Sematext Inc, where he concentrates on open source technologies such as Apache Lucene and Solr, ElasticSearch, and Hadoop stack. He has more than 10 years of experience in various software branches, from banking software to e-commerce products. He is mainly focused on Java, but open to every tool and programming language that will make the achievement of his goal easier and faster. Rafał is also one of the founders of the solr.pl site, where he tries to share his knowledge and help people with their problems with Solr and Lucene. He is also a speaker for various conferences around the world such as Lucene Eurocon, Berlin Buzzwords, and ApacheCon. Rafał began his journey with Lucene in 2002 and it wasn't love at first sight. When he came back to Lucene later in 2003, he revised his thoughts about the framework and saw the potential in search technologies. Then Solr came and that was it. From then on, Rafał has concentrated on search technologies and data analysis. Right now Lucene, Solr, and ElasticSearch are his main points of interest. Rafał is also the author of Apache Solr 3.1 Cookbook and the update to it—Apache Solr 4 Cookbook—published by Packt Publishing.


Conclusion
If you are about to start a development project or enter the world of free-text search, this book is a very good investment of time and resources; the practical examples really help you acquire new concepts in a simple, practical way with minimal effort.




SCNs and Timestamps

Gary Myers - Thu, 2013-05-16 06:05
The function ORA_ROWSCN returns an SCN from a row (or more commonly the block, unless ROWDEPENDENCIES has been used).

select distinct ora_rowscn from PLAN_TABLE;

But unless you're a database, that SCN doesn't mean much. You can put things in some sort of order, but not much more.

Much better is

select sys.scn_to_timestamp(ora_rowscn) from PLAN_TABLE;

unless it gives you

ORA-08181: specified number is not a valid system change number

which is database-speak for "I can't remember exactly".

That's when you might be able to fall back on this, plugging the SCN in place of the **** :

select * from 
  (select first_time, first_change# curr_change, 
          lag(first_change#) over (order by first_change#) prev_change,
          lead(first_change#) over (order by first_change#) next_change
  FROM v$log_history)
where **** between curr_change and next_change
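
For example, with a made-up SCN of 1234567 and trimmed down to the column of interest:

select first_time
from  (select first_time, first_change# curr_change, 
              lead(first_change#) over (order by first_change#) next_change
       FROM v$log_history)
where 1234567 between curr_change and next_change;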

It won't be exact, and it doesn't stretch back forever. But it is better than nothing.

PS. This isn't a perfect way to find when a row was really inserted/updated. It is probably at the block level, and there's 'stuff' that can happen which doesn't actually change the row but might still reset the SCN. If you're looking for perfection, you're at the wrong blog :)



Faster data move on EXADATA I

Mathias Magnusson - Wed, 2013-05-15 07:00

Introduction

In my work among other things I tune and tweak solutions for EXADATA. Today I’ll write about a big improvement we achieved with a process that moves data from the operational tables to the ones where the history is stored.

This will not be a technical post. While I talk about using advanced technologies, I will not discuss code or deep details of them in this post.

And yes, when I say post, I mean a series of posts. This will be too long to be a single post. I’ll break it up into an introduction and then a post on each area of improvement.

Let’s first discuss the before situation. This set of tables is logged to during the day. These log records are needed both to investigate how transactions were executed and to satisfy legal requirements. It is a highly regulated industry, and for good reason, as mistakes could put someone’s life in danger.

In this situation the solutions were writing around 50 million log records per day to five tables. These tables all had a primary key based on a sequence, and there was also referential integrity set up. This means that for the indexes, all processes were writing to the same place on disk. The lookups for the referential integrity also hit the same place. An attempt to remedy some of this had been made by hash partitioning the tables. The write activity was intense enough during the day that most of the logging had to be turned off, as the solution otherwise was too slow. This of course has legal as well as diagnostic implications.

What’s worse is that once all that data was written, it had to be moved to another database where the history is kept. This process was even slower, and the estimate for how long it would take to move one day’s worth of data was 16 hours. It never did run for that long, as it was not allowed to run during the day; it had to start after midnight and finish before 7 am. As a result the volume built up every night until logging was turned off for a while, and the move then caught up a little each night.

This series will have the following parts:

  1. Introduction (this post)
  2. Writing log records
  3. Moving to history tables
  4. Reducing storage requirements
  5. Wrap-up and summary

The plan is to publish one part each week. Hopefully I’ll have time to publish some more technical posts between the posts in this series.


Documentation for Cloudera Impala 1.0.1

Tahiti Views - Tue, 2013-05-14 23:13
The Cloudera docs have been undergoing some reorganization lately. That applies double to the Impala documentation, which moved from beta to GA status and has been restructured at the same time. The June 1.0.1 release brings some new reference information, so I've broken out the SQL language links too. For posterity (and a little Googlerank mojo), here's the current layout of the Impala docs…

Adobe Fireworks Colour Picker Problem on Mac OS X Mountain Lion

Andrew Tulley - Tue, 2013-05-14 07:53

I’ve had a problem with Adobe Fireworks CS5 ever since upgrading to a Retina Display MacBook. The problem is that the eye-dropper colour picker tool just doesn’t work any more. At all. Very frustrating.

I came across a very simple workaround today which can be found here:

http://simianstudios.com/blog/post/colour-picker-bug-workaround-for-adobe-fireworks-cs4-in-os-x-lion

 



Exalytics - Now with 2.4 Tb of Flash

Look Smarter Than You Are - Mon, 2013-05-13 22:26
I'm not sure why there wasn't a major announcement about this, but as of April 9, customers buying an Exalytics machine to speed up their Oracle Business Intelligence can get 2.4 Tb of PCIe flash drives from Oracle certified and engineered to run on Exalytics.  The cost (as of April 9's price list) is $35,000 (search for "flash upgrade kit").

While I haven't seen one in action yet, the flash pack seems to be 6 Sun Flash Accelerator F40 PCIe Cards each of which has a capacity of 400 Gb.  These cards run amazingly fast with read times of more than 2 GB/second (write time is about half that speed at 1+ GB/second).  These cards normally sell for almost $6K each, so Oracle is providing the flash add-on pack for no more markup than you'd get if you bought them on your own (but you'd then have to get them into the Exalytics machine all on your own).
This Matters If You Own Essbase
Why would you want this?  Essbase, primarily.  Essbase uses a ton of disk I/O and one of the ways Exalytics can speed up Essbase is by pulling your cubes into a RAMDisk (since you have 1 Tb of RAM to play with).  At some point, though, it has to get that data from physical drives to a RAMDisk (unless you're building all your cubes in memory at start up each time).  Having blazingly speedy flash drives with .25 millisecond read latency allows you to store your cubes on the flash drive and then pull them into RAM much more quickly than reading from traditional drives.

We have tested Essbase running on flash drives and it helps everything (particularly minimizes the negative effects of fragmentation since seek time drops to basically nothing on flash).  For customers buying Exalytics primarily for Essbase, the Exalytics Flash Upgrade Kit should be strongly considered with every Exalytics purchase (and if you already own Exalytics, buy it to put on top).

OBIEE is much less affected by hard drives, so while it may help OBIEE, this really matters a lot more to Essbase customers.
Oracle EPM Fully Supported on Exalytics
Since we're on the subject of Exalytics, now that 11.1.2.3 is out, all Oracle EPM/Hyperion components certified to run on Linux will run on Exalytics PS2.  These include:

  • Administration Services
  • Calculation Manager
  • EPM Workspace
  • Essbase Server
  • Essbase Studio Server
  • Financial Reporting
  • Interactive Reporting (32-bit only)
  • Oracle HTTP Server
  • Planning
  • Profitability and Cost Management
  • Production Reporting (32-bit only)
  • Provider Services
  • Reporting and Analysis Framework Services and Web Application 
  • Shared Services
  • Web Analysis
Categories: BI & Warehousing

I Love Standards…There Are So Many Of Them

Mary Ann Davidson - Mon, 2013-05-13 16:32

The title is not an original bon mot by me – it’s been said often, by others, and by many with more experience than I have in developing standards.  It is with mixed emotions that I feel compelled to talk about a (generally good and certainly well-intentioned) standards organization: the US National Institute of Standards and Technology (NIST). I should state at the outset that I have a lot of respect for NIST. In the past, I have even urged a Congressional committee (House Science and Technology, if memory serves) to try to allocate more money to NIST for cybersecurity standards work.  I’ve also met a number of people who work at NIST – some of whom have since left NIST and brought their considerable talents to other government agencies, one of whom I ran into recently and mentioned how I still wore a black armband years after he had left NIST because he had done such great work there and I missed working with him. All that said, I’ve seen a few trends at NIST recently that are – of concern.

When in Doubt, Hire a Consultant

I’ve talked in other blog entries about the concern I have that so much of NIST’s outwardly-visible work seems to be done not by NIST but by consultants. I’m not down on consultants for all purposes, mind you – what is having your tires rotated and your oil changed except “using a car consultant?” However, in the area of developing standards or policy guidance it is of concern, especially when, as has been the case recently, the number of consultants working on a NIST publication or draft document is greater than the number of NIST employees contributing to it.  There are business reasons, often, to use consultants. But you cannot, should not, and must not “outsource” a core mission, or why are you doing it? This is true in spades for government agencies.  Otherwise, there is an entire beltway’s worth of people just aching to tell you about a problem you didn’t know you had, propose a standard (or regulation) for it, write the standard/regulation, interpret it and “certify” that Other Entities meet it. To use a song title, “Nice Work If You Can Get It.”* Some recent consultant-heavy efforts are all over the map, perhaps because there isn’t a NIST employee to say, "you say po-TAY-to, I say po-TAH-to, let's call the whole thing off." ** (Or at least make sure the potato standard is Idaho russet – always a good choice.)

Another explanation – not intentionally sinister but definitely a possibility – is that consultants’ business models are often tied to repeat engagements.  A short, concise, narrowly-tailored and readily understandable standard isn’t going to generate as much business for them as a long, complex and “subject to interpretation – and five people will interpret this six different ways” – document. 

In short: I really don’t like reading a document like NISTIR 7622 (more on which below) where most of the people who developed it are consultants. NIST’s core mission is standards development: NIST needs to own their core mission and not farm it out.

Son of FISMA

I have no personal experience with the Federal Information Security Management Act of 2002 (FISMA) except the amount of complaining I hear about it second hand, which is considerable.  The gist of the complaints is that FISMA asks people to do a lot of stuff that looks earnestly security oriented, not all of which is equally important.

Why should we care? To quote myself (in an obnoxiously self-referential way): “time, money and (qualified security) people are always limited.” That is, the more security degenerates into a list of the 3000 things you Must Do To Appease the Audit Gods, the less real security we will have (really, who keeps track of 3000 Must Dos, much less does them? It sounds like a demented Girl Scout merit badge). And, in fact, the one thing you read about FISMA is that many government agencies aren’t actually compliant because they missed a bunch of FISMA checkboxes.  Especially since knowledgeable resources (that is, good security people) are limited, it’s much better to do the important things well than maintain the farce that you can check 3000 boxes, which certainly cannot all be equally important. (It’s not even clear how many of these requirements contribute to actual security as opposed to supporting the No Auditor Left Behind Act.)

If the scuttlebutt I hear is accurate, the only thing that could make FISMA worse is – you guessed it –adding more checkboxes. It is thus with considerable regret that I heard recently that NIST updated NIST Special Publication 800-53 (which NIST has produced as part of its statutory responsibilities under FISMA). The Revision 4 update included more requirements in the area of supply chain risk management and software assurance and trustworthiness.  Now why would I, a maven of assurance, object to this? Because a) we already have actual standards around assurance, b) having FISMA-specific requirements means that pretty much every piece of Commercial Off-the-Shelf (COTS) software will have to be designed and built to be FISMA compliant or COTS software/hardware vendors can’t sell into the Federal government and (c) we don’t want a race by other governments to come up with competing standards, to the point where we’re checking not 3000 but 9000 or 12000 boxes and probably can’t come up with a single piece of COTS globally let alone one that meets all 12000 requirements. (Another example is the set of supply chain/assurance requirements in the telecom sector in India that include a) asking for details about country of origin and b) specific contractual terms that buyers anywhere in the supply chain are expected to use.  An unintended result is that a vendor will need to (a) disclose sensitive supply chain data (which itself may be a trade secret) and (b) modify processes around global COTS to sell into one country.)

Some of the new NIST guidance is problematic for any COTS supplier. To provide one example, consider:

“The artifacts generated by these development activities (e.g., functional specifications, high-level/low-level designs, implementation representations [source code and hardware schematics], the results from static/dynamic testing and code analysis (emphasis mine)) can provide important evidence that the information systems (including the components that compose those systems) will be more reliable and trustworthy. Security evidence can also be generated from security testing conducted by independent, accredited, third-party assessment organizations (e.g., Common Criteria Testing Laboratories (emphasis mine), Cryptographic/Security Testing Laboratories, and other assessment activities by government and private sector organizations.)”

For a start, to the extent that components are COTS, such “static testing” is certainly not going to happen by a third party nor will the results be provided to a customer. Once you allow random customers – especially governments – access to your source code or to static analysis results, you might as well gift wrap your code and send it to a country that engages in industrial espionage, because no vendor, having agreed to this for one government, will ever be able to say no to Nation States That Steal Stuff.  (And static analysis results, to the extent some vulnerabilities are not fixed yet, just provide hackers a road map for how and where to break in.) Should vendors do static analysis themselves? Sure, and many do. It’s fair for customers to ask whether this is done, and how a supplier ensures that the worst stuff is fixed before the supplier ships product. But it is worth noting – again – that if these tools were easy to use and relatively error free, everyone would be at a high level of tools usage maturity years ago. Using static analysis tools is like learning Classic Greek – very hard, indeed. (OK, koinic Greek isn’t too bad but Homeric Greek or Linear B, fuhgeddabout it.)

With reference to the Common Criteria (CC), the difficulty now is that vendors have a much harder time doing CC evaluations than in the past because of other forces narrowing CC evaluations into a small set of products that have Protection Profiles (PPs). The result has been and will be for the foreseeable future – fewer evaluated products. The National Information Assurance Partnership (NIAP) – the US evaluation scheme – has ostensibly good reasons for their “narrowed/focused” CC-directions. But it is more than a little ironic that the NIST 800-53 revision should mention CC evaluations as an assurance measure at a time when the pipeline of evaluated products is shrinking, in large part due to the directions taken by another government entity (NIAP). What is industry to make of this apparent contradiction? Besides corporate head scratching, that is.

There are other – many other sections – I could comment upon, but one sticks out as worthy of notice:

“Supply chain risk is part of the advanced persistent threat (APT).”

It’s bad enough that “supply chain risk” is such a vague term that it encompasses basically any and all risk of buying from a third party. (Including “buying a crummy product” which is not actually a supply chain-specific risk but a risk of buying any and all products.) Can bad guys try to corrupt the supply chain? Sure. Does that make any and all supply chain risks “part of APT?” Heck, no. We have enough hysteria about supply chain risk and APT without linking them together for Super-Hysteria. 

To sum up, I don’t disagree that customers in some cases – and for some, not all applications – may wish higher levels of assurance or have a heightened awareness of cyber-specific supply chain threats (e.g., counterfeiting and deliberate insertion of malware in code). However, incorporation of supply chain provisions and assurance requirements into NIST 800-53 has the unintended effect of requiring any and all COTS products to be sold to government agencies – which is all of them as far as I know – to be subject to FISMA.

What if the state of Idaho decided that every piece of software had to attest to the fact that No Actual Moose were harmed during the production of this software and that any moose used in code production all had background checks? What if every other state enumerated specific assurance requirements and specific supply chain risk management practices? What if they conflict with each other, or with the NIST 800-53 requirements? I mean really, why are these specific requirements called out in NIST 800-53 at all? There really aren’t that many ways to build good software.  FISMA as interpreted by NIST 800-53 really, really shouldn’t roll its own.

IT Came from Outer Space – NISTIR 7622

I’ve already opined at length about how bad the NIST Interagency Report (NISTIR) 7622 is. I had 30 pages of comments on the first 80-page draft. The second draft only allowed comments of the Excel Spreadsheet form:  “Section A.b, change ‘must’ to ‘should,’ for the reason ‘because ‘must’ is impossible’” and so on. This format didn’t allow for wholesale comments such as “it’s unclear what problem this section is trying to solve and represents overreach, fuzzy definition and fuzzier thinking.”  NISTIR 7622 was and is so dreadful that an industry association signed a letter that said, in effect, NISTIR 7622 was not salvageable, couldn’t be edited to something that could work, and needed to be scrapped in toto.

I have used NISTIR 7622 multiple times as a negative example: most recently, to an audience of security practitioners as to why they need to be aware of what regulations are coming down the pike and speak up early and often.  I also used it in the context of a (humorous) paper I did at the recent RSA Conference with a colleague, the subject of which was described as “doubtless-well-intentioned legislation/regulation-that-has-seriously-unfortunate-yet-doubtless-unintended-consequences.”  That’s about as tactful as you can get.


I Love Standards…There Are So Many Of Them

Mary Ann Davidson - Mon, 2013-05-13 16:32

The title is not an original bon mot by me – it’s been said often, by others, and by many with more experience than I have in developing standards. It is with mixed emotions that I feel compelled to talk about a (generally good and certainly well-intentioned) standards organization: the US National Institute of Standards and Technology (NIST). I should state at the outset that I have a lot of respect for NIST. In the past, I have even urged a Congressional committee (House Science and Technology, if memory serves) to try to allocate more money to NIST for cybersecurity standards work. I’ve also met a number of people who work at NIST – some of whom have since left NIST and brought their considerable talents to other government agencies. One of them I ran into recently, and I told him that I had worn a black armband for years after he left NIST because he had done such great work there and I missed working with him. All that said, I’ve seen a few trends at NIST recently that are of concern.


When in Doubt, Hire a Consultant


I’ve talked in other blog entries about the concern I have that so much of NIST’s outwardly visible work seems to be done not by NIST but by consultants. I’m not down on consultants for all purposes, mind you – what is having your tires rotated and your oil changed except “using a car consultant”? However, in the area of developing standards or policy guidance, it is of concern, especially when, as has been the case recently, the number of consultants working on a NIST publication or draft document is greater than the number of NIST employees contributing to it. There are often business reasons to use consultants. But you cannot, should not, and must not “outsource” a core mission – if you do, why is it your mission at all? This is true in spades for government agencies. Otherwise, there is an entire Beltway’s worth of people just aching to tell you about a problem you didn’t know you had, propose a standard (or regulation) for it, write the standard/regulation, interpret it and “certify” that Other Entities meet it. To use a song title, “Nice Work If You Can Get It.”* Some recent consultant-heavy efforts are all over the map, perhaps because there isn’t a NIST employee to say, “you say po-TAY-to, I say po-TAH-to, let’s call the whole thing off.”** (Or at least make sure the potato standard is Idaho russet – always a good choice.)


Another explanation – not intentionally sinister but definitely a possibility – is that consultants’ business models are often tied to repeat engagements. A short, concise, narrowly tailored and readily understandable standard isn’t going to generate as much business for them as a long, complex, “subject to interpretation – and five people will interpret this six different ways” document.


In short: I really don’t like reading a document like NISTIR 7622 (more on which below) where most of the people who developed it are consultants. NIST’s core mission is standards development: NIST needs to own their core mission and not farm it out.


Son of FISMA


I have no personal experience with the Federal Information Security Management Act of 2002 (FISMA) except the amount of complaining I hear about it secondhand, which is considerable. The gist of the complaints is that FISMA asks people to do a lot of stuff that looks earnestly security-oriented, not all of which is equally important.


Why should we care? To quote myself (in an obnoxiously self-referential way): “time, money and (qualified security) people are always limited.” That is, the more security degenerates into a list of the 3000 things you Must Do To Appease the Audit Gods, the less real security we will have (really, who keeps track of 3000 Must Dos, much less does them? It sounds like a demented Girl Scout merit badge). And, in fact, the one thing you read about FISMA is that many government agencies aren’t actually compliant because they missed a bunch of FISMA checkboxes. Especially since knowledgeable resources (that is, good security people) are limited, it’s much better to do the important things well than to maintain the farce that you can check 3000 boxes, which certainly cannot all be equally important. (It’s not even clear how many of these requirements contribute to actual security as opposed to supporting the No Auditor Left Behind Act.)


If the scuttlebutt I hear is accurate, the only thing that could make FISMA worse is – you guessed it – adding more checkboxes. It is thus with considerable regret that I heard recently that NIST updated NIST Special Publication 800-53 (which NIST produces as part of its statutory responsibilities under FISMA). The Revision 4 update included more requirements in the areas of supply chain risk management and software assurance and trustworthiness. Now why would I, a maven of assurance, object to this? Because (a) we already have actual standards around assurance, (b) having FISMA-specific requirements means that pretty much every piece of Commercial Off-the-Shelf (COTS) software will have to be designed and built to be FISMA compliant or COTS software/hardware vendors can’t sell into the Federal government, and (c) we don’t want a race by other governments to come up with competing standards, to the point where we’re checking not 3000 but 9000 or 12000 boxes and probably can’t come up with a single piece of globally sellable COTS, let alone one that meets all 12000 requirements. (Another example is the set of supply chain/assurance requirements in the telecom sector in India, which include (a) asking for details about country of origin and (b) specific contractual terms that buyers anywhere in the supply chain are expected to use. An unintended result is that a vendor will need to (a) disclose sensitive supply chain data (which itself may be a trade secret) and (b) modify processes around global COTS to sell into one country.)


Some of the new NIST guidance is problematic for any COTS supplier. To provide one example, consider:


“The artifacts generated by these development activities (e.g., functional specifications, high-level/low-level designs, implementation representations [source code and hardware schematics], the results from static/dynamic testing and code analysis (emphasis mine)) can provide important evidence that the information systems (including the components that compose those systems) will be more reliable and trustworthy. Security evidence can also be generated from security testing conducted by independent, accredited, third-party assessment organizations (e.g., Common Criteria Testing Laboratories (emphasis mine), Cryptographic/Security Testing Laboratories, and other assessment activities by government and private sector organizations.)”


For a start, to the extent that components are COTS, such “static testing” is certainly not going to be done by a third party, nor will the results be provided to a customer. Once you allow random customers – especially governments – access to your source code or to static analysis results, you might as well gift wrap your code and send it to a country that engages in industrial espionage, because no vendor, having agreed to this for one government, will ever be able to say no to Nation States That Steal Stuff. (And static analysis results, to the extent some vulnerabilities are not fixed yet, just provide hackers a road map for how and where to break in.) Should vendors do static analysis themselves? Sure, and many do. It’s fair for customers to ask whether this is done, and how a supplier ensures that the worst stuff is fixed before the supplier ships product. But it is worth noting – again – that if these tools were easy to use and relatively error-free, everyone would have reached a high level of tool-usage maturity years ago. Using static analysis tools is like learning Classical Greek – very hard, indeed. (OK, Koine Greek isn’t too bad, but Homeric Greek or Linear B, fuhgeddabout it.)
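
To make both points concrete – that findings are sensitive, and that real tools take effort to use – here is a deliberately tiny, purely illustrative sketch in Python of what a static check does. It is my own toy example, not any real analyzer: a genuine tool runs thousands of far subtler checks, and its raw findings still need skilled triage.

import ast
import sys

# Illustrative-only list of calls to flag; a real analyzer applies thousands
# of far subtler rules, and its raw findings still need human triage.
RISKY_CALLS = {"eval", "exec"}

def scan(path):
    """Return (file, line, call name) for each flagged call in a Python file."""
    with open(path) as source:
        tree = ast.parse(source.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((path, node.lineno, node.func.id))
    return findings

if __name__ == "__main__":
    for target in sys.argv[1:]:
        for path, line, name in scan(target):
            # Each line of output is exactly the kind of road map to unfixed
            # weaknesses that a vendor would not want circulating.
            print("%s:%d: call to %s() - review for untrusted input" % (path, line, name))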


With reference to the Common Criteria (CC), the difficulty now is that vendors have a much harder time doing CC evaluations than in the past because of other forces narrowing CC evaluations into a small set of products that have Protection Profiles (PPs). The result has been – and will be for the foreseeable future – fewer evaluated products. The National Information Assurance Partnership (NIAP) – the US evaluation scheme – has ostensibly good reasons for its “narrowed/focused” CC direction. But it is more than a little ironic that the NIST 800-53 revision should mention CC evaluations as an assurance measure at a time when the pipeline of evaluated products is shrinking, in large part due to the directions taken by another government entity (NIAP). What is industry to make of this apparent contradiction? Besides corporate head-scratching, that is.


There are other – many other – sections I could comment upon, but one sticks out as worthy of notice:


“Supply chain risk is part of the advanced persistent threat (APT).”


It’s bad enough that “supply chain risk” is such a vague term that it encompasses basically any and all risk of buying from a third party. (Including “buying a crummy product” which is not actually a supply chain-specific risk but a risk of buying any and all products.) Can bad guys try to corrupt the supply chain? Sure. Does that make any and all supply chain risks “part of APT?” Heck, no. We have enough hysteria about supply chain risk and APT without linking them together for Super-Hysteria. 


To sum up, I don’t disagree that customers in some cases – and for some, not all, applications – may want higher levels of assurance or have a heightened awareness of cyber-specific supply chain threats (e.g., counterfeiting and deliberate insertion of malware in code). However, incorporation of supply chain provisions and assurance requirements into NIST 800-53 has the unintended effect of making any and all COTS products sold to government agencies – which is pretty much all of them, as far as I know – subject to FISMA.


What if the state of Idaho decided that every software vendor had to attest that No Actual Moose were harmed during the production of its software and that any moose used in code production had background checks? What if every other state enumerated specific assurance requirements and specific supply chain risk management practices? What if they conflicted with each other, or with the NIST 800-53 requirements? I mean really, why are these specific requirements called out in NIST 800-53 at all? There really aren’t that many ways to build good software. FISMA as interpreted by NIST 800-53 really, really shouldn’t roll its own.


IT Came from Outer Space – NISTIR 7622


I’ve already opined at length about how bad the NIST Interagency Report (NISTIR) 7622 is. I had 30 pages of comments on the first 80-page draft. The second draft only allowed comments in Excel-spreadsheet form: “Section A.b, change ‘must’ to ‘should,’ for the reason ‘because must is impossible’” and so on. This format didn’t allow for wholesale comments such as “it’s unclear what problem this section is trying to solve, and it represents overreach, fuzzy definition and fuzzier thinking.” NISTIR 7622 was and is so dreadful that an industry association signed a letter that said, in effect, NISTIR 7622 was not salvageable, couldn’t be edited into something that could work, and needed to be scrapped in toto.


I have used NISTIR 7622 multiple times as a negative example: most recently, to an audience of security practitioners as to why they need to be aware of what regulations are coming down the pike and speak up early and often.  I also used it in the context of a (humorous) paper I did at the recent RSA Conference with a colleague, the subject of which was described as “doubtless-well-intentioned legislation/regulation-that-has-seriously-unfortunate-yet-doubtless-unintended-consequences.”  That’s about as tactful as you can get.


Alas, Dracula does rise from the grave,*** because I thought I heard noises at a recent Department of Homeland Security event that NISTIR 7622 was going to move beyond “good advice” and morph into a special publication. (“Run for your lives, store up garlic and don’t go out after dark without a cross!”)  The current version of NISTIR 7622 – after two rounds of edits and heaven knows how many thousands of hours of scrutiny – is still unworkable, overscoped and completely unclear: you have a better chance of reading Linear B**** than understanding this document (and for those who don’t already know, Linear B is not merely “all Greek to me” – it’s actually all Greek to anybody).  Ergo, NISTIR 7622 needs to die the true death: the last thing anyone should do with it is make a special publication out of it. It’s doubling down on dreck. Make it stop. Now. Please.


NIST RFI


The last section is, to be fair, not really about NIST per se. NIST has been tasked, by virtue of a recent White House Executive Order, with developing a framework for improving cybersecurity. As part of that tasking, NIST has published a Request For Information (RFI) seeking industry input on said framework. NIST has also scheduled several meetings to actively draw in thoughts and comments from those outside NIST. As a general rule, and NISTIR 7622 notwithstanding, NIST is very good at eliciting and incorporating feedback from a broad swath of stakeholders. It’s one of their strengths and one of the things I like about them. More importantly, I give major kudos to NIST and its Director Pat Gallagher for forcefully making the point that NIST would not interfere with IT design, development and manufacture, in the speech he gave when he kicked off NIST’s work on the Framework: “the Framework must be technology neutral and it must enable critical infrastructure sectors to benefit from a competitive [technology] market. (…) In other words, we will not be seeking to tell industry how to build your products or how to run your business.”


The RFI responses are posted publicly and are, well, all over the map.  What is concerning to me is the apparent desire of some respondents to have the government tell industry how to run their businesses. More specifically, how to build software, how to manage supply chain risk, and so forth. No, no, and no. (Maybe some of the respondents are consultants lobbying the government to require businesses to hire these consultants to comply with this or that mandate.)


For one thing, “security by design” concepts have already been working their way into development for a number of years: many companies are now staking their reputations on the security of their products and services. Market forces are working. Also, it’s a good time to remind people that more transparency is reasonable – for example, to enable purchasers to make better risk-based acquisition decisions – but when you buy COTS you don’t get to tell the provider how to build it. That’s called “custom code” or “custom development.” Similarly, I don’t get to walk into <insert name of low-end clothing retailer here> and tell them that I expect my “standard off-the-shelf blue jeans” to be tailored, ex post facto, to me specifically, made of “organic, local and sustainable cotton” (leaving aside the fact that nobody grows cotton in Idaho), oh, and embroidered with not merely rhinestones but diamonds. The retailer’s response should be “pound sand/good luck with that.” It’s one thing to ask your vendor “tell me what you did to build security into this product” and “tell me how you help mitigate counterfeiting,” but something else for a non-manufacturing entity – the government – to dictate exactly how industry should build products and manage risk. Do we really want the government telling industry how to build products? Further, do we really want a US-specific set of requirements for how to build products for a global marketplace? What’s good for the (US) goose is good for the (European/Brazilian/Chinese/Russian/Indian/Korean/name your foreign country) gander.


An illustrative set of published responses to the NIST RFI – and my response to the response – follows:


1. “NIST should likewise recognize that Information Technology (IT) products and services play a critical role in addressing cybersecurity vulnerabilities, and their exclusion from the Framework will leave many critical issues unaddressed.”


Comment: COTS is general-purpose software and not built for all threat environments. If I take my regular old longboard and attempt to surf Maverick’s on a 30-foot day and “eat it,” as I surely will – not merely because of my lack of preparation for 30-foot waves but because you need, as every surfer knows, a “rhino chaser” or “elephant gun” board for those conditions – is it the longboard shaper’s fault? Heck, no. No surfboard is designed for all surf conditions; neither is COTS designed for all threat environments. Are we going to insist on products designed for one-size-fits-all threat conditions? If so, we will all, collectively, “wipe out.” (Can’t surf small waves well on a rhino chaser. Can’t walk the board on one, either.)


Nobody agrees on what, precisely, constitutes critical infrastructure. Believe it or not, some governments appear to believe that social media should be part of critical national infrastructure. (Clearly, the World As We Know It will come to an end if I can’t post a picture of my dog Koa on Facebook.) And even if certain critical infrastructure functions – say, power generation – depend on COTS hardware and software, the surest way to weaken their security is to apply an inflexible and country-specific regulatory framework to that COTS hardware and software. We have an existing standard for the evaluation of COTS IT – the Common Criteria (see below): let’s use it rather than reinvent the digital wheel.


2. “Software that is purchased or built by critical infrastructure operators should have a reasonable protective measures applied during the software development process.”


Comment: Thus introducing an entirely new and undefined term into the assurance lexicon: “protective measures.” I’ve worked in security – actually, the security of product development – for 20 years, and I have no idea what this means. Does it mean that every product should self-defend? I confess, I rather like the idea of applying the Marine Corps ethos – “every Marine a rifleman” – to commercial software. Every product should understand when it is under attack, and every product should self-defend. It is a great concept, but we do not, as an industry, know how to do that – yet. Does “protective measures” mean “quality measures”? Does it mean “standard assurance measures”? Nobody knows. And any term that is this nebulous will be interpreted by every reader as Something Different.


3. “Ultimately, <Company X> believes that the public-private establishment of baseline security assurance standards for the ICT industry should cover all key components of the end-to-end lifecycle of ICT products, including R&D, product development, procurement, supply chain, pre-installation product evaluation, and trusted delivery/installation, and post-installation updates and servicing.”


Comment: I can already see the religious wars over tip-of-tree vs. waterfall vs. agile development methodologies. There is no single development methodology, and no single set of assurance practices, that will work for every organization (for goodness’ sake, you can’t even find a single vulnerability analysis tool that works well against all code bases).


Too many in government and industry cannot express concerns or problem statements in simple, declarative sentences, if at all. They don’t, therefore, have any business attempting to standardize how all commercial products are built (what problem will this solve, exactly?). Also, if there is an argument for baseline assurance requirements, it certainly can’t be for everything – or are we arguing that “FindEasyRecipes.com” is critical infrastructure and needs to be built to withstand hostile nation-state attacks that attempt to steal your brioche recipe, if not your tips on how to get sugar to caramelize at altitude?


4. “Application of this technique to the Common Criteria for Information Technology Security Evaluation revealed a number of defects in that standard. The journal Information and Software Technology will soon publish an article describing our technique and some of the defects we found in the Common Criteria.”


Comment: Nobody ever claimed the Common Criteria was perfect. What it does have going for it is that (a) it’s an ISO standard and (b) by virtue of the Common Criteria Recognition Arrangement (CCRA), evaluating once against the Common Criteria gains you recognition in 20-some other countries. Putting it differently, the quickest way to make security much, much worse is to have a Balkanization of assurance requirements. (Taking a horse and jumping through mauve, pink, and yellow hoops doesn’t make the horse any better, but it does enrich the hoop manufacturers, quite nicely.) In the security realm, doing the same thing four times doesn’t give you four times the security; it reduces security by a factor of four, as limited (skilled) resources go to doing the same thing four different ways. If we want better security, improve the Common Criteria. And, by the way, major IT vendors and the Common Criteria national schemes – which come from each CCRA member country’s information assurance agency, like the NSA in the US – have been hard at work for the last few years applying their considerable security expertise and resources to do just that. Having state-by-state or country-by-country assurance requirements will make security worse – much, much worse.


5. “…vendor adoption of industry standard security models. In addition, we also believe that initiatives to motivate vendors to more uniformly adopt vulnerability and log data categorization, reporting and detection automation ecosystems will be a significant step in ensuring security tools can better detect, report and repair security vulnerabilities.”


Comment: There are so many flaws in this, one hardly knows where to start. There is already an industry-standard vulnerability “scoring” system – the Common Vulnerability Scoring System (CVSS)***** – though there are some challenges with it, such as the fact that the value of the data compromised should make a difference in the score: a “breach” of Aunt Gertrude’s Whiskey Sauce Recipe is not, ceteris paribus, as dire as a breach of Personally Identifiable Information (PII), if for no other reason than that a company can incur large fines for the latter, far exceeding Aunt Gertrude’s displeasure at the former. Even if she cuts you out of her will.
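
For readers who haven’t looked at the arithmetic, here is a minimal sketch in Python of the CVSS version 2 base-score equations, using the metric weights from the published v2 specification. This is my own illustration, not official reference code, and it covers the base score only (no temporal or environmental adjustments). The point it makes: every input is a technical metric, and nothing in the base score reflects what the compromised data is actually worth.

# Minimal sketch of the CVSS v2 base-score equations; metric weights per the
# published v2 specification. Base score only - no temporal/environmental terms.
AV  = {"L": 0.395, "A": 0.646, "N": 1.0}    # access vector
AC  = {"H": 0.35,  "M": 0.61,  "L": 0.71}   # access complexity
AU  = {"M": 0.45,  "S": 0.56,  "N": 0.704}  # authentication
CIA = {"N": 0.0,   "P": 0.275, "C": 0.660}  # confidentiality/integrity/availability impact

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# AV:N/AC:L/Au:N/C:P/I:P/A:P comes out at 7.5 whether the "partial"
# confidentiality loss is a whiskey sauce recipe or a table full of PII.
print(cvss2_base("N", "L", "N", "P", "P", "P"))  # prints 7.5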


Also, there is work going on to standardize descriptions of product vulnerabilities (that is, the format and type). However, not all vendors release the exact same amount of information when they announce security vulnerabilities, and they should not be required to. Oracle believes that it is not necessary to release either exploit code or the exact type of vulnerability (e.g., buffer overflow, cross-site request forgery (CSRF), or the like), because this information does not help customers decide whether to apply a patch: it merely enables hackers to break into things faster. Standardize how you refer to particular advisory bulletin elements and make them machine-readable? Sure. Insist on dictating business practices (e.g., how much information to release)? Heck, no. That’s between a vendor and its customer base. Lastly, security tools cannot, in general, “repair” security vulnerabilities – typically, only applying a patch can do that.
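
To make the distinction concrete – standardized, machine-readable bulletin elements versus mandated levels of disclosure – here is an invented example of an advisory entry. It is a hypothetical structure of my own (not CVRF, OVAL, or any other actual schema, and every identifier in it is a placeholder): it carries what a customer needs for a patch decision and deliberately omits exploit code and the exact vulnerability class.

import json

# Hypothetical, illustrative-only advisory entry; all identifiers are placeholders.
advisory_entry = {
    "advisory_id": "EXAMPLE-2013-001",              # placeholder identifier
    "cve_ids": ["CVE-2013-0000"],                   # placeholder CVE reference
    "cvss_v2_base_score": 7.5,
    "cvss_v2_vector": "AV:N/AC:L/Au:N/C:P/I:P/A:P",
    "affected_products": ["Example Server 11.2"],   # placeholder product/version
    "remediation": "Apply the patch referenced in the advisory",
    "workaround_available": False,
    # Deliberately absent: exploit code, proof of concept, vulnerability class.
}

print(json.dumps(advisory_entry, indent=2))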


6. “All owners and operators of critical infrastructure face risk from the supply chain. Purchasing hardware and software potentially introduce security risk into the organization. Creation of a voluntary vendor certification program may help drive innovation and better security in the components that are essential to delivery of critical infrastructure services.”


Comment:  The insanity of the following comment astounds: “Purchasing hardware and software potentially introduce security risk into the organization.” News flash: all business involves “risk.” Not doing something is a risk. So, what else is new? Actually, attempting to build everything yourself also involves risk – not being able to find qualified people, the cost (and ability) to maintain a home-grown solution, and so forth. To quote myself again: “Only God created something from nothing: everyone else has a supply chain.”****** In short, everyone purchases something from outside their own organization. Making all purchases into A Supply Chain Risk as opposed to, say, a normal business risk is silly and counterproductive.  It also makes it far less likely that specific, targeted supply chain threats can be addressed at all if “buying something – anything – is a risk” is the threat definition.


At this point, I think I’ve said enough. Maybe too much. Again, I appreciate NIST as an organization and, as I said above, the direction they have set for the Framework (not to $%*& with IT innovation) is really to their credit. I believe NIST needs to in-source more of their standards/policy development, because it is their core mission and because consultants have every incentive to create perpetual work for themselves (and none whatsoever to be precise and focused). NIST should adopt a less-is-more mantra vis-à-vis security. It is better to ask organizations to do a few critical things well than to ask them to do absolutely everything – with not enough resources (which is a collective industry problem and not one likely to be solved any time soon). Lastly, we need to remember that we are a proud nation of innovators. Governments generally don’t do well when they tell industry how to do its core mission – innovate – and, absent a truly compelling public policy argument for so doing, they shouldn’t try.


*“Nice Work If You Can Get It,” lyrics by Ira Gershwin, music by George Gershwin. Don’t you just love Gershwin?


** “Let’s Call The Whole Thing Off.” Another gem by George and Ira Gershwin.


*** Which reminds me – I really hate the expression “there are no silver bullets.” Of course there are silver bullets. How many vampires and werewolves do you see wandering around?


****Speaking of which, I just finished a fascinating if short read: The Man Who Deciphered Linear B: The Story of Michael Ventris.


*****CVSS is undergoing revision.


****** If you believe the account in Genesis, that is.

