
Feed aggregator

Showing Interval Partitions Code in DBMS_METADATA.GET_DDL

Pakistan's First Oracle Blog - Tue, 2015-03-17 22:32

-- If you want to display the system-generated partitions as part of the CREATE TABLE DDL, set the EXPORT transform parameter of DBMS_METADATA to TRUE.

-- The default behavior of "DBMS_METADATA.GET_DDL" is that it does not show Interval Partitions created by the system for interval partitioned tables and indexes.

-- With interval partitioning, new partitions are created automatically when a corresponding row is inserted. This newly created partition information is displayed in the "DBA_TAB_PARTITIONS" dictionary view; however, when the DDL is queried using the "DBMS_METADATA.GET_DDL" function, this information is not shown.

Demo: (The following was tested on Oracle 12c, and it should be valid for Oracle 11g too.)

-- Create table with interval partition.

CREATE TABLE mytabwithInterval
(mydate DATE,
 mynum NUMBER)
PARTITION BY RANGE (mydate)
INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
(PARTITION p_20150301 VALUES LESS THAN (TO_DATE('2015-03-01','YYYY-MM-DD')));

-- Insert some data to generate interval partitions.

INSERT INTO mytabwithInterval VALUES (TO_DATE('2015-01-01', 'YYYY-MM-DD'),1);

INSERT INTO mytabwithInterval VALUES (TO_DATE('2015-02-01', 'YYYY-MM-DD'),2);

INSERT INTO mytabwithInterval VALUES (TO_DATE('2015-03-01', 'YYYY-MM-DD'),3);

INSERT INTO mytabwithInterval VALUES (TO_DATE('2015-04-01', 'YYYY-MM-DD'),3);

-- check partition information in dictionary table

col partition_name format a20
select partition_name from user_tab_partitions where table_name='MYTABWITHINTERVAL';

-- To see the default behavior of dbms_metadata:

set long 100000
set pagesize 50
col DDL format a120
select dbms_metadata.get_ddl('TABLE','MYTABWITHINTERVAL') DDL from dual;


-- To see it with the EXPORT option:

exec dbms_metadata.set_transform_param(dbms_metadata.SESSION_TRANSFORM,'EXPORT',true);
select dbms_metadata.get_ddl('TABLE','MYTABWITHINTERVAL') DDL from dual;


Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> set lines 181
SQL> set pages 100
SQL> CREATE TABLE mytabwithInterval
(mydate DATE,
 mynum NUMBER)
  2    3    4    5 
Table created.

SQL> INSERT INTO mytabwithInterval VALUES (TO_DATE('2015-01-01', 'YYYY-MM-DD'),1);

INSERT INTO mytabwithInterval VALUES (TO_DATE('2015-02-01', 'YYYY-MM-DD'),2);

INSERT INTO mytabwithInterval VALUES (TO_DATE('2015-03-01', 'YYYY-MM-DD'),3);

INSERT INTO mytabwithInterval VALUES (TO_DATE('2015-04-01', 'YYYY-MM-DD'),3);
1 row created.

1 row created.

1 row created.

1 row created.


Commit complete.

SQL> col partition_name format a20
SQL> select partition_name from user_tab_partitions where table_name='MYTABWITHINTERVAL';



SQL> set long 100000
set pagesize 50
col DDL format a120



   (    "MYDATE" DATE,
 (PARTITION "P_20150301"  VALUES LESS THAN (TO_DATE(' 2015-03-01 00:00:00', 'SYY

SQL> exec dbms_metadata.set_transform_param(dbms_metadata.SESSION_TRANSFORM,'EXPORT',true);

PL/SQL procedure successfully completed.



   (    "MYDATE" DATE,
 (PARTITION "P_20150301"  VALUES LESS THAN (TO_DATE(' 2015-03-01 00:00:00', 'SYY
 PARTITION "SYS_P561"  VALUES LESS THAN (TO_DATE(' 2015-04-01 00:00:00', 'SYYYY-
 PARTITION "SYS_P562"  VALUES LESS THAN (TO_DATE(' 2015-05-01 00:00:00', 'SYYYY-

Categories: DBA Blogs

Compatibility Mode for Oracle Utilities Products

Anthony Shorten - Tue, 2015-03-17 18:04

Customers wishing to run Oracle Utilities products on Internet Explorer versions greater than Internet Explorer 8 must use compatibility mode to bridge the vastly different levels of standards support across the different versions of Internet Explorer.

There are a number of ways to do this:

  • In IE9 and IE10 you can open the site and then click the Compatibility Mode icon next to the URL to add the site's URL to the compatibility list. Refer to the article from Microsoft on this.
  • In IE11 you can access the Compatibility Mode Settings from the Tools menu and add the URL directly. Refer to this help page for details.
  • Customers using Microsoft's enterprise utilities, such as IEAK, or others such as Group Policy can prepopulate this list using Enterprise Mode. Refer to this document for details of this process.

Use of compatibility mode ensures that a wide range of Internet Explorer versions can work with Oracle Utilities applications, allowing customers to migrate their sites to newer versions as necessary.

This article is also available from My Oracle Support as Running Oracle Utilities Applications in Internet Explorer Compatibility Mode (Doc Id: 1997802.1).

ADF BC Property Set to Group Attribute Hints

Andrejus Baranovski - Tue, 2015-03-17 12:35
There is one ADF BC component not mentioned often - Property Set. This is a bag of properties/hints, it can be applied for EO/VO attributes. Frequently used hints can be defined in Property Set once, without repeating the same for each attribute. This simplifies maintenance, if there will be a change required in the future - much easier to change it in single place. I will show below a practical example, where Property Set is applied for EO attribute to define a date type.

You can create Property Set with ADF BC wizard:

Property Set creation is really simple - provide a name and package:

It comes with a section to define Custom Properties. I will add the two properties required to set the date format - FMT_FORMAT and FMT_FORMATTER. I will be using the EU format for the date (dd-MM-yyyy). The advantage of a Property Set is that you can define multiple sets for different date formats and apply them as required:

Select an EO/VO attribute and choose the Property Set; it will appear in the choice list:

The attribute is assigned a domain pointing to the Property Set:

This is how it looks in the UI - the Hire Date field is correctly formatted, based on the format mask defined in the Property Set:

Download sample application -

More notes on HBase

DBMS2 - Tue, 2015-03-17 12:13

1. Continuing from last week’s HBase post, the Cloudera folks were fairly proud of HBase’s features for performance and scalability. Indeed, they suggested that use cases which were a good technical match for HBase were those that required fast random reads and writes with high concurrency and strict consistency. Some of the HBase architecture for query performance seems to be:

  • Everything is stored in sorted files. (I didn’t probe as to what exactly the files were sorted on.)
  • Files have indexes and optional Bloom filters.
  • Files are marked with min/max field values and time stamp ranges, which helps with data skipping.
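The min/max marking in the last bullet is a generic data-skipping technique; here is a toy sketch (plain Python for illustration, not HBase code) of how per-file min/max metadata lets a scan skip files entirely:

```python
# Each "file" carries min/max metadata for a timestamp field; a range scan
# only needs to open files whose [min, max] range overlaps the query range.
files = [
    {"name": "f1", "min_ts": 100, "max_ts": 199},
    {"name": "f2", "min_ts": 200, "max_ts": 299},
    {"name": "f3", "min_ts": 300, "max_ts": 399},
]

def files_to_scan(files, lo, hi):
    """Return the files a scan over [lo, hi] cannot skip."""
    return [f["name"] for f in files if f["max_ts"] >= lo and f["min_ts"] <= hi]

print(files_to_scan(files, 250, 350))  # ['f2', 'f3'] -- f1 is skipped outright
```

The metadata is tiny compared with the files it prunes, which is what makes this kind of skipping cheap.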

Notwithstanding that a couple of those features sound like they might help with analytic queries, the base expectation is that you’ll periodically massage your HBase data into a more analytically-oriented form. For example — I was talking with Cloudera after all — you could put it into Parquet.

2. The discussion of which kinds of data are originally put into HBase was a bit confusing.

  • HBase is commonly used to receive machine-generated data. Everybody knows that.
  • Cloudera drew a distinction between:
    • Straightforward time series, which should probably just go into HDFS (Hadoop Distributed File System) rather than HBase.
    • Data that is bucketed by entity, which likely should go into HBase. Examples of entities are specific users or devices.
  • Cloudera also reminded me that OpenTSDB, a popular time series data store, runs over HBase.

OpenTSDB, by the way, likes to store detailed data and aggregates side-by-side, which resembles a pattern I discussed in my recent BI for NoSQL post.
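That detail-plus-aggregate pattern can be sketched generically (plain Python, illustrative only - not OpenTSDB's actual row-key scheme): raw points are written, and an hourly rollup is maintained alongside them.

```python
from collections import defaultdict

raw = defaultdict(list)      # metric -> [(ts, value)] detail rows
rollup = defaultdict(float)  # (metric, hour_start) -> hourly sum, kept alongside

def write_point(metric, ts, value):
    raw[metric].append((ts, value))
    rollup[(metric, ts - ts % 3600)] += value  # aggregate stored next to the detail

for t, v in [(10, 1.0), (20, 2.0), (3700, 5.0)]:
    write_point("cpu", t, v)

print(rollup[("cpu", 0)])  # 3.0 -- the two detail rows in hour 0, pre-summed
```

Readers that only need trends hit the rollup rows; drill-downs still have the raw points.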

3. HBase supports caching, tiered storage, and so on. Cloudera is pretty sure that it is publicly known (I presume from blog posts or conference talks) that: 

  • Pinterest has a large HBase database on SSDs (Solid-State Drives), a large fraction of which is actually in RAM.
  • eBay has an HBase database largely on spinning disk, used to inform its search engine.

Cloudera also told me of a Very Famous Company that has many 100s of HBase nodes managing petabytes of mobile device data. That sounds like multiple terabytes per node even before considering a replication factor, so I presume it’s disk-based as well. The takeaway from those examples, other than general big-use-case impressiveness, is that storage choices for HBase can vary greatly by user and application.
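The back-of-envelope behind "multiple terabytes per node" looks like this (both figures below are assumptions for the arithmetic, not disclosed numbers):

```python
petabytes = 2    # assume ~2 PB of mobile device data
nodes = 500      # assume "many 100s" means ~500 nodes
tb_per_node = petabytes * 1000 / nodes
print(tb_per_node)  # 4.0 TB per node, before any replication factor
```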

4. HBase has master/master geographically remote replication. I gather that Yahoo replicates between a couple of 1000-node clusters, on behalf of its Flurry operation. HBase also has the technical capability to segment data across geographies — i.e., the geo-partitioning feature essential to data sovereignty compliance — but no actual implementations came to mind.

5. Besides the ones already mentioned, and famed HBase user Facebook, a few other users came up.

  • It seems to be common for ad-tech companies to store in HBase the data that arrives from many different computers and mobile devices.
  • An agency that Cloudera didn’t name, but which is obviously something like the SEC or CFTC, stores all trade data in HBase.
  • Cerner — or perhaps its software — stores data in HBase on a patient-by-patient basis.

In general, Cloudera suggested that HBase was used in a fair number of OEM situations.

6. Finally, I have one number: As of January, 2014 there were 20,000 HBase nodes managed by Cloudera software. Obviously, that number is growing very quickly, and of course there are many HBase nodes that Cloudera has nothing to do with.

Related link

  • A lot of this echoes what I hear from DataStax (December, 2013), notwithstanding the consensus that HBase and Cassandra rarely compete in the marketplace.
Categories: Other

Blackboard Brain Drain: One third of executive team leaves in past 3 months

Michael Feldstein - Tue, 2015-03-17 10:02

By Phil Hill

In August 2013 Michael described Ray Henderson’s departure from an operational role at Blackboard. As of the end of 2014, Ray is no longer on the board of directors at Blackboard either. He is focusing on his board activity (including In The Telling, our partner for e-Literate TV) and helping with other ed tech companies. While Ray’s departure from the board did not come as a surprise to me, I have been noting the surprising number of other high-level departures from Blackboard recently.

As of December 24, 2014, Blackboard listed 12 company executives on their About > Leadership page. Of those 12 people, 4 have left the company since early January. Below is the list of the leadership team at that time, along with notes on changes:

  • Jay Bhatt, CEO
  • Maurice Heiblum, SVP Higher Education, Corporate And Government Markets (DEPARTED February, new job unlisted)
  • Mark Belles, SVP K-12 (DEPARTED March, now President & COO at Teaching Strategies, LLC)
  • David Marr, SVP Transact
  • Matthew Small, SVP & Managing Director, International
  • Gary Lang, SVP Product Development, Support And Cloud Services (DEPARTED January, now VP B2B Technology, Amazon Supply)
  • Katie Blot, SVP Educational Services (now SVP Corporate Strategy & Business Development)
  • Mark Strassman, SVP Industry and Product Management
  • Bill Davis, CFO
  • Michael Bisignano, SVP General Counsel, Secretary (DEPARTED February, now EVP & General Counsel at CA Technologies)
  • Denise Haselhorst, SVP Human Resources
  • Tracey Stout, SVP Marketing

Beyond the leadership team, there are three others worth highlighting.

  • Brad Koch, VP Product Management (DEPARTED January, now at Instructure)
  • David Ashman, VP Chief Architect, Cloud Architecture (DEPARTED February, now CTO at Teaching Strategies, LLC)
  • Mark Drechsler, Senior Director, Consulting (APAC) (DEPARTED March, now at Flinders University)

I mentioned Brad's departure and its significance already in this post. Mark is significant in terms of his influence in the Australian market, as he came aboard with the acquisition of NetSpot.

David is significant as he was Chief Architect and had the primary vision for Blackboard's impending move into the cloud. Michael described this move in his post last July.

Phil and I are still trying to nail down some of the details on this one, particularly since the term “cloud” is used particularly loosely in ed tech. For example, we don’t consider D2L’s virtualization to be a cloud implementation. But from what we can tell so far, it looks like a true elastic, single-instance multi-tenant implementation on top of Amazon Web Services. It’s kind of incredible. And by “kind of incredible,” I mean I have a hard time believing it. Re-engineering a legacy platform to a cloud architecture takes some serious technical mojo, not to mention a lot of pain. If it is true, then the Blackboard technical team has to have been working on this for a long time, laying the groundwork long before Jay and his team arrived. But who cares? If they are able to deliver a true cloud solution while still maintaining managed hosting and self-hosted options, that will be a major technical accomplishment and a significant differentiator.

This seems like the real deal as far as we can tell, but it definitely merits some more investigation and validation. We’ll let you know more as we learn it.

This rollout of new cloud architecture has taken a while, and I believe it is hitting select customers this year. Will David’s departure add risk to this move? I talked to David a few weeks ago, and he said that he was leaving for a great opportunity at Teaching Strategies, and that while he was perhaps the most visible face of the cloud at Blackboard, others behind the scenes are keeping the vision. He does not see added risk. While I appreciate the direct answers David gave me to my questions, I still cannot see how the departure of Gary Lang and David Ashman will not add risk.

So why are so many people leaving? From initial research and questions, the general answer seems to be ‘great opportunity for me professionally or personally, loved working at Blackboard, time to move on’. There is no smoking gun that I can find, and most departures are going to very good jobs.

Jay Bhatt, Blackboard’s CEO, provided the following statement based on my questions.

As part of the natural evolution of business, there have been some transitions that have taken place. A handful of executives have moved onto new roles, motivated by both personal and professional reasons. With these transitions, we have had the opportunity to add some great new executive talent to our company as well. Individuals who bring the experience and expertise we need to truly capture the growth opportunity we have in front of us. This includes Mark Gruzin, our new NAHE/ProEd GTM lead, Peter George, our new head of product development and a new general counsel who will be starting later this month. The amazing feedback we continue to receive from customers and others in the industry reinforces how far we’ve come and that we are on the right path. As Blackboard continues to evolve, our leaders remain dedicated to moving the company forward into the next stage of our transformation.

While Jay’s statement matches what I have heard, I would note the following:

  • The percentage of leadership changes within a 3 month period rises above the level of “natural evolution of business”. Correlation does not imply causation, but neither does it imply a coincidence.
  • The people leaving have a long history in educational technology (Gary Lang being the exception), but I have not seen the same in the reverse direction. Mark Gruzin comes from a background in worldwide sales and IBM's federal software group. Peter George comes from a background in Identity & Access Management as well as Workforce Management companies. They both seem to be heavy hitters, but not in ed tech. Likewise, Jay himself, along with Mark Strassman and Gary Lang, had no ed tech experience when they joined Blackboard. This is not necessarily a mistake, as fresh ideas and approaches were needed, but it is worth noting the stark difference between the people leaving and the people coming in.
  • These changes come in the middle of Blackboard making huge bets on a completely new user experience and a move into the cloud. These changes were announced last year, but they have not been completed. This is the most important area to watch – whether Blackboard completes these changes and successfully rolls them out to the market.

We’ll keep watching and update where appropriate.

The post Blackboard Brain Drain: One third of executive team leaves in past 3 months appeared first on e-Literate.

Use a wallet to encrypt Oracle client passwords

Yann Neuhaus - Tue, 2015-03-17 06:53

In our more and more connected world, security is a big concern. Storing passwords is a very important topic, because if a machine is compromised, we can assume that any password stored in clear text is also compromised. I had a question from a customer: how can we encrypt the passwords in our application and monitoring scripts? In this blog we will discuss one solution: using the Oracle client wallet.

Read Only Item Layout

Denes Kubicek - Tue, 2015-03-17 02:08
The ability to conditionally set items to read-only is one of the important security features in APEX. Using this feature you can secure your applications so that it is not possible to manipulate the protected item (region or page). The downside of this feature is that it changes the layout of the protected items to a kind of standard rendering, ignoring the formatting used in the "normal" mode. If you have a form which looks like this:

and change the description item to read-only mode, the page will look quite different and will be hard to control:

In such cases you can help yourself in different ways. Using some CSS or jQuery, you can make those read-only items appear only slightly different. For example, I used code along the following lines to change the layout of the description item on page load (Px_DESCRIPTION is a placeholder for the actual item ID):

$('#Px_DESCRIPTION').css({"height":"120px","overflow-y":"auto","border":"1px solid grey"});

and the result looked like this:

There are also some other more generic methods:

.display_only {font-weight:bold; color: #ccc; display: block; width: 200px}



The only thing you need to note is that setting an item to read-only mode creates two page elements. The displayed element gets the suffix _DISPLAY added to the corresponding item ID.

Categories: Development

The Benefits of Information Lifecycle Management (ILM)

Anthony Shorten - Mon, 2015-03-16 18:27

In the release of Oracle Utilities Application Framework, a new ILM-based data management solution was implemented to allow customers to manage the increasing cost of storage for their Oracle Utilities products. It is being progressively rolled out across each of the Oracle Utilities products over the next year. With utilities across the world having to deal with larger and larger data volumes over time, the cost of storing that data and of ensuring appropriate business access to it are key concerns. ILM offers tools to address them.

The benefits of using this facility are as follows:

  • ILM is about designing the lifecycle of your transaction data from a storage point of view. The first step in implementing the approach is for the business to define the data retention period for individual objects within the product itself. This defines the active period of the data, in days, during which the business wants active update access to the data. One of the major issues in data storage is getting the business and IT to agree on when the business needs active access to the data; ILM addresses this by making it explicit in configuration.
  • When an ILM-enabled transactional object is created, the data is automatically tracked internally using ILM-specific fields: the date from which the data is considered active (ILM_DT) and a flag indicating the ILM status of the record in business terms (ILM_ARCH_SW). An easy way to remember this is that the ILM date is basically a ticking clock: as soon as the record is created, the ILM clock starts. It tells the product from when to measure the aforementioned retention period for that individual object.
  • As the ILM data is part of the object itself, it can be manipulated by any custom business process if desired. For example, say some customer transaction data is due to expire, but the utility receives a last-minute complaint about that data. It is possible to create a customization that delays the individual records - keeping them active - until the complaint is resolved.
  • A dedicated background process, known as the ILM Crawler, assesses data as it expires, makes sure nothing is outstanding on the object, and records this state in the ILM flag. This tells the IT group that the data is now considered less active by the business, so the IT group can manage the storage using Oracle's ILM facilities.
  • The ILM flag effectively protects the data from any storage-based actions for as long as it is needed by the business. The business can defer the lifecycle of individual objects as needed, and the internal processes maintaining the flag ensure the data is protected while it is still active.
  • The IT group is free to use the capabilities of their storage solutions or the ILM features within the database, such as partitioning, compression and, in Oracle Database 12c, Automatic Data Optimization.
  • The cost savings in terms of storage can be realized in a number of ways:
    • Partitioning - Product tables can be partitioned along data retention boundaries to realize cost savings with tiered storage. For example, placing less active data on lower-cost storage saves costs. Partitioning also means you can use Transportable Tablespaces to quickly remove data that is dormant (not needed by the business at all). Transportable tablespaces are also useful for restoring data.
    • Compression - Oracle offers a full range of compression options: basic compression built into the database, the optional Advanced Compression option, which offers flexible row-level compression optimized for OLTP systems, and the high-performance HCC compression offered by Oracle Exadata.
    • Automatic Data Optimization (ADO)/Heat Map - This allows real-time tracking of data usage regardless of the state or business activity of the data. Coupled with Partitioning and/or Compression, it allows sites to realize additional savings because it is based on real usage rather than expected usage. For example, customers may want to keep two years of data active in the database but, in reality, stop updating it after one year. ADO tracks the last date/time a record was updated and can automatically compress data that has not been updated for a while, which realizes further savings.
  • The tools for the technical definition of ILM are available from the command line, Oracle Database Control, Oracle EM Express, the ILM Assistant or Oracle Enterprise Manager. These are tools that most DBAs and technical people will already be familiar with, so existing skills can be reused.
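The "ticking clock" and crawler behavior described above can be sketched in a few lines (plain Python, illustrative only - the real logic lives inside the Oracle Utilities products, and the retention value here is an assumption): each record carries ILM_DT and ILM_ARCH_SW, and a crawler flags records whose retention period has elapsed.

```python
from datetime import date, timedelta

RETENTION_DAYS = 730  # business-defined retention period (assumed two years)

records = [
    {"id": 1, "ilm_dt": date(2012, 1, 1), "ilm_arch_sw": "N"},  # clock long expired
    {"id": 2, "ilm_dt": date(2015, 3, 1), "ilm_arch_sw": "N"},  # still active
]

def ilm_crawler(records, today):
    """Flag records whose retention has elapsed as eligible for tiered storage."""
    for r in records:
        if r["ilm_arch_sw"] == "N" and today - r["ilm_dt"] > timedelta(days=RETENTION_DAYS):
            r["ilm_arch_sw"] = "Y"  # now considered less active by the business
    return records

ilm_crawler(records, date(2015, 3, 16))
print([r["ilm_arch_sw"] for r in records])  # ['Y', 'N']
```

A business process that needs to keep a record active longer would simply push its ILM_DT forward before the crawler runs, which is the deferral mechanism the bullets describe.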

The ILM-based data management solution for Oracle Utilities is a powerful and flexible solution that balances the business need for data against the storage costs associated with retaining it. It allows sites to realize cost savings whilst retaining appropriate business access to data.

For more information about ILM refer to the ILM Planning Guide (Doc Id: 1682436.1) available from My Oracle Support.

12c: Little test of “TABLE ACCESS INMEMORY FULL” with count stopkey

XTended Oracle SQL - Mon, 2015-03-16 17:02

The table has 9M rows:

SQL> with function f return int is
  2       begin
  3          for r in (select value from v$mystat natural join v$statname where name like 'IM scan rows') loop
  4             dbms_output.put_line(r.value);
  5             return r.value;
  6          end loop;
  7       end;
  8  select f() from t_inmemory where rownum<=1
  9  ;
 10  /


1 row selected.

SQL> /


1 row selected.

SQL> /


1 row selected.
DDL and Plan
create table t_inmemory inmemory as
with gen as (select 0 id from dual connect by level<=3e3)
select 0 n from gen,gen;

SQL_ID  cpgrrfv9h6m52, child number 0
with function f return int is      begin         for r in (select value
from v$mystat natural join v$statname where name like 'IM scan rows')
loop            dbms_output.put_line(r.value);            return
r.value;         end loop;      end; select f() from t_inmemory where

Plan hash value: 3697881339

| Id  | Operation                   | Name       | Rows  | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT            |            |       |     3 (100)|          |
|*  1 |  COUNT STOPKEY              |            |       |            |          |
|   2 |   TABLE ACCESS INMEMORY FULL| T_INMEMORY |     1 |     3   (0)| 00:00:01 |

Predicate Information (identified by operation id):

   1 - filter(ROWNUM<=1)


Categories: Development

Use a #db12c PDB for the #em12c OMR…Finally

DBASolved - Mon, 2015-03-16 15:57

In an earlier post, I mentioned that Oracle has finally, officially supported the Oracle Management Repository (OMR) on Database 12c. As I've been working on a DBaaS project, I built a new Oracle Enterprise Manager (OEM) environment to test out a few things. Since I was rebuilding, I decided to try out the PDB as an OMR (after all, I've been asking about this approach). In the past, installation would fail around 63%. This time around, OEM installed in a PDB with no issue at all!

To verify that the OMR was actually in a PDB, I had to dig around a bit. During the installation, I had to provide connection information. This configuration can be verified by going to the Management Services and Repository page within OEM. Once on this page, the Repository Details section (see image) will show you the database name and the connection string used. The connection string identifies the PDB being used in the Service Name part of the connection.

In the image, you can see that I'm using a database named MGMT, yet the connection string is going to a different service name. OEM1 is the PDB that is running under the MGMT consolidated database (see image).

What makes this work this time around? There are a few patches that have to be applied to the Oracle Database before the PDB can be used for the OMR. These patches - 19769480, 19877336, and 20243268 - are required (more details here). They also require a recent OPatch, which can be downloaded from MOS Note: 274526.1 (Patch 6880880).

Overall, Oracle did a good job in resolving this issue and giving us the ability to host OEM in a PDB. A few questions come to mind now. Here are just a few that I have:

1. What use cases will come out of this?  
2. How will performance of OEM look in using a PDB?  
3. How will licensing change with OEM and PDBs?

Anyways, give it a try!  


Filed under: OEM
Categories: DBA Blogs

DCLI to back up Oracle home and inventory before patch

Bobby Durrett's DBA Blog - Mon, 2015-03-16 15:48

I was preparing for my weekend patch of our Exadata system and needed to back up all of our Oracle homes and inventories on our production system. On our 2-node dev and QA clusters I just ran the backups by hand like this:

login as root

cd /u01/app/oracle/product/

tar -cvf - dbhome_1 | gzip > dbhome_1-20150211.tgz

cd /u01/app

cp -r oraInventory oraInventory.20150211

But the production cluster has 12 nodes, so I had to figure out how to use DCLI to run the equivalent on all 12 nodes instead of doing them one at a time. To run a DCLI command you need to go to the directory that has the list of database server host names. So, first you do this:

login as root

cd /opt/oracle.SupportTools/onecommand

The file dbs_group contains a list of the database server host names.

Next, I wanted to check how much space was free on the filesystem and how much space the Oracle home occupied so I ran these commands:

dcli -g dbs_group -l root "df|grep u01"

dcli -g dbs_group -l root "cd /u01/app/oracle/product/;du -ks ."

The first command gave me how much space was free on the /u01 filesystem on all database nodes. The second command gave me how much space the home consumed. I should have done "du -ks dbhome_1" since I'm backing up dbhome_1 rather than everything under that directory, but there wasn't much else under it, so it worked out.

Now that I knew that there was enough space I ran the backup commands using DCLI.

dcli -g dbs_group -l root "cd /u01/app/oracle/product/;tar -cvf - dbhome_1 | gzip > dbhome_1-20150316.tgz"

dcli -g dbs_group -l root "cd /u01/app;cp -r oraInventory oraInventory.20150316"

I keep forgetting how to do this, so I thought I would post it. I can refer back to it later, and perhaps it will be helpful to others.
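Conceptually, dcli just fans the same command out over ssh to every host listed in dbs_group. A rough sketch of that idea (hypothetical Python, for illustration only - the real dcli also handles parallelism, key setup and output collation):

```python
def dcli_commands(group_file_lines, user, remote_cmd):
    """Build the per-host ssh invocations that a dcli-style fan-out would run."""
    hosts = [h.strip() for h in group_file_lines if h.strip()]
    return [f"ssh {user}@{host} '{remote_cmd}'" for host in hosts]

# db01/db02 are placeholder host names standing in for the dbs_group contents.
cmds = dcli_commands(["db01", "db02"], "root", "df | grep u01")
print(cmds[0])  # ssh root@db01 'df | grep u01'
```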

– Bobby


Categories: DBA Blogs

Working with multiple Trees on the same APEX Page (4.2/5.0)

Dimitri Gielis - Mon, 2015-03-16 15:36
Ever tried to put multiple Trees on the same page in APEX 4.2?

Here's my example:

On the left tree, "Selected Node Page Item" is set to hidden item P1_TREE1; on the right tree, the value that sets the tree is P1_TREE2. At least it should work that way, but by default, if your values are not unique - in this case empno is the value behind both trees - it doesn't work...
Your first tree will be highlighted twice - once for ALLEN and once for SCOTT - instead of what it should be: ALLEN selected on the left and SCOTT selected on the right.

To fix this issue in APEX 4.2, you need to adapt your select statement so you have unique values for both trees. I typically put a meaningful character in front of the value; for example you can concatenate 'N' for new values (new tree) and 'O' for old values (old tree).
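The prefixing trick can be sketched outside APEX (plain Python, illustrative): both trees share empno values, and a one-character prefix per tree makes every node value globally unique.

```python
left_tree = [7499, 7788]   # empno values behind tree 1
right_tree = [7788, 7902]  # tree 2 reuses 7788, so selection would collide

# Prefix per tree ('O' for the old tree, 'N' for the new one), as in the
# adapted select statement:
left_vals = ["O" + str(e) for e in left_tree]
right_vals = ["N" + str(e) for e in right_tree]

# Values are now globally unique, so highlighting a node in one tree can
# never match a node in the other.
assert set(left_vals).isdisjoint(right_vals)
print(left_vals, right_vals)  # ['O7499', 'O7788'] ['N7788', 'N7902']
```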

So that fixes the "Selected Node Page Item" issue with non-unique values.
Behind the tree implementation in APEX 4.2, jsTree is used. APEX is not using the most recent version, and the way the JavaScript is coded doesn't really work that nicely. If you really like jsTree and, for example, want to search in it, or you have a very large dataset, check out Tom Petrus' tree plugin and explanation.

So what about APEX 5.0? The tree implementation in APEX 5.0 has the same functionality, but is a lot better. Behind the scenes, jsTree is not used anymore; instead, the same tree component used in the Page Designer is used. The issue with unique values is off the table, so no worries anymore in APEX 5.0.

Categories: Development

REST Service Access with A-Team Mobile Persistence Accelerator

Andrejus Baranovski - Mon, 2015-03-16 14:40
A-Team Mobile Persistence Accelerator (AMPA) works on top of Oracle Mobile Application Framework (MAF) and provides tools to simplify consumption of REST services. It enables transparent persistent layer usage on the device with the option of synchronising offline data with the server side. In this post, I will be testing AMPA in the context of the use case implemented previously - MAF 2.1 Alta Mobile UI and Oracle Mobile Suite. Working MAF application, where REST service calls are coded by hand, will be changed to fetch REST service data through AMPA - reusing existing Data Control. This is unusual approach, most of the tutorials related to AMPA describe end-to-end scenario, where Data Control is generated on top of AMPA service class and UI is generated with AMPA wizard. Mobile UI in most of the cases will be custom made and we rarely generating it. This is why it is important to explain, how to use AMPA with custom UI and Data Control.

You could read more about AMPA and find installation instructions in Steven Davelaar post - Getting Started with the A-Team Mobile Persistence Accelerator.

I will be using my sample application from the MAF 2.1 Alta UI post mentioned above. You can download application updated with AMPA support here (this includes ADF BC SOAP service, OSB transformation to REST and MAF application with AMPA) -

As soon as the REST connection is defined, the AMPA wizard offers to define REST resources. In my case I have two different resources, though both work with the same Employees data schema. Both require a parameter:

The wizard will ask you to provide sample parameters to retrieve a response from the REST resources. This is how AMPA retrieves the metadata required to generate the service representation in MAF:

Data Objects will be constructed automatically for each resource path. In my case, both resources are based on the same Employees schema. If this is the case for you, make sure to change the names in the wizard so they are unique; otherwise AMPA fails to generate proper SQL scripts and service classes.

In this example I don't want to persist any data on the device. My use case requires retrieving the latest data from the server, so there is no need to keep a copy on the device - this is why the Persistent checkbox is not set:

Next you will get a wizard similar to the one in ADF BC, where all attributes for the service are listed along with their types, etc.:

I don't have any create/update/delete operations in this example, therefore only the mapping for the find operation is present:

Both REST resources were defined with attributes. We need to set their values later; for this purpose we can define an ELExpression value provider for the attribute and type the actual expression into the Literal Value field. You can use any scope; I'm using applicationScope in this example. The variable defined by the EL expression must be initialised before executing the REST operation - this will be done later, in the code:

Six easy steps and the AMPA persistence layer for MAF is generated. Let's take a quick look at what was actually generated. There is a SQLite script with the on-device DB structure - two different tables will be created, one per REST service defined in the beginning:

In this example, I'm not really using the on-device DB, so the tables are not mandatory. A persistence mapping file is generated; this is where the on-device persistence structure and the REST service mapping are defined. This mapping is used as a metadata resource describing how data retrieved from the REST service should be translated into the on-device persistence layer. I have set the option not to persist data, so it will only be retrieved:

By default, AMPA does a remote read in the background - this means it reads data from REST, populates it into the local SQLite DB and doesn't wait until the operation is completed. This is good for performance in some use cases, but it is not suitable when we want to see the data immediately in the same request (especially when we are integrating AMPA into our own custom service class). Remote read in background can be turned off through the persistence mapping:

A service class is generated; it extends from the AMPA service. We could generate a Data Control on top of this class and use it from the MAF UI. However, I would like to integrate it into the existing service class and consume it from the existing Data Control:

You can generate the Data Control through the JDEV menu, as displayed below.

In my case, I already have a Data Control - generated on top of the ServiceHelper class - and I will reuse it. Here is the service class, ServiceHelper. In order to use the AMPA-generated service class and POJO types from the existing service class, I have changed the types of the service class variables. The names and POJO structure remain the same, which means the Data Control will continue to work:

The method responsible for searching employees by name is changed to invoke AMPA functionality. Instead of calling the REST service and transforming the response, I'm initialising the EmployeeService generated by AMPA and calling the findAllEmployeesRemote method. The remote method works with the REST service, which is what we need - always fetch the latest available data from the service:

Notice how the REST service variable is initialized - through an EL expression in application scope. This is the same expression as we defined in the resource mapper.

Here you can see how it looks in the UI. The list of employees is filtered and displayed on the mobile device screen. Data from the REST service is retrieved through AMPA; the UI remains exactly the same as it was in the previous version of the application:

One very important thing to mention - the MAF Data Control is able to cache data by itself. Once you fetch data, and if you don't require offline support, the MAF Data Control can keep the fetched rows in memory without requesting them from the service again.

For example, we fetch a set of rows with David Austin in the list:

There are two different MAF fragments (List and Edit) attached to the same Data Control. When we navigate to Edit and back to List, no REST call is executed - MAF uses the previously fetched data from the Data Control (there is no need to use AMPA to handle such use cases):

Detail data for David Austin is retrieved from the MAF Data Control:

You need to ensure that the getter in the service class implementing the Data Control does not trigger a call to the REST service. The REST service call must be invoked from a Method Call before the MAF fragment is loaded - this allows the data to be prepared in advance (see the Task Flow diagram above):

Rutgers and ProctorTrack Fiasco: Impact of listening to regulations but not to students

Michael Feldstein - Mon, 2015-03-16 13:07

By Phil HillMore Posts (302)

If you want to observe the unfolding impact of an institution ignoring the effect of policy decisions on students, watch the situation at Rutgers University. If you want to see the power of a single student saying "enough is enough", go thank Betsy Chao and sign her petition. Currently, students are protesting Rutgers' use of ProctorTrack software in online courses - software that costs students $32 in additional fees, accesses their personal webcams, and automatically tracks face and knuckle video as well as browser activity. Students are outraged at the lack of concern over student privacy and the additional fees.

Prior to 2015, Rutgers already provided services for online courses to comply with federal regulations to monitor student identity. The rationale cited [emphasis added]:

The 2008 Higher Education Opportunity Act (HEOA) requires institutions with distance education programs to have security mechanisms in place that ensure that the student enrolled in a particular course is in fact the same individual who also participates in course activities, is graded for the course, and receives the academic credit. According to the Department of Education, accrediting agencies must require distance education providers to authenticate students’ identities through secure (Learning Management System) log-ins and passwords, proctored exams, as well as “new identification technologies and practices as they become widely accepted.”

This academic term, Rutgers added a new option – ProctorTrack:

Proctortrack is cost-effective and scalable for any institution size. Through proprietary facial recognition algorithms, the platform automates proctoring by monitoring student behavior and action for test policy compliance. Proctortrack can detect when students leave their space, search online for additional resources, look at hard notes, consult with someone, or are replaced during a test.

This occurred at the same time as the parent company Verificient received a patent for their approach, in January 2015.

A missing piece not covered in the media thus far is that Rutgers leaves the choice of student identity verification approach up to the individual faculty or academic program [emphasis added].

In face-to-face courses, all students’ identities are confirmed by photo ID prior to sitting for each exam and their activities are monitored throughout the exam period. To meet accreditation requirements for online courses, this process must also take place. Rutgers makes available electronic proctoring services for online students across the nation and can assist with on-site proctoring solutions. Student privacy during a proctored exam at a distance is maintained through direct communication and the use of a secure testing service. Students must be informed on the first day of class of any additional costs they may incur for exam proctoring and student authentication solutions.

The method of student authentication used in a course is the choice of the individual instructor and the academic unit offering the course. In addition to technology solutions such as Examity and ProctorTrack, student authentication can also be achieved through traditional on-site exam proctoring solutions. If you have any questions, talk to your course instructor.

As the use of ProctorTrack rolled out this term, at least one student – senior Betsy Chao – was disturbed, and on February 5th she created a petition on

However, I recently received emails from both online courses, notifying me of a required “Proctortrack Onboarding” assessment to set up Proctortrack software. Upon reading the instructions, I was bewildered to discover that you had to pay an additional $32 for the software on top of the $100 convenience fee already required of online courses. And I’m told it’s $32 per online class. $32 isn’t exactly a large sum, but it’s certainly not pocket change to me. Especially if I’m taking more than one online class. I’m sure there are many other college students who echo this sentiment. Not only that, but nowhere in either of the syllabi was there any inkling of the use of Proctortrack or the $32 charge. [snip]

Not only that, but on an even more serious note, I certainly thought that the delicate issue of privacy would be more gracefully handled, especially within a school where the use of webcams was directly involved in a student’s death. As a result, I thought Rutgers would be highly sensitive to the issue of privacy.

If accurate, this clearly violates the notification policy of Rutgers highlighted above. Betsy goes on to describe the alarming implications relating to student privacy.

On February 7th, New Brunswick Today picked up on the story.

Seven years ago, Congress passed the Higher Education Opportunity Act of 2008, authorizing the U.S Department of Education to outline numerous recommendations on how institutions should administer online classes.

The law recommended that a systemic approach be developed to ensure that the student taking exams and submitting projects is the same as the student who receives the final grade, and that institutions of higher education employ “secure logins and passwords, or proctored exams to verify a student’s identity.”

Other recommendations include the use of an identity verification process, and the monitoring by institutions of the evolution of identity verification technology.

Under these recommendations by the U.S Department of Education, Rutgers would technically be within its right to implement the use of ProctorTrack, or an alternative form of identity verification technology.

However, the recommendations are by no means requirements, and an institution can decide whether or not to take action.

The student newspaper at Rutgers, The Daily Targum, ran stories on February 9th and February 12th, both highly critical of the new software usage. All of this attention thanks to one student who refused to quietly comply.

The real problem in my opinion can be found in this statement from the New Brunswick Today article.

“The university has put significant effort into protecting the privacy of online students,” said the Rutgers spokesperson. “The 2008 Act requires that verification methods not interfere with student privacy and Rutgers takes this issue very seriously.”

The Rutgers Center for Online and Hybrid Learning and Instructional Technologies (COHLIT) would oversee the implementation and compliance with the usage of ProctorTrack, according to Rutgers spokesperson E.J. Miranda, who insisted it is not mandatory.

“ProctorTrack is one method, but COHLIT offers other options to students, faculty and departments for compliance with the federal requirements, such as Examity and ExamGuard,” said Miranda.

Rutgers has also put up a FAQ page on the subject.

The problem is that Rutgers is paying attention to federal regulations and assuming their solutions are just fine, yet:

  • Rutgers staff clearly spent little or no time asking students for their input on such an important and highly charged subject;
  • Rutgers policy leaves the choice purely up to faculty or academic programs, meaning that there was no coordinated decision-making and communication to students;
  • Now that students are complaining, the Rutgers spokesperson has been getting defensive, implying ‘there’s nothing to see here’ and not taking the student concerns seriously;
  • At no point that I can find has Rutgers acknowledged the problem of a lack of notification and new charges for students, nor have they acknowledged that students are saying that this solution goes too far.

That is why this is a fiasco. Student privacy is a big issue, and students should have some input into the policies shaped by institutions. The February 12th student paper put it quite well in conclusion.

Granted, I understand the University’s concern — if Rutgers is implementing online courses, there need to be accountability measures that prevent students from cheating. However, monitoring and recording our computer activity during online courses is not the solution, and failing to properly inform students of ProctorTrack’s payment fee is only a further blight on a rather terrible product. If Rutgers wants to transition to online courses, then the University needs to hold some inkling of respect for student privacy. Otherwise, undergraduates have absolutely no incentive to sign up for online classes.

If the Rutgers administration wants to defuse this situation, they will need to find a way to talk and listen to students on the subject. Pure and simple.

H/T: Thanks to Audrey Watters and to Jonathan Rees for highlighting this situation.

Update: Bumping comment from Russ Poulin into post itself [emphasis added]:

The last paragraph in the federal regulation regarding academic integrity (602.17) reads:

“(2) Makes clear in writing that institutions must use processes that protect student privacy and notify students of any projected additional student charges associated with the verification of student identity at the time of registration or enrollment.”

The privacy issue is always a tricky one when needing to meet the other requirements of this section. But, it does sound like students were not notified of the additional charges at the time of registration.

The post Rutgers and ProctorTrack Fiasco: Impact of listening to regulations but not to students appeared first on e-Literate.

MySQL Sounds Like Fun

Pythian Group - Mon, 2015-03-16 07:48

I love finding out new things about MySQL. Last week, I stumbled on a query that had the phrase “SOUNDS LIKE” in it. Sounds made-up, right? Turns out MySQL is using a known “soundex” algorithm common to most databases, and popular in use cases such as genealogy.

The basic idea is that words are encoded according to their consonants. Consonants that sound similar (like M and N) are given the same code. Here’s a simple example:

(“soundex” and “sounds like” are different ways of doing the same thing in these queries)

MariaDB> select soundex("boom");
| soundex("boom") |
| B500            |

MariaDB> select soundex("bam");
| soundex("bam") |
| B500           |

MariaDB> select soundex("bin");
| soundex("bin") |
| B500           |

This simple example isn’t terribly useful, but if you were trying to find similar, but differently spelled, names across continents, it could be helpful:

MariaDB> select soundex("William");
| soundex("William") |
| W450               |

MariaDB> select soundex("Walaam");
| soundex("Walaam") |
| W450              |

MariaDB> select soundex("Willem");
| soundex("Willem") |
| W450              |

MariaDB> select soundex("Williama");
| soundex("Williama") |
| W450                |

And you could probably agree these variations match as well:

MariaDB> select soundex("Guillaume");
| soundex("Guillaume") |
| G450                 |

MariaDB> select soundex("Uilleam");
| soundex("Uilleam") |
| U450               |

MariaDB> select soundex("Melhem");
| soundex("Melhem") |
| M450              |

MariaDB> select soundex("Uilliam");
| soundex("Uilliam") |
| U450               |

Well, that’s pretty neat. Of course, I want to try the silliest word I can think of:

MariaDB> select soundex("supercalifragilisticexpealidocious");
| soundex("supercalifragilisticexpealidocious") |
| S162416242321432                              |

So the algorithm doesn’t stop at 3 digits; good to know.

What does the algorithm do? Luckily MySQL is open source, and so we can look in the source code:

Below is the raw mapping, which is then used by a function that loops through the characters in the word.

/*    ABCDEFGHIJKLMNOPQRSTUVWXYZ */
const char *soundex_map= "01230120022455012623010202";

Note that even though it's called “sounds like”, it is really just a character mapping based on which characters the developers' ears agreed sound similar. That's of course an oversimplification, and I see in the code comments the following:

/*
 * The basic Algorithm source is taken from EDN Nov.
 * 14, 1985 pg. 36.
 */

But despite hitting up several librarians, I can’t seem to get a copy of this. Someone out there has a copy sitting around, right?

As a side note, this is obviously specific to the English language. I found references to German and other languages having soundex mappings, and would be curious to see those and hear of any language-specific ways to do this.
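To make the mapping concrete, here is a rough Python sketch of the loop described above. It is written to match the behaviour of the examples in this post (MySQL's real implementation is the C code shown earlier, and this sketch is my reading of it, not a copy):

```python
# The mapping string assigns a digit to each letter A..Z; '0' marks letters
# that are skipped (vowels and similar), and similar-sounding consonants
# share a digit.
SOUNDEX_MAP = "01230120022455012623010202"  # indexed by A..Z

def soundex(word, min_len=4):
    letters = [c for c in word.upper() if "A" <= c <= "Z"]
    if not letters:
        return ""
    code = letters[0]                       # keep the first letter as-is
    prev = SOUNDEX_MAP[ord(letters[0]) - ord("A")]
    for c in letters[1:]:
        digit = SOUNDEX_MAP[ord(c) - ord("A")]
        if digit != "0":
            if digit != prev:               # collapse runs of the same code
                code += digit
            prev = digit
    return code.ljust(min_len, "0")         # pad short codes with zeros

print(soundex("boom"), soundex("William"), soundex("Guillaume"))
# -> B500 W450 G450
```

Note that skipped letters ('0') do not reset the previous code, which is why the long “supercalifragilisticexpealidocious” example above drops the repeated digit around the vowels.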

Curiosity aside, here’s a real use.

I pulled down some government climate data. Let’s say the location field has some of my favorite misspellings of “Durham” in it:

MariaDB [weather]> select distinct(two), count(two) from weather.temps group by two;
| two                                        | count(two) |
| NULL                                       |          0 |
| DRM                                        |         51 |
| DURHAM                                     |    1101887 |
| DURM                                       |         71 |
| NCSU                                       |    1000000 |
| RDU AIRPORT                                |    1000000 |

A “LIKE” clause won’t work terribly well here.

I confirmed the misspellings would match as I expected:

MariaDB [weather]> select soundex("Durham"), soundex("Durm"), soundex("DRM");
| soundex("Durham") | soundex("Durm") | soundex("DRM") |
| D650              | D650            | D650           |

So instead of manually creating a query like:

MariaDB [weather]> select count(two) from weather.temps where two='DRM' or two='DURHAM' or two='DURM';
| count(two) |
|    1102009 |

I can simply do this:

MariaDB [weather]> select count(two) from weather.temps where two sounds like 'Durham';
| count(two) |
|    1102009 |

There are more than several ways to do string comparisons, but I enjoyed finding this one.

(Bonus points will be granted to the first person who comments that RDU is also Durham and submits a unique query to include it in the count.)

Categories: DBA Blogs

Oracle Platinum Sponsor of AIIM 2015

WebCenter Team - Mon, 2015-03-16 07:47

Three major disruptive forces are accelerating the pace of that change and driving organizations into information chaos: consumerization, cloud and mobile, and the internet of things. As a result, there are a number of new business problems that organizations both large and small are facing. Organizations of all sizes are telling us, “Our processes are broken, we are buried in information and it is killing our ability to satisfy our customers.” Chaotic and disruptive times call for new approaches, new conversations, and a network of connections to help you navigate these challenging times. That's what AIIM15 is all about.

Oracle is proud to be a Platinum sponsor of the AIIM 2015 Conference, taking place March 18-20 in San Diego, CA. Oracle will have 2 speaking sessions, as well as a booth on the show floor.

  • Group Therapy Session – Use Cases: How EFSS Powers Digital Businesses Today | Thursday, March 19th, 2:20-3:05 PM
  • Industry Insights – Engage Panel | Thursday, March 19th, 4:25-5:05 PM

Booth Hours:

  • Wednesday, March 18 | 12:00 PM - 7:00 PM
  • Thursday, March 19 | 8:00 AM - 5:30 PM
  • Friday, March 20 | 8:00 AM - 10:00 AM

We hope to see you in San Diego this week!

Monitoring Cassandra with Grafana and Influx DB

Pythian Group - Mon, 2015-03-16 07:37


In this post I will explain how to set up Cassandra monitoring with influxDB and Grafana. This can also be used to connect to other monitoring systems (Graphite, Collectd, etc…) but since both influxDB and Grafana are hot topics at the moment I decided to follow the trend! I was asked why I was doing this when a tool like OpsCenter is available, but sometimes you want to have all your systems reporting to a single dashboard. And if your dashboard is Grafana and your Backend is influxDB then you will learn how to connect Cassandra to it!

– You are running a Linux system (This post is based on CentOS 7)
– You are using Cassandra 1.2+ (I’m using 2.1.3 in this case)

  • Cassandra Installation
  • Graphite Metrics Jar
  • influxDB –
  • Grafana –
  • Apache (Any webserver would do)
Installing and configuring influxDB

This one is dead easy: once you have the package, install it (rpm -i, dpkg -i) and start the service:

service influxdb start

Once the service is running, go to the configuration (/opt/influxdb/shared/config.toml) and edit the file so that under [input_plugins] it looks like this:

[input_plugins.graphite]
# Configure the graphite api
enabled = true
# address = "" # If not set, is actually set to bind-address.
port = 2003
database = "cassandra-metrics" # store graphite data in this database
udp_enabled = true

Save the file, reload the service:

service influxdb reload

Now go to your browser at localhost:8083, click Connect (no credentials should be needed), and after you log in, enter a database name (use cassandra-metrics) and click Create (this should be your only option). Now you can click the database and add a user to it (and make it an admin). Then create another database named "grafana", and create an admin for that database as well.
Now you are done with influxDB.
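Before wiring up Cassandra, it can be handy to sanity-check the graphite listener by hand. Anything that speaks the Graphite plaintext protocol can feed it; here is a minimal sketch (the "Node1" prefix and metric name are made up for illustration):

```python
import socket
import time

def graphite_line(prefix, metric, value, ts=None):
    # Graphite plaintext protocol: "<metric.path> <value> <unix-timestamp>\n"
    ts = int(time.time() if ts is None else ts)
    return "%s.%s %s %d\n" % (prefix, metric, value, ts)

line = graphite_line("Node1", "test.heartbeat", 1, 1000)
print(line.strip())  # -> Node1.test.heartbeat 1 1000

# To actually push it to the influxDB graphite listener, open a TCP
# connection to port 2003 (commented out so the sketch runs without a
# server):
# with socket.create_connection(("localhost", 2003)) as s:
#     s.sendall(line.encode())
```

If the listener is working, the metric should show up as a series in the cassandra-metrics database.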

Installing Grafana

Grafana is a bit more tricky, since you also need to configure a webserver. Let's assume Apache is installed, and the home directory for www is /var/www/html.

So get the Grafana package and extract it to /var/www/html, so the end result is something like /var/www/html/grafana.

Now do the following:

cd /var/www/html/grafana
cp config.sample.js config.js

Now let's configure the connection between influxDB and Grafana. Open the newly copied file config.js for editing and change it so it looks like this:

datasources: {
  influxdb: {
    type: 'influxdb',
    url: "http://localhost:8086/db/cassandra-metrics",
    username: 'admin',
    password: 'admin',
  },
  grafana: {
    type: 'influxdb',
    url: "http://localhost:8086/db/grafana",
    username: 'admin',
    password: 'admin',
    grafanaDB: true
  }
},

Now point your browser to localhost/grafana and you will see the Grafana default dashboard.

Preparing Cassandra

Now for the final piece of the puzzle. We follow more or less the Cassandra guide that exists here, but we need to make some changes to make it more valuable (and to allow multiple nodes to provide metrics).

So, first of all, put the metrics-graphite-2.2.0.jar in the /lib directory of all the Cassandra nodes.
Now create a yaml file similar to the Datastax example, let's call it influx-reporting.yaml, and store it in the /conf directory. Now edit the file so it looks like this:

graphite:
  -
    period: 60
    timeunit: 'SECONDS'
    prefix: 'Node1'
    hosts:
      - host: 'localhost'
        port: 2003
    predicate:
      color: "white"
      useQualifiedName: true
      patterns:
        - ".*"

What did we change here? First, we added a prefix field; this allows us to identify the node that is providing the metrics. It must be different for every node, otherwise the metrics will overwrite/mix with each other. Then we decided to allow all patterns (".*"), which means that Cassandra will push all of its metrics into influxDB. You can decide whether or not this is too much and enable only the metrics you want (find out more about it here).
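If pushing everything turns out to be too noisy, the patterns list takes regular expressions matched against the metric names, so a narrower whitelist could look something like the fragment below (the two patterns are illustrative; check which metric names your Cassandra version actually exposes):

```yaml
    predicate:
      color: "white"
      useQualifiedName: true
      patterns:
        # only client request and cache metrics, for example
        - "^org.apache.cassandra.metrics.ClientRequest.+"
        - "^org.apache.cassandra.metrics.Cache.+"
```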

Now edit cassandra-env.sh so that it will read the yaml file and provide the metrics. Add the following line to the end of the file:

JVM_OPTS="$JVM_OPTS -Dcassandra.metricsReporterConfigFile=influx-reporting.yaml"

If all is done correctly, you can restart the Cassandra node (or nodes, but don't do them all at the same time; waiting 2 minutes between each is recommended), and if the log file has the following messages:

INFO [main] YYYY-MM-DD HH:MM:SS,SSS - Trying to load metrics-reporter-config from file: inf
INFO [main] YYYY-MM-DD HH:MM:SS,SSS - Enabling GraphiteReporter to localhost:2003

All is good!


Grafana is not that difficult to use, so once you start exploring a bit (and reading the documentation) you will find yourself building nice graphs. This could be a long post just about graphing, so instead I'm going to post some images of the graphs I'm getting out of Grafana, so that you can see how powerful it can be and how it can help you keep your Cassandra healthy.

Categories: DBA Blogs

Cassandra 101 : Understanding What Cassandra Is

Pythian Group - Mon, 2015-03-16 07:35

As some of you may know, in my current role at Pythian I am tackling OSDB, and currently Cassandra is on my radar. One of the things I have been trying to do is learn what Cassandra is, so in this series I'm going to share a bit of what I have been able to learn.

According to the whitepaper “Solving Big Data Challenges for Enterprise Application Performance Management”, Cassandra is a “distributed key value store developed at Facebook. It was designed to handle very large amounts of data spread out across many commodity servers while providing a highly available service without single point of failure allowing replication even across multiple data centers as well as for choosing between synchronous or asynchronous replication for each update.”

Cassandra, in layman’s terms, is a NoSQL database developed in Java. One of Cassandra’s many benefits is that it’s an open source DB with deep developer support. It is also a fully distributed DB, meaning that there is no master DB, unlike Oracle or MySQL, so the database has no single point of failure. It also touts being linearly scalable: if you have 2 nodes and a throughput of 100,000 transactions per second, adding 2 more nodes would give you 200,000 transactions per second, and so forth.


Cassandra is based on 2 core technologies, Google’s Bigtable and Amazon’s Dynamo. Facebook used it to power their Inbox Search feature and released it as an open source project on Google Code; it was later incubated at Apache, and is nowadays a Top-Level Project. Currently there exist 2 versions of Cassandra:

Since Cassandra is a distributed system, it follows the CAP Theorem (awesomely explained here), which states that in a distributed system you can only have two of the following three guarantees across a write/read pair:

  • Consistency.- A read is guaranteed to return the most recent write for a given client.
  • Availability.- A non-failing node will return a reasonable response within a reasonable amount of time (no error or timeout).
  • Partition Tolerance.- The system will continue to function when network partitions occur.

Cassandra is also a BASE (Basically Available, Soft state, Eventually consistent) type system, not an ACID (Atomicity, Consistency, Isolation, Durability) type system, meaning that the system is optimistic and accepts that database consistency will be in a state of flux, unlike ACID, which is pessimistic and forces consistency at the end of every transaction.

Cassandra stores data according to the column family data model where:

  • Keyspace is the container for your application data, similar to a schema in a relational database. Keyspaces are used to group column families together. Typically, a cluster has one keyspace per application. The keyspace also defines the replication strategy, and data objects belong to a single keyspace
  • Column Family is a set of one, two or more individual rows with a similar structure
  • Row is a collection of sorted columns; it is the smallest unit that stores related data in Cassandra, and any component of a Row can store data or metadata
    • Row Key uniquely identifies a row in a column family
    • Column key uniquely identifies a column value in a row
    • Column value stores one value or a collection of values
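A loose way to picture this nesting is as a map of maps. The sketch below is purely illustrative (real Cassandra storage keeps columns sorted and attaches metadata such as timestamps, and the names are made up), but it shows how keyspace, column family, row key and column key relate:

```python
# Illustrative nesting only: keyspace -> column family -> row key ->
# (column key -> column value).
keyspace = {
    "employees": {                  # column family
        "row-1001": {               # row key
            "first_name": "Ada",    # column key -> column value
            "last_name": "Lovelace",
        },
        "row-1002": {
            "first_name": "Alan",   # rows are sparse: columns may differ
        },
    },
}

print(keyspace["employees"]["row-1001"]["first_name"])  # -> Ada
```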

We also need to understand the basic architecture of Cassandra, which has the following key structures:

  • Node is one Cassandra instance and the basic infrastructure component in Cassandra. Cassandra assigns data to nodes in the cluster; each node is assigned a part of the database based on the Row Key. A node usually corresponds to a host, but not necessarily, especially in Dev or Test environments
  • Rack is a logical set of nodes
  • Data Center is a logical set of Racks; a data center can be a physical data center or a virtual one. Replication is set per data center
  • Cluster contains one or more data centers and is the full set of nodes which map to a single complete token ring


Hopefully this will help you understand the basic Cassandra concepts. In the next post in the series, I will go over architecture concepts: what a Seed node is, the purpose of the Snitch and topologies, the Coordinator node, replication factors, etc.

Note 1:

André Araújo, a great friend of mine and a former Pythianite, wrote about his first experience with Cassandra: My First Experience with Cassandra – Part 1

Note 2:

This post was originally published in my personal blog:

Categories: DBA Blogs

Log Buffer #414, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-03-16 07:22

This Log Buffer Edition picks sea shells from blogs across the seas of Oracle, SQL Server and MySQL, and arranges them for you. Enjoy.

Oracle:

12c Parallel Execution New Features: Concurrent UNION ALL

Visualizing Statspack Performance Data in SQL Developer

Optimizer statistics – Gathering Statistics and Histograms

Big Data Made Actionable with Omar TawaKol at SXSW

Mobile backend with REST services and JSON payload based on SOA Suite 12c

SQL Server:

Setting Different Colors for Connections in SSMS

Defusing Database Time Bombs: Avoiding the Need to Refactor Databases

This article shows a step by step tutorial to create a virtual machine in 15 min on Windows Azure.

What SQL Statements Are Currently Using The Transaction Logs?

SQL Server Random Sorted Result Set

MySQL:

Oracle Linux 7.1 and MySQL 5.6

MySQL Workbench 6.3.2 RC has been released

MariaDB CONNECT storage engine now offers access to JSON

Avoiding MySQL ERROR 1052 by prefixing column names in multi table queries

MySQL 5.7.6 DMR: Packages, Repos, Docker Images

Categories: DBA Blogs

Oracle Forms Migration Webinar

Gerger Consulting - Mon, 2015-03-16 07:19

Below, you see a short demo of an Oracle Forms application rebuilt with Formspider by five Forms developers from TEAM GmbH.

Join our joint webinar with TEAM GmbH and find out how they successfully rebuilt their product ProStore with Formspider.

The webinar will be presented by Frank Zscherlich (division manager logistics, TEAM GmbH), Michael Wibberg (product manager, lead of product development, TEAM GmbH) and Yalim K. Gerger (Founder of Formspider).

In the webinar the following topics will be discussed:

- What other products did TEAM look at?
- Why did TEAM choose Formspider?
- What are the benefits of using Formspider?
- What is it like to work with the company behind Formspider?
- What was TEAM's approach to Forms Modernization?
- A demo of the application
- A demo of the development experience with Formspider

Sign up now.
Categories: Development