
Feed aggregator

Cassandra 101 : Understanding What Cassandra Is

Pythian Group - Mon, 2015-03-16 07:35

As some of you may know, in my current role at Pythian I am tackling OSDB, and currently Cassandra is on my radar. One of the things I have been trying to do is learn what Cassandra is, and in this series I’m going to share a bit of what I have been able to learn.

According to the whitepaper “Solving Big Data Challenges for Enterprise Application Performance Management”, Cassandra is a “distributed key value store developed at Facebook. It was designed to handle very large amounts of data spread out across many commodity servers while providing a highly available service without single point of failure allowing replication even across multiple data centers as well as for choosing between synchronous or asynchronous replication for each update.”

Cassandra, in layman’s terms, is a NoSQL database developed in Java. One of Cassandra’s many benefits is that it’s an open source DB with deep developer support. It is also a fully distributed DB, meaning that there is no master DB, unlike Oracle or MySQL, so the database has no single point of failure. It also touts being linearly scalable, meaning that if you have 2 nodes and a throughput of 100,000 transactions per second, adding 2 more nodes would take you to 200,000 transactions per second, and so forth.


Cassandra is based on 2 core technologies: Google’s Bigtable and Amazon’s Dynamo. Facebook originally developed it to power its Inbox Search feature, released it as an open source project on Google Code, and it was later incubated at Apache, where it is nowadays a Top-Level Project. Currently there exist 2 versions of Cassandra:

Since Cassandra is a distributed system, it follows the CAP Theorem, which is awesomely explained here, and it states that, in a distributed system, you can only have two out of the following three guarantees across a write/read pair:

  • Consistency: A read is guaranteed to return the most recent write for a given client.
  • Availability: A non-failing node will return a reasonable response within a reasonable amount of time (no error or timeout).
  • Partition Tolerance: The system will continue to function when network partitions occur.

Also, Cassandra is a BASE (Basically Available, Soft state, Eventually consistent) type system, not an ACID (Atomicity, Consistency, Isolation, Durability) type system, meaning that the system is optimistic and accepts that database consistency will be in a state of flux, unlike ACID, which is pessimistic and forces consistency at the end of every transaction.

Cassandra stores data according to the column family data model where:

  • Keyspace is the container for your application data, similar to a schema in a relational database. Keyspaces are used to group column families together; typically, a cluster has one keyspace per application. The keyspace also defines the replication strategy, and data objects belong to a single keyspace.
  • Column Family is a set of one or more individual rows with a similar structure.
  • Row is a collection of sorted columns; it is the smallest unit that stores related data in Cassandra, and any component of a row can store data or metadata:
    • Row Key uniquely identifies a row in a column family.
    • Column key uniquely identifies a column value in a row.
    • Column value stores one value or a collection of values.
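To make these terms concrete, here is a minimal CQL sketch (the keyspace, table, and column names are hypothetical): the keyspace is the container, the table corresponds to a column family, and the primary key column acts as the row key.

CREATE KEYSPACE school
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

CREATE TABLE school.students (
    student_id uuid PRIMARY KEY,  -- the row key
    name       text,              -- a column key with its column value
    result     text
);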

Also we need to understand the basic architecture of Cassandra, which has the following key structures:

  • Node is one Cassandra instance and is the basic infrastructure component in Cassandra. Cassandra assigns data to nodes in the cluster; each node is assigned a part of the database based on the row key. A node usually corresponds to a host, but not necessarily, especially in Dev or Test environments.
  • Rack is a logical set of nodes
  • Data Center is a logical set of racks; a data center can be a physical or a virtual data center. Replication is set by data center (see the sketch after this list).
  • Cluster contains one or more data centers and is the full set of nodes which map to a single complete token ring
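Since replication is defined per keyspace but set by data center, here is a hedged CQL sketch of what a per-data-center replication definition looks like (the keyspace and data center names are invented for illustration):

CREATE KEYSPACE app_data
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc_east': 3,  -- three replicas in the first data center
    'dc_west': 2   -- two replicas in the second
  };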

Conclusion

Hopefully this will help you understand the basic Cassandra concepts. In the next post in this series, I will go over architecture concepts: what a Seed node is, the purpose of the Snitch and topologies, the Coordinator node, replication factors, etc.

Note 1:

André Araújo, a great friend of mine and previous Pythianite, wrote about his first experience with Cassandra: My First Experience with Cassandra – Part 1

Note 2:

This post was originally published in my personal blog: rene-ace.com

Categories: DBA Blogs

Log Buffer #414, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-03-16 07:22

This Log Buffer Edition picks the sea shells from blogs across the seas of Oracle, SQL Server, and MySQL, and arranges them for you in this Edition. Enjoy.

Oracle:

12c Parallel Execution New Features: Concurrent UNION ALL

Visualizing Statspack Performance Data in SQL Developer

Optimizer statistics – Gathering Statistics and Histograms

Big Data Made Actionable with Omar TawaKol at SXSW

Mobile backend with REST services and JSON payload based on SOA Suite 12c

SQL Server:

Setting Different Colors for Connections in SSMS

Defusing Database Time Bombs: Avoiding the Need to Refactor Databases

This article shows a step by step tutorial to create a virtual machine in 15 min on Windows Azure.

What SQL Statements Are Currently Using The Transaction Logs?

SQL Server Random Sorted Result Set

MySQL:

Oracle Linux 7.1 and MySQL 5.6

MySQL Workbench 6.3.2 RC has been released

MariaDB CONNECT storage engine now offers access to JSON

Avoiding MySQL ERROR 1052 by prefixing column names in multi table queries

MySQL 5.7.6 DMR: Packages, Repos, Docker Images

Categories: DBA Blogs

Oracle Forms Migration Webinar

Gerger Consulting - Mon, 2015-03-16 07:19

Below, you see a short demo of an Oracle Forms application rebuilt with Formspider by five Forms developers from TEAM GmbH.



Join our joint webinar with TEAM GmbH and find out how they successfully rebuilt their product ProStore with Formspider.

The webinar will be presented by Frank Zscherlich (division manager logistics, TEAM GmbH), Michael Wibberg (product manager, lead of product development, TEAM GmbH) and Yalim K. Gerger (Founder of Formspider).

In the webinar the following topics will be discussed:

- What other products did TEAM look at?
- Why did TEAM choose Formspider?
- What are the benefits of using Formspider?
- What is it like to work with the company behind Formspider?
- What was TEAM's approach to Forms Modernization?
- A demo of the application
- A demo of the development experience with Formspider

Sign up now.
Categories: Development

Free Webinar: Getting Started with Oracle and .NET

Christian Shay - Mon, 2015-03-16 06:48

Please join me on Wednesday, March 18th at 12 noon Eastern time for a one-hour webinar going over the basics of how to get started developing C#/VB.NET apps for Oracle Database.

No signup needed. The Webex information is located here:

http://www.oracle.com/technetwork/database/application-development/appdev-webseries-2430444.html

How do you get started using Visual Studio to develop .NET web applications that leverage the power of Oracle Database 12c? This beginner-level webcast answers that question and introduces Oracle's offerings for .NET programmers, including Oracle Data Provider for .NET (ODP.NET), Oracle Developer Tools for Visual Studio, Oracle Providers for ASP.NET, and .NET stored procedures. Step-by step-demos show you how to get started using each of these free products.

See you on Wednesday!

(To subscribe to this blog, please click here)

Tablespace HWM

Jonathan Lewis - Mon, 2015-03-16 04:22

The following question appeared on the Oracle-L list-server recently:

In order to resize a datafile to release space at the end, we need to find whatever the last block_id that is at the start of that free contiguous space. Problem is that we have a very large database such that querying dba_extents to find the last block is probably not an option. The standard query(ies) that make use of dba_extents runs for hours at stretch and also  sometimes fails with a ‘snapshot too old’ (just gives up). Is there an alternative to using dba_extents?

I was surprised to hear that a suitable query against dba_extents could last for hours, although for locally managed tablespaces Oracle does have to read the segment header block for every single segment in the tablespace to get the segment extent map and that might make things a little slow. (A follow-up post explained that part of the problem was that the tablespaces were locally managed, so maybe it wasn’t just a case of an unlucky execution plan.)

If you look hard enough there’s probably an alternative strategy for dealing with any problem – and it might even be a good one. In the case of tablespace highwater marks, how about looking at dba_free_space instead of dba_extents? If there’s space that can be released from a file it starts at the block after the last used block, e.g.:


select 
        tablespace_name, file_id, block_id, blocks, block_id + blocks - 1 last_block  
from 
        user_free_space 
where 
        tablespace_name = 'TEST_8K_ASSM_AUTO' 
order by 
        file_id, block_id
;

TABLESPACE_NAME                   FILE_ID   BLOCK_ID     BLOCKS LAST_BLOCK
------------------------------ ---------- ---------- ---------- ----------
TEST_8K_ASSM_AUTO                       6        128        256        383
TEST_8K_ASSM_AUTO                       6       8576      12200      20775

2 rows selected.


alter database datafile '{file name}' resize {block size * 8,575};

Database altered.

If you do try this then one of two things happens – either you manage to resize the file to the current minimum it can be, or you fail with Oracle error ORA-03297: file contains used data beyond requested RESIZE value and the file can’t be resized until you move whichever objects sit above the highest chunk of free space, so you’re back to dba_extents to find out which segment is causing the problem.
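If you do end up back at dba_extents, a sketch of the query you’d need (reusing the file number and the last free block from the example above; adjust both for your own file) might look like this:

select
        owner, segment_name, segment_type, block_id, blocks
from
        dba_extents
where
        file_id = 6
and     block_id + blocks - 1 > 20775
order by
        block_id;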

If you want to try the optimistic approach but don’t want to run some SQL that might cause an Oracle error, you could always compare the details from dba_free_space with the details from dba_data_files to see if any space has been used AFTER the last free chunk – but there’s a little trap in making that check. You’ll notice that the last block of the free space is 20,775; but look what dba_data_files says about the last block in the data file(s):

SQL> select file_id, blocks, blocks - 1 last_block, user_blocks, file_name from dba_data_files order by file_id;

   FILE_ID     BLOCKS LAST_BLOCK USER_BLOCKS FILE_NAME
---------- ---------- ---------- ----------- ------------------------------------------------------------------
         1     129280     129279      129152 /u01/app/oracle/oradata/TEST/datafile/o1_mf_system_938s4mr3_.dbf
         2     267520     267519      267392 /u01/app/oracle/oradata/TEST/datafile/o1_mf_sysaux_938s551h_.dbf
         3     131200     131199      131072 /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
         4      25600      25599       25472 /u01/app/oracle/oradata/TEST/datafile/o1_mf_users_938s6bhn_.dbf
         5     131200     131199      131072 /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k_bcdy0y3h_.dbf
         6      20782      20781       20648 /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__bfqsmt60_.dbf

6 rows selected.

There are 20,782 blocks in the data file (though the numbering starts at zero, so the last block is 20,781) so there seem to be blocks in the data file that are beyond the last blocks of free space. You’ll have to trust me when I say that there’s no data beyond the free space, I’ve dropped all the (other) segments in this tablespace and purged the recyclebin: the last free space chunk stops short of the end of the file by 6 blocks. The presence of the user_blocks column in dba_data_files helps to explain what’s going on. You can consider a datafile to be made of three components: the space management part, the part that can hold legally sized extents, and a part at the end of file which is too small to hold the smallest extent that can legally be created in the tablespace.

The details depend on the version of Oracle, the definition of the tablespace, the initial size of the file, and how the file has grown. In recent versions of Oracle, and assuming you haven’t done something silly with a very small starting size and massive growth, the space management part is likely to be a chunk of 1MB at the start of the file (64KB for older versions). For a locally managed tablespace the chunk at the end of the file could be anything up to one block less than the defined size for “uniform” extent allocation, or one block short of 64KB for system allocated extents.

In my example I have blocks = 20,782, and user_blocks = 20,648: that’s because the tablespace was created in a recent version of Oracle with system allocated extents and 8KB blocks: 20,782 = 20,648 + 128 (space management header) + 6 (dead space at end of file); the value of user_blocks allows for 2,581 extents of 64KB, and the last six blocks of the file are (currently) unusable. (I have a more extreme example of wasted space in an example I published a couple of years ago.)
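Putting the two views together, here’s a sketch of the optimistic check described above (treat it as a starting point rather than a definitive script, since the dead-space arithmetic varies with version and tablespace definition):

select
        d.file_id,
        d.blocks                          total_blocks,
        d.user_blocks,
        max(f.block_id + f.blocks - 1)    last_free_block
from
        dba_data_files  d,
        dba_free_space  f
where
        f.file_id (+) = d.file_id
group by
        d.file_id, d.blocks, d.user_blocks
order by
        d.file_id;

In the example above the numbers tie up as 20,775 = 20,648 + 128 - 1, i.e. the last free chunk runs right to the end of the usable area, so the file can be resized down to the start of that chunk.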

Footnote:

When the question first came up my first thought was simply to dump the tablespace space management block but realised just a bit too late that dba_free_space was a much easier option. If anyone does care to pursue the bitmap dump you’ll have to work out all the details because there are variations on the theme that are probably only going to appear with very large datafiles or if you’ve converted from dictionary managed to locally managed. The method starts with the dbms_space_admin package which allows you to dump a tablespace bitmap into the session’s trace file:


execute dbms_space_admin.tablespace_dump_bitmaps('TEST_8K')

Header Control:
RelFno: 5, Unit: 128, Size: 294400, Flag: 1
AutoExtend: NO, Increment: 0, MaxSize: 0
Initial Area: 126, Tail: 294399, First: 8, Free: 2283
Deallocation scn: 148317558.2950
Header Opcode:
Save: No Pending Op
File Space Bitmap Block:
BitMap Control:
RelFno: 5, BeginBlock: 128, Flag: 0, First: 8, Free: 63472
FF00FF0000000000 0000000000000000 0000000000000000 0000000000000000
0000000000000000 0000000000000000 0000000000000000 0000000000000000

This tablespace was locally managed with a block size of 8KB and uniform extents of 1MB (which equates to 128 blocks), so we’re looking at a bitmap where one bit represents 128 blocks. Since the Oracle version is 11gR2, and the file doesn’t fall into the special “tiny” category, the header section is 1MB / 128 blocks; the bitmap starts in block 2 (the third block of the file), which is why the size of the “Initial Area” is 126 blocks rather than 128. The first free extent is number 8 (counting from zero) and there are 2,283 free extents in the file.

If I use my space-reporting script to report the details of the free and used extents in the tablespace I can start to align the bitmap with the extents and work out how to interpret the ones and zeros. This is what I’ve got at present:


FILE_ID    BLOCK_ID   END_BLOCK     BLOCKS OWNER      SEGMENT_NAME                 SEGMENT_TYPE
------- ----------- ----------- ---------- ---------- ---------------------------- ------------------
      5         128         255        128 TEST_USER  T1                           TABLE
                256         383        128 TEST_USER  T1                           TABLE
                384         511        128 TEST_USER  T1                           TABLE
                512         639        128 TEST_USER  T1                           TABLE
                640         767        128 TEST_USER  T1                           TABLE
                768         895        128 TEST_USER  T1                           TABLE
                896       1,023        128 TEST_USER  T1                           TABLE
              1,024       1,151        128 TEST_USER  T1                           TABLE
              1,152       2,175       1024 free       free
              2,176       2,303        128 TEST_USER  T3                           TABLE
              2,304       2,431        128 TEST_USER  T3                           TABLE
              2,432       2,559        128 TEST_USER  T3                           TABLE
              2,560       2,687        128 TEST_USER  T3                           TABLE
              2,688       2,815        128 TEST_USER  T3                           TABLE
              2,816       2,943        128 TEST_USER  T3                           TABLE
              2,944       3,071        128 TEST_USER  T3                           TABLE
              3,072       3,199        128 TEST_USER  T3                           TABLE
              3,200     294,399     291200 free       free

As you can see, the 8 x 1-bits (starting FF) align with the first 8 allocated extents of 128 blocks each, then the 8 x 0-bits with the 1,024 free blocks, followed by a further 8 x 1-bits and 8 x 128-block extents. Further investigation is left as an exercise for the interested reader.

 


Conditions Based On Inequalities Can’t Use Indexes – How To Resolve?

Oracle in Action - Mon, 2015-03-16 02:55


Conditions based on inequalities (!=, <>) cannot make use of index(es). I will illustrate this limitation and show you how to optimize SQL statements hitting it.

For the demonstration, I have a students table with a column named result that can contain the values ‘Pass’ (P), ‘Fail’ (F), and ‘To be evaluated’ (T). The column has a very non-uniform distribution, with most of the rows set to the value ‘P’. Here’s the example:

SQL>drop table students purge;
    create table students (id , result )
    as
    select rownum, decode (mod(rownum, 30), 0, 'F', 1, 'T',  'P')
    from  all_tables;

    create index students_idx on students (result);
    exec dbms_stats.gather_table_stats (USER, 'STUDENTS', cascade => TRUE);

     SELECT result , count(*)
     FROM students
     GROUP BY result;
RESULT       COUNT(*)
---------- ----------
P                 100
T                   4
F                   3

Let’s execute the query to select all students who have not passed (result = ‘T’ or ‘F’). Even though the query has a very strong selectivity and the result column is indexed, the query optimizer chooses a full table scan for reading 7 rows, because the predicate involves an inequality.

SQL>select * from students where result <> 'P';
    select * from table(dbms_xplan.display_cursor);

ID RESULT
---------- ----------
1 T
30 F

....

7 rows selected.

PLAN_TABLE_OUTPUT
---------------------------------------------------------
SQL_ID f2wkxqy3b6b5h, child number 0
-------------------------------------
select * from students where result <> 'P'

Plan hash value: 4078133427
---------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| |
|* 1 | TABLE ACCESS FULL| STUDENTS | 71 | 355 | 3 (0)| 00:00:01 |
---------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("RESULT"<>'P')

In a case like this, where the inequality condition has a strong selectivity, we can take advantage of an index using the following three techniques:

First, the inequality condition can be rewritten into an IN condition. This is an option only when the number of values to be selected is known and the number is limited. For example, if the query is modified as shown, an index range scan is employed.

SQL>select * from students where result in ('F', 'T');
select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
---------------------------------------------------------
SQL_ID 672mnj9pggkq7, child number 0
-------------------------------------
select * from students where result in ('F', 'T')

Plan hash value: 2871222462
---------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | INLIST ITERATOR | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| STUDENTS | 71 | 355 | 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | STUDENTS_IDX | 71 | | 1 (0)| 00:00:01 |
---------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
3 - access(("RESULT"='F' OR "RESULT"='T'))

Second, manually rewrite the query to make sure that both component queries can take advantage of an index range scan. This technique can be applied if the values are unknown or the number of values to be specified is too high. Hence, if the query is rewritten as shown, it will be able to take advantage of the OR-expansion query transformation:

SQL>select * from students where result < 'P'
    union all
    select * from students where result > 'P' ;
    select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
--------------------------------------------------------- 
SQL_ID gqrp063y9c5a5, child number 0
-------------------------------------
select * from students where result < 'P' union all select * from
students where result > 'P'

Plan hash value: 2171568329
--------------------------------------------------------- 
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------- 
| 0 | SELECT STATEMENT | | | | 4 (100)| |
| 1 | UNION-ALL | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| STUDENTS | 76 | 380 | 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | STUDENTS_IDX | 76 | | 1 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| STUDENTS | 36 | 180 | 2 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | STUDENTS_IDX | 36 | | 1 (0)| 00:00:01 |
--------------------------------------------------------- 

Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("RESULT"<'P')
5 - access("RESULT">'P')

The third technique simply forces an index full scan with, for example, the index hint. From a performance point of view it’s not optimal, as, for a query with a very strong selectivity, the full index has to be scanned.

SQL>SELECT /*+ index(students) */ * FROM students where result != 'P';
select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
--------------------------------------------------------- 
SQL_ID 2hyrf6n7kb8pr, child number 0
-------------------------------------
SELECT /*+ index(students) */ * FROM students where result != 'P'

Plan hash value: 635752001
---------------------------------------------------------  
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------- 
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| STUDENTS | 71 | 355 | 2 (0)| 00:00:01|
|* 2 | INDEX FULL SCAN | STUDENTS_IDX | 71 | | 1 (0)| 00:00:01 |
--------------------------------------------------------- 

Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("RESULT"<>'P')

Conclusion:

In cases where an inequality condition with a strong selectivity is not able to make use of an index, we can take advantage of an index using the following three techniques:

  • First, the inequality condition can be rewritten into an IN condition. This is an option only when the number of values to be selected is known and the number is limited.
  • Second, manually rewrite the query to make sure that both component queries can take advantage of an index range scan. This technique can be applied if the values are unknown or the number of values to be specified is too high.
  • Third, simply force an index full scan with, for example, the index hint. From a performance point of view it’s not optimal, as, for a query with a very strong selectivity, the full index has to be scanned.

References:
Troubleshooting Oracle Performance (second edition) by Christian Antognini

 




The post Conditions Based On Inequalities Can’t Use Indexes – How To Resolve? appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

12c: shutdown abort a PDB?

Yann Neuhaus - Sun, 2015-03-15 23:56

Can we shutdown abort a PDB? Let's try:

SQL> show con_id

CON_ID
------------------------------
3

SQL> shutdown abort;
Pluggable Database closed.


But is it really a shutdown abort?

BI for NoSQL — some very early comments

DBMS2 - Sun, 2015-03-15 17:51

Over the past couple years, there have been various quick comments and vague press releases about “BI for NoSQL”. I’ve had trouble, however, imagining what it could amount to that was particularly interesting, with my confusion boiling down to “Just what are you aggregating over what?” Recently I raised the subject with a few leading NoSQL companies. The result is that my confusion was expanded. :) Here’s the small amount that I have actually figured out.

As I noted in a recent post about data models, many databases — in particular SQL and NoSQL ones — can be viewed as collections of <name, value> pairs.

  • In a relational database, a record is a collection of <name, value> pairs with a particular and predictable — i.e. derived from the table definition — sequence of names. Further, a record usually has an identifying key (commonly one of the first values).
  • Something similar can be said about structured-document stores — i.e. JSON or XML — except that the sequence of names may not be consistent from one document to the next. Further, there’s commonly a hierarchical relationship among the names.
  • For these purposes, a “wide-column” NoSQL store like Cassandra or HBase can be viewed much as a structured-document store, albeit with different performance optimizations and characteristics and a different flavor of DML (Data Manipulation Language).

Consequently, a NoSQL database can often be viewed as a table or a collection of tables, except that:

  • The NoSQL database is likely to have more null values.
  • The NoSQL database, in a naive translation toward relational, may have repeated values. So a less naive translation might require extra tables.

That’s all straightforward to deal with if you’re willing to write scripts to extract the NoSQL data and transform or aggregate it as needed. But things get tricky when you try to insist on some kind of point-and-click. And by the way, that last comment pertains to BI and ETL (Extract/Transform/Load) alike. Indeed, multiple people I talked with on this subject conflated BI and ETL, and they were probably right to do so.

Another set of issues arise on the performance side. Many NoSQL systems have indexes, and thus some kind of filtering capability. Some — e.g. MongoDB — have aggregation frameworks as well. So if you’re getting at the data with some combination of a BI tool, ETL tool or ODBC/JDBC drivers — are you leveraging the capabilities in place? Or are you doing the simplest and slowest thing, which is to suck data out en masse and operate on it somewhere else? Getting good answers to those questions is a work-in-progress at best.

Having established that NoSQL data structures cause problems for BI, let’s turn that around. Is there any way that they actually help? I want to say “NoSQL data often comes in hierarchies, and hierarchies are good for roll-up/drill-down.” But the hierarchies that describe NoSQL data aren’t necessarily the same kinds of hierarchies that are useful for BI aggregation, and I’m indeed skeptical as to how often those two categories overlap.

Hierarchies aside, I do think there are use cases for fundamentally non-tabular BI. For example, consider the following scenario, typically implemented with the help of NoSQL today:

  • You have more data — presumably machine-generated — than you can afford to keep.
  • So you keep time-sliced aggregates.
  • You also keep selective details, namely ones that you identified when they streamed in as being interesting in some way.

Visualizing that properly would be very hard in a traditional tabularly-oriented BI tool. So it could end up with NoSQL-oriented BI tools running over NoSQL data stores. Event series BI done right also seems to be quite non-tabular. That said, I don’t know for sure about the actual data structures used under the best event series BI today.

And at that inconclusive point, I’ll stop for now. If you have something to add, please share it in the comments below, or hit me up as per my Contact link above.

Categories: Other

Index on trunc(date) - do you still need old index?

Yann Neuhaus - Sun, 2015-03-15 15:16

Sometimes we have to index on ( trunc(date) ) because a SQL statement uses a predicate on it instead of giving a range from midnight to midnight. When you do that you probably keep the index on the plain column as well, and that's two indexes to maintain for DML. Do we still need the old one?
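As a minimal sketch of the pattern being described (the table and column names here are hypothetical), the function-based index and the predicate it serves would look like this:

create index orders_trunc_idx on orders (trunc(order_date));

select count(*)
from   orders
where  trunc(order_date) = date '2015-03-15';

-- versus the midnight-to-midnight range that a plain index on
-- order_date could have served:
select count(*)
from   orders
where  order_date >= date '2015-03-15'
and    order_date <  date '2015-03-16';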

Slides and Follow-up From Faculty Development Workshop at Aurora University

Michael Feldstein - Fri, 2015-03-13 20:41

By Phil HillMore Posts (300)

Today I facilitated a faculty development workshop at Aurora University, sponsored by the Center for Excellence in Teaching and Learning and the IT Department. I always enjoy sessions like this, particularly with the ability to focus our discussions squarely on technology in support of teaching and learning. The session was titled “Emerging Trends in Educational Technology and Implications for Faculty”. Below are very rough notes, slides, and a follow-up.

Apparent Dilemma and Challenge

Building off of previous presentations at ITC Network, there is an apparent dilemma:

  • On one hand, little has changed: Despite all the hype and investment in ed tech, there is only one new fully-established LMS vendor in the past decade (Canvas), and the top uses of the LMS are for course management (rosters, content sharing, grades). Plus the MOOC movement fizzled out, at least for replacing higher ed programs or courses.
  • On the other hand, everything has changed: There are examples of redesigned courses such as Habitable Worlds at ASU that are showing dramatic results in the depth of learning by students.

The best lens to understand this dilemma is Everett Rogers’ Diffusions of Innovations and the technology adoption curve and categories. Geoffrey Moore extended this work to call out a chasm between Innovators / Early Adopters on the left side (wanting advanced tech, OK with partial solutions they cobble together, pushing the boundary) and Majority / Laggards on the right side (wanting full solution – just make it work, make it reliable, make it intuitive). Whereas Moore described Crossing the Chasm for technology companies (moving from one side to the other), in most cases in education we don’t have that choice. The challenge in education is Straddling the Chasm (a concept I’m developing with a couple of consulting clients as well as observations from e-Literate TV case studies):

[Image: Straddling the Chasm]

This view can help explain how advances in pedagogy and learning approaches generally fit on the left side and have not diffused into the mainstream, whereas advances in simple course management generally fit on the right side and have diffused, although we want more than that. You can also view the left side as faculty wanting to try new tools and the right side as faculty just wanting the basics to work.

The market trend away from the walled garden offers education the chance to straddle the chasm.

Implications for Faculty

1) The changes are not fully in place, and it’s going to be a bumpy ride. One example is difficulty in protecting privacy and allowing accessibility in tools not fully centralized. Plus, the LTI 2.0+ and Caliper interoperability standards & frameworks are still a work in progress.

2) While there are new possibilities to use commercial tools, there are new responsibilities, as the left side of the chasm and non-standard apps require faculty and local support (department, division) to pick up support challenges.

3) There is a challenge in balancing innovation with the student need for consistency across courses, mostly in terms of navigation and course administration.

4) While there are new opportunities for student-faculty and student-student engagement, there are new demands on faculty to change their role and to be available on the students’ schedule.

5) Sometimes, simple is best. It amazes me how often the simple act of moving lecture or content delivery online is trivialized. What is enabled here is the ability for students to work at their own pace and replay certain segments without shame or fear of holding up their peers (or even jumping ahead and accelerating).

Slides

[Embedded slides: Emerging Trends in Educational Technology and Implications for Faculty, from Phil Hill]

Follow-Up

One item discussed in the workshop was how to take advantage of this approach in Aurora’s LMS, Moodle. While Moodle has always supported the open approach and has supported LTI standards, I neglected to mention a missing element. Commercial apps such as Twitter, Google+, etc, do not natively follow LTI standards, which are education-specific. The EduAppCenter was created to help with this challenge by creating a library of apps and wrappers around apps that are LTI-compliant.

The post Slides and Follow-up From Faculty Development Workshop at Aurora University appeared first on e-Literate.

Loads of fun with DBA_HIST_OSSTAT

Bobby Durrett's DBA Blog - Fri, 2015-03-13 17:35

I saw a load of 44 on a node of our production Exadata and it worried me.  The AWR report looks like this:

Host CPU
            Load Average
 CPUs     Begin       End     %User   %System      %WIO     %Idle
----- --------- --------- --------- --------- --------- ---------
   16     10.66     44.73      68.3       4.3       0.0      26.8

So, why is the load average 44 and yet the CPU is 26% idle?

I started looking at ASH data and found samples with 128 processes active on the CPU:

SQL> select
  2  sample_time,count(*)
  3  from DBA_HIST_ACTIVE_SESS_HISTORY a
  4  where
  5  session_state='ON CPU' and
  6  instance_number=3 and
  7  sample_time
  8  between
  9  to_date('05-MAR-2015 01:00:00','DD-MON-YYYY HH24:MI:SS')
 10  and
 11  to_date('05-MAR-2015 02:00:00','DD-MON-YYYY HH24:MI:SS')
 12  group by sample_time
 13  order by sample_time;

SAMPLE_TIME                    COUNT(*)
---------------------------- ----------
05-MAR-15 01.35.31.451 AM           128

... lines removed for brevity

Then I dumped out the ASH data for one sample and found all the sessions on the CPU were running the same parallel query:

select /*+  parallel(t,128) parallel_index(t,128) dbms_stats ...

So, for some reason we are gathering stats on a table with a degree of 128 and that spikes the load. But, why does the CPU idle percentage sit at 26.8% when the load starts at 10.66 and ends at 44.73? Best I can tell, load in DBA_HIST_OSSTAT is a point measurement of load, not an average over a long period. The 11.2 manual describes load in v$osstat in this way:

Current number of processes that are either running or in the ready state, waiting to be selected by the operating-system scheduler to run. On many platforms, this statistic reflects the average load over the past minute.

So, load could spike at the end of an hour-long AWR report interval and still CPU could average 26% idle for the entire hour?  So it seems.
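As a quick way to see this for yourself, here’s a sketch (assuming the 11.2 AWR views) that lists the captured LOAD value per snapshot; each row is a point-in-time reading, not an interval average:

select
        s.snap_id,
        s.end_interval_time,
        o.value  load_value
from
        dba_hist_snapshot  s,
        dba_hist_osstat    o
where
        o.snap_id         = s.snap_id
and     o.dbid            = s.dbid
and     o.instance_number = s.instance_number
and     o.instance_number = 3
and     o.stat_name       = 'LOAD'
order by
        s.snap_id;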

– Bobby

Categories: DBA Blogs

Implementing the Tree Navigation Oracle Alta UI Design Pattern

Shay Shmeltzer - Fri, 2015-03-13 15:10

The Oracle Alta UI design patterns offer many new approaches for navigation in your application as you can see in the navigation patterns section. One of those is the Tree Navigation pattern - which is an updated approach to the way that many applications display menus today.

While the "old" way of building these menus was using the tree component, the new design uses an interface that works better on mobile devices and is easier on the eyes. It uses animation to do in-place replacement of one level in the menu with the next one. 

[Image: old tree-component menu vs. new Alta tree navigation]

You could also use this approach to represent other types of hierarchical/master-detail relationships. 

In the demo below I show you how to quickly implement such a navigation pattern with ADF Faces, using a combination of af:listView components along with the af:deck component.

There are a bunch of further things you might want to do in your actual application beyond what the demo does.

One is to show on the right side of the page the information on the object you select on the left side. Using a deck component there you can also switch that section to show either Dept or Emp data in the same area. You'll already have the actionListener in place to do the switch of display, and ADF already has the right record selected - so just dropping the same data control on the right as a form will be enough.

Another nice enhancement would be to condition the showing of the right caret on whether there are actually details. This should be easy to achieve with a calculated attribute using Groovy - as shown here.

In the demo I also show how to invoke the makeCurrent row selection functionality from a managed bean; this allows me to do two operations when a menu option is selected. The code I use, which is based on code I found on Ashish's blog, is:

public void deptSelect(SelectionEvent selectionEvent) {
    ELContext elcontext = FacesContext.getCurrentInstance().getELContext();
    MethodExpression methodExpression =
        FacesContext.getCurrentInstance().getApplication().getExpressionFactory()
            .createMethodExpression(elcontext,
                                    "#{bindings.DepartmentsView1.treeModel.makeCurrent}",
                                    Object.class,
                                    new Class[] { SelectionEvent.class });
    methodExpression.invoke(elcontext, new Object[] { selectionEvent });
    deck.setDisplayedChild("pgl2");
    AdfFacesContext.getCurrentInstance().addPartialTarget(deck);
}

I also use styleClass="AFAppNavbarButton" for the "back" button to make it look a bit better. 

The full source of the JSF page is:

<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE html>
<f:view xmlns:f="http://java.sun.com/jsf/core" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
    <af:document title="untitled3.jsf" id="d1">
        <af:messages id="m1"/>
        <af:form id="f1">
            <af:pageTemplate viewId="/oracle/templates/tabletFirstTemplate.jspx" id="pt1">
                <f:facet name="header"/>
                <f:facet name="status"/>
                <f:facet name="appNav"/>
                <f:facet name="globalLinks"/>
                <f:facet name="footer"/>
                <f:facet name="center"/>
                <f:facet name="start">
                    <af:deck id="d2" binding="#{mb3.deck}" displayedChild="pgl1">
                        <af:panelGroupLayout id="pgl1">
                            <af:listView value="#{bindings.DepartmentsView1.collectionModel}" var="item"
                                         emptyText="#{bindings.DepartmentsView1.viewable ? 'No data to display.' : 'Access Denied.'}"
                                         fetchSize="#{bindings.DepartmentsView1.rangeSize}" id="lv1" selection="single"
                                         selectionListener="#{mb3.deptSelect}">
                                <af:listItem id="li1">
                                    <af:panelGridLayout id="pgl3">
                                        <af:gridRow marginTop="5px" height="auto" marginBottom="5px" id="gr1">
                                            <af:gridCell marginStart="5px" width="80%" id="gc1">
                                                <af:outputFormatted value="#{item.bindings.DepartmentName.inputValue}"
                                                                    id="of1"/>
                                            </af:gridCell>
                                            <af:gridCell marginStart="5px" width="20%" marginEnd="5px" id="gc2">
                                                <af:image source="func_caretright_16_ena.png" id="i1"/>
                                            </af:gridCell>
                                        </af:gridRow>
                                    </af:panelGridLayout>
                                </af:listItem>
                            </af:listView>
                        </af:panelGroupLayout>
                        <af:panelGroupLayout id="pgl2">
                            <af:button text="#{bindings.DepartmentName.inputValue}" id="b1"
                                       actionListener="#{mb3.backToDept}" styleClass="AFAppNavbarButton"
                                       icon="/func_caretleft_16_ena.png"/>
                            <af:listView value="#{bindings.EmployeesView4.collectionModel}" var="item"
                                         emptyText="#{bindings.EmployeesView4.viewable ? 'No data to display.' : 'Access Denied.'}"
                                         fetchSize="#{bindings.EmployeesView4.rangeSize}" id="lv2">
                                <af:listItem id="li2">
                                    <af:panelGridLayout id="pgl4">
                                        <af:gridRow marginTop="5px" height="auto" marginBottom="5px" id="gr2">
                                            <af:gridCell marginStart="5px" width="80%" id="gc3">
                                                <af:outputFormatted value="#{item.bindings.LastName.inputValue}"
                                                                    id="of2"/>
                                            </af:gridCell>
                                            <af:gridCell marginStart="5px" width="20%" marginEnd="5px" id="gc4">
                                                <af:image source="func_caretright_16_ena.png" id="i2"/>
                                            </af:gridCell>
                                        </af:gridRow>
                                    </af:panelGridLayout>
                                </af:listItem>
                            </af:listView>
                        </af:panelGroupLayout>
                        <af:transition triggerType="forwardNavigate" transition="slideLeft"/>
                        <af:transition triggerType="backNavigate" transition="slideRight"/>
                    </af:deck>
                </f:facet>
                <f:facet name="end"/>
                <f:attribute name="endWidth" value="0px"/>
                <f:attribute name="startWidth" value="200px"/>
            </af:pageTemplate>
        </af:form>
    </af:document>
</f:view> 

Categories: Development

Oracle Linux 6: Remove Obsolete Kernels

Darwin IT - Fri, 2015-03-13 12:50
I have several Virtual Machines for our Virtual Course Environments. From time to time, I do an upgrade of the Oracle Linux version. But with every upgrade, Oracle Linux leaves the old kernel files. And in time the root disk is cluttered up. So I want to remove the old kernels. With a little googling, I came up with a discussion thread in Oracle Communities: Oracle Linux Remove Old Kernels (Archived by now).

To me this was quite helpfull. Let me sum up the steps here.

1. List the kernels in the boot menu

First, list the kernels in the boot menu with:
#cd /boot/grub
#cat grub.conf
In my example VM this is:
[root@darlin-vce-db ~]# cd /boot/grub
[root@darlin-vce-db grub]# cat grub.conf
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/mapper/vg_darlinvce-lv_root
# initrd /initrd-[generic-]version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Oracle Linux Server Red Hat Compatible Kernel (2.6.32-431.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/vg_darlinvce-lv_root rd_LVM_LV=vg_darlinvce/lv_root rd_LVM_LV=vg_darlinvce/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us-acentos nomodeset rhgb quiet crashkernel=auto numa=off
initrd /initramfs-2.6.32-431.el6.x86_64.img
title Oracle Linux Server (2.6.32-279.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg_darlinvce-lv_root rd_LVM_LV=vg_darlinvce/lv_root rd_LVM_LV=vg_darlinvce/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us-acentos nomodeset rhgb quiet crashkernel=auto numa=off
initrd /initramfs-2.6.32-279.el6.x86_64.img
title Oracle Linux Server-uek (2.6.32-100.34.1.el6uek.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-100.34.1.el6uek.x86_64 ro root=/dev/mapper/vg_darlinvce-lv_root rd_LVM_LV=vg_darlinvce/lv_root rd_LVM_LV=vg_darlinvce/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us-acentos nomodeset rhgb quiet numa=off
initrd /initramfs-2.6.32-100.34.1.el6uek.x86_64.img
title Oracle Linux Server (2.6.32-131.0.15.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-131.0.15.el6.x86_64 ro root=/dev/mapper/vg_darlinvce-lv_root rd_LVM_LV=vg_darlinvce/lv_root rd_LVM_LV=vg_darlinvce/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us-acentos nomodeset crashkernel=auto rhgb quiet numa=off
initrd /initramfs-2.6.32-131.0.15.el6.x86_64.img
The newest kernel (Oracle Linux 6 Update 5) is at the top. Apparently in my case I have the following kernels:
  • 2.6.32-431
  • 2.6.32-279
  • 2.6.32-100.34.1
  • 2.6.32-131.0.15
Let's make sure I have booted using the latest kernel:
[root@darlin-vce-db grub]# uname -r
2.6.32-431.el6.x86_64
So indeed Linux is started using the 2.6.32-431 kernel.
2. Remove obsolete kernels

Let's remove the last 3 one by one. But before you do so, make a backup of grub.conf, because the removal of the kernels may update your grub.conf and remove the newest kernel's entry. Then you would end up starting up a dated kernel version.

To list the kernel packages of one version:
[root@darlin-vce-db grub]# rpm -qa | grep kernel|grep 100
kernel-uek-2.6.32-100.34.1.el6uek.x86_64
kernel-uek-headers-2.6.32-100.34.1.el6uek.x86_64
kernel-uek-devel-2.6.32-100.34.1.el6uek.x86_64
kernel-uek-firmware-2.6.32-100.34.1.el6uek.noarch

Now, to remove these packages, use yum remove:
[root@darlin-vce-db grub]# yum remove kernel-uek-2.6.32-100.34.1.el6uek.x86_64  kernel-uek-headers-2.6.32-100.34.1.el6uek.x86_64 kernel-uek-devel-2.6.32-100.34.1.el6uek.x86_64 kernel-uek-firmware-2.6.32-100.34.1.el6uek.noarch
Loaded plugins: refresh-packagekit
Setting up Remove Process
Resolving Dependencies
--> Running transaction check
....
Dependencies Resolved

=====================================================================================================================================================================================================
Package Arch Version Repository Size
=====================================================================================================================================================================================================
Removing:
kernel-uek x86_64 2.6.32-100.34.1.el6uek @anaconda-OracleLinuxServer-201105261616.x86_64/6.1 86 M
kernel-uek-devel x86_64 2.6.32-100.34.1.el6uek @anaconda-OracleLinuxServer-201105261616.x86_64/6.1 22 M
kernel-uek-firmware noarch 2.6.32-100.34.1.el6uek @anaconda-OracleLinuxServer-201105261616.x86_64/6.1 3.9 M
kernel-uek-headers x86_64 2.6.32-100.34.1.el6uek @anaconda-OracleLinuxServer-201105261616.x86_64/6.1 2.2 M
Removing for dependencies:
audit-libs-devel x86_64 2.2-2.el6 @anaconda-OracleLinuxServer-201206261930.x86_64/6.3 70 k
compat-gcc-34 x86_64 3.4.6-19.el6 @anaconda-OracleLinuxServer-201105261616.x86_64/6.1 13 M
compat-gcc-34-c++ x86_64 3.4.6-19.el6 @anaconda-OracleLinuxServer-201105261616.x86_64/6.1 84 M
compat-gcc-34-g77 x86_64 3.4.6-19.el6 @anaconda-OracleLinuxServer-201105261616.x86_64/6.1 5.9 M
compat-glibc x86_64 1:2.5-46.2.0.1 @anaconda-OracleLinuxServer-201105261616.x86_64/6.1 6.4 M
compat-glibc-headers x86_64 1:2.5-46.2.0.1 @anaconda-OracleLinuxServer-201105261616.x86_64/6.1 1.9 M
gcc x86_64 4.4.7-4.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 19 M
gcc-c++ x86_64 4.4.7-4.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 11 M
glibc-devel x86_64 2.12-1.132.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 966 k
glibc-headers x86_64 2.12-1.132.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 2.0 M
libcap-ng-devel x86_64 0.6.4-3.el6_0.1 @anaconda-OracleLinuxServer-201105261616.x86_64/6.1 14 k
libcgroup x86_64 0.40.rc1-5.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 321 k
libcgroup-devel x86_64 0.40.rc1-5.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 89 k
libtool x86_64 2.2.6-15.5.el6 @anaconda-OracleLinuxServer-201105261616.x86_64/6.1 1.9 M
oracle-rdbms-server-11gR2-preinstall x86_64 1.0-7.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 32 k
perl-Archive-Extract x86_64 1:0.38-136.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 52 k
perl-CPAN x86_64 1.9402-136.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 663 k
perl-CPANPLUS x86_64 0.88-136.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 767 k
perl-ExtUtils-CBuilder x86_64 1:0.27-136.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 59 k
perl-ExtUtils-Embed x86_64 1.28-136.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 17 k
perl-ExtUtils-MakeMaker x86_64 6.55-136.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 608 k
perl-ExtUtils-ParseXS x86_64 1:2.2003.0-136.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 61 k
perl-File-Fetch x86_64 0.26-136.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 46 k
perl-IPC-Cmd x86_64 1:0.56-136.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 57 k
perl-Module-Build x86_64 1:0.3500-136.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 460 k
perl-Test-Harness x86_64 3.17-136.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 399 k
perl-Test-Simple x86_64 0.92-136.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 184 k
perl-core x86_64 5.10.1-136.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 0.0
perl-devel x86_64 4:5.10.1-136.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 1.8 M
redhat-lsb x86_64 4.0-7.0.1.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 0.0
redhat-lsb-compat x86_64 4.0-7.0.1.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 0.0
redhat-lsb-core x86_64 4.0-7.0.1.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 22 k
redhat-lsb-graphics x86_64 4.0-7.0.1.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 0.0
redhat-lsb-printing x86_64 4.0-7.0.1.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 0.0
systemtap x86_64 2.3-3.0.1.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 46 k
systemtap-devel x86_64 2.3-3.0.1.el6 @anaconda-OracleLinuxServer-201311252058.x86_64/6.5 4.9 M

Transaction Summary
=====================================================================================================================================================================================================
Remove 40 Package(s)

Installed size: 270 M
Is this ok [y/N]: y

Answer with 'y', and wait until the removal is complete.
Repeat this for the other kernels as well.
3. Edit the grub.conf and last refreshments

Edit grub.conf and remove the entries for all the removed kernels, leaving only the entry for the latest kernel:
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/mapper/vg_darlinvce-lv_root
# initrd /initrd-[generic-]version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Oracle Linux Server Red Hat Compatible Kernel (2.6.32-431.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/vg_darlinvce-lv_root rd_LVM_LV=vg_darlinvce/lv_root rd_LVM_LV=vg_darlinvce/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us-acentos nomodeset rhgb quiet crashkernel=auto numa=off
initrd /initramfs-2.6.32-431.el6.x86_64.img
For each entry, the line starting with 'title Oracle Linux Server...' is the first line of the entry.
After editing grub.conf, you can reboot the server. If you're running inside a VirtualBox image, update the VirtualBox Guest Additions. I find it convenient to back up the Guest Additions disk to a folder on one of the disks: when you update the VM to a new Linux version, the Guest Additions aren't compiled against the latest kernel anymore, so the X server might not come up again, and mounting the Guest Additions disk might not work. It is then convenient to have a copy available within the VM to do a recompile from the console.
4. Re-install kernel-dependent packages

I found that tools like gcc, gcc-c++, etc. were removed when removing my first kernel. You can use yum to re-install them with yum install, e.g. yum install gcc. Yum will figure out the dependencies for you. What I re-installed was:

yum install gcc gcc-c++ glibc-devel glibc-headers perl kernel-headers compat-gcc-34 compat-gcc-34-c++ compat-gcc-34-g77 compat-glibc-headers libcap-ng-devel libtool systemtap systemtap-devel
These packages were installed and are needed because of Oracle Database and Fusion Middleware. The VirtualBox Guest Additions also need some of them.

Secure Boot support with Oracle Linux 7.1

Wim Coekaerts - Fri, 2015-03-13 12:04
Update : as my PM team pointed out to me - it's listed as Tech Preview for OL7.1 not GA/production in the release notes - just making sure I add this disclaimer ;)

Another feature introduced with Oracle Linux 7.1 is support for Secure Boot.

If Secure Boot is enabled on a system (typically a desktop, but in some cases also a server), the system can have an embedded certificate (in firmware). This certificate can be one that's uploaded to the system by the admin or it could be one provided by the OEM/OS vendor. In many cases, in particular newer desktops, the system already contains the Microsoft key (there can be more than one certificate uploaded). When the firmware loads the boot loader, it verifies/checks the signature of this bootloader with the key stored in firmware before continuing. This signed bootloader (at this point trusted to continue) will then load a signed kernel, or a signed second stage boot loader, and verify it before starting and continuing the boot process. This creates what is called a chain of trust through the boot process.

We ship a 1st stage bootloader with Oracle Linux 7.1 which is a tiny "shim" layer that is signed by both Microsoft and Oracle. So if a system comes with Secure Boot support, and already ships the Microsoft PK, then the shim layer will be started, verified, and if it passes verification, it will then load grub2 (the real bootloader). grub2 is signed by us (Oracle). The signed/verified shim layer contains the Oracle key and will validate that grub2 is ours (signed); if verification passes, grub2 will load the Oracle Linux kernel, and the same process takes place: our kernel is signed by us (Oracle) and grub2 will validate the signature prior to allowing execution of the kernel. Once the kernel is running, all kernel modules that we ship as part of Oracle Linux, whether standard kernel modules included in the kernel RPM or external kernel modules used with Oracle Ksplice, are also signed by Oracle, and the kernel will validate the signature prior to loading these kernel modules.

Loading and verification of signed kernel modules is enforced by adding enforcemodulesig=1 to the grub kernel option line. In enforcing mode, any kernel module that is not signed by Oracle will fail to load.
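
A minimal sketch of turning enforcing mode on under grub2 on Oracle Linux 7 (the paths follow the usual OL7/RHEL7 layout; back the files up first):

# append the option to the default kernel command line
sudo sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 enforcemodulesig=1"/' /etc/default/grub
# regenerate grub.cfg (on UEFI systems the file lives under /boot/efi/EFI/redhat/)
sudo grub2-mkconfig -o /boot/grub2/grub.cfg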

If a system has Secure Boot support but a sysadmin wants to use the Oracle signature instead, we will make our certificate available for secure download from Oracle; it can then be uploaded into the firmware key database.
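
On systems that support the MOK (Machine Owner Key) mechanism, enrolling such a certificate could look like the following (a sketch; the certificate file name is an assumption):

sudo mokutil --import oracle-secureboot-cert.der   # queues the key for enrollment
# reboot: the MOK manager firmware UI asks for confirmation before the key is added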

node-oracledb 0.4.1 is on GitHub (Node.js driver for Oracle Database)

Christopher Jones - Fri, 2015-03-13 11:26

Just a few small changes in this update of the Node.js driver for Oracle Database.

  • Support for external authentication was added. This closes Issue #15.

  • The isAutoCommit flag now works with query execution. This is useful when multiple DML statements are executed followed by a SELECT statement: it avoids the round trip to the database that an explicit call to commit() would add. (A sketch follows this list.)

  • Added AIX build support to package.json. Thanks to Hannes Prirschl for submitting a pull request.

  • Error messages produced when using properties that are out of range have been improved.

  • Numerous API doc updates. Thanks to Greg Huang, BuiltInParris, mello151, and others for instigating some of these changes.

  • Fixed a bug: When terminate() of a connection pool fails because connections have not yet been closed, subsequent use of release() to close those connections no longer gives an error "ORA-24550: Signal Received".

  • Thanks go to krishnanm86 for a code cleanup pull request.
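
For anyone wondering what the isAutoCommit pattern looks like in practice, here is a minimal sketch (the credentials, connect string, and table name are placeholders, and the flag name follows the 0.4.x API):

var oracledb = require('oracledb');

oracledb.getConnection(
  { user: "scott", password: "tiger", connectString: "localhost/orcl" },
  function (err, connection) {
    if (err) { console.error(err.message); return; }
    connection.execute(
      "INSERT INTO test_tab (id) VALUES (:id)",
      [101],
      { isAutoCommit: true },  // commit rides along with this execute: no separate commit() round trip
      function (err, result) {
        if (err) { console.error(err.message); return; }
        console.log("Rows inserted:", result.rowsAffected);
        connection.release(function (err) {
          if (err) { console.error(err.message); }
        });
      });
  });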

Bigger features (e.g. LOBs and RETURNING INTO) that I know you really, really want are still being worked on - have patience!

HEUG Alliance 15 - What Looks Good To Me

Floyd Teter - Fri, 2015-03-13 10:59
The Higher Education User Group's Alliance 15 software conference kicks off this Sunday, March 15, in Nashville, Tennessee.  You're going, right?

I'm flying out early this weekend myself. I just have a few conference things to do to get ready for my little role in this big Oracle user group conference.

Looks to be a great conference.  I'm personally planning to dive into five areas of focus:  

  1. increasing my depth of understanding about customers and processes in Higher Education; 
  2. learning and evangelizing about Oracle Cloud Application services within Higher Education; 
  3. joining the discussion around Oracle's PeopleSoft Campus Solutions Self Service Mobile (especially the new 5.0 release) and the Oracle Mobile Applications Framework; 
  4. learning more about the application of User Experience design patterns, guidelines and the like to the unique set of use cases presented within Higher Education; 
  5. getting updated with the latest news on Oracle's roadmaps for their various Higher Education products.

I suspect that an underlying theme about the need for higher levels of successful student engagement will encompass all five of these focus areas in one way or another.

I'll be presenting a few sessions of my own...special sessions not found in the Alliance Agenda Builder.  They're all in the Delta Island C room at the Gaylord Hotel and I can promise they'll be absolutely brilliant ;)

  1. Taleo Cloud Demo: Tuesday, March 17 12 noon-1 p.m.
  2. Financials Cloud Demo: Tuesday March 17, 2:00-3:00 p.m.
  3. HCM Cloud Demo: Wednesday, March 18, 7:45-8:45 a.m.


So, other than those sessions where I'm presenting, I looked over the catalog of sessions with my five areas of focus in mind. What follows is a list of the sessions that look good to me. I didn't include session times or locations, as you can get to those details via the Agenda Builder.  If you do look up my list, you'll see there are time conflicts involved - a sign of a good conference is that you have to make difficult choices about how to spend your time - so you won't be able to catch all of these sessions...these are just the sessions that piqued my interest.  Maybe you'll be interested too?

So that's what looks good to me at Alliance 15.  Let me know what looks good to you if you're going, and let me know how it really turned out for you after the conference...love those comments!

Parallel Execution -- 2b PX Servers

Hemant K Chitale - Fri, 2015-03-13 09:05
Continuing my previous post, here I demonstrate the use of V$SQLSTATS.PX_SERVERS_EXECUTIONS and a couple of issues around it.

I have restarted the database.

[oracle@localhost ~]$ sqlplus hemant/hemant

SQL*Plus: Release 11.2.0.2.0 Production on Fri Mar 13 22:49:20 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

HEMANT>show parameter cpu

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     4
parallel_threads_per_cpu             integer     4
resource_manager_cpu_allocation      integer     4
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
1

HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
         1                    16 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>
HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
         2                    32 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

[Note: to understand why the executions took 16 PX Servers in spite of the degree on the table being 1, see this post.]
So we see that PX_SERVERS_EXECUTIONS shows cumulative statistics. Let's try a slight twist.

HEMANT>connect / as sysdba
Connected.
SYS>alter system set parallel_max_servers=8;

System altered.

SYS>connect hemant/hemant
Connected.
HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
         3                    40 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

Because I set PARALLEL_MAX_SERVERS to 8, my query on Large_Table could take only 8 PX Servers at the next execution.  V$SQLSTATS.PX_SERVERS_EXECUTIONS now shows a cumulative count of 40 for 3 executions. There is no way to determine how many PX Servers were used in each of the 3 executions, because the history of executions is not maintained.
(In my controlled experiment we can deduce that the 3rd execution took 8 PX Servers only because we already know that the first 2 executions took a cumulative count of 32 PX Servers: subtracting 32 from 40 leaves 8 for the 3rd execution.)
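
The closest V$SQLSTATS alone can get you is an average per execution; a quick sanity-check query (the column alias is mine):

HEMANT>select executions, px_servers_executions,
  2         round(px_servers_executions / nullif(executions,0), 1) avg_px_per_exec
  3    from v$sqlstats
  4   where sql_id = '8b0ybuspqu0mm';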

What happens if the SQL is invalidated?

HEMANT>alter table large_table parallel 4;

Table altered.

HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
         1                     8 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

The ALTER TABLE, being DDL, invalidated the query on Large_Table. So V$SQLSTATS also got reset: EXECUTIONS reset to 1 and PX_SERVERS_EXECUTIONS reset to 8.

Categories: DBA Blogs

Not NULL Constraint Influences Access Path

Oracle in Action - Thu, 2015-03-12 23:12


The optimizer can make use of an explicitly defined NOT NULL constraint to take advantage of an index and avoid a full table scan, since a B-tree index stores only not-null values.
When count(constant) or count(*) is queried, we want to count the number of rows in the table. Hence, if there is a column which is defined as NOT NULL and has an index on it, the number of entries in the index is bound to be the same as the number of rows, and the query optimizer can use the index to count the rows in the table.

Similarly, when count(not-nullable-column) is queried, we want to count the number of rows having not-null values in that column. Since the column has a NOT NULL constraint on it, every row in the table has a not-null value in it, and count(not-nullable-column) is the same as count(*). As a result, the query optimizer can use the index on the column to process the query.
In fact, in both cases above, any B-tree index containing at least one not-nullable column can serve the purpose.

When count(nullable-column) is queried, we want to count the number of rows having not-null values in that column. If we have an index on the column, the index stores only the not-null values and hence can be used effectively by the query optimizer to give the result.
In fact, the optimizer can use any index containing the nullable column for this purpose.
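
As a quick sketch of that last point (TEST_IDX2 is a hypothetical index, not part of the demo that follows), a composite index that merely contains the nullable column would also qualify:

SQL> create index hr.test_idx2 on hr.test(nullable, notnull);
SQL> select count(nullable) from hr.test;
     -- the optimizer is free to use TEST_IDX2 here as well: every row with a
     -- non-null value in NULLABLE has an entry in that index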

To demonstrate the above functionality, I have created a table HR.TEST with two columns:
  - NOTNULL, which has a NOT NULL constraint
  - NULLABLE, which holds the same data as column NOTNULL but has not been declared NOT NULL, and which has a B-tree index on it

SQL>drop table hr.test purge;
    create table hr.test (notnull number not null, nullable number);
    insert into hr.test select rownum, rownum from all_tables;
    create index hr.test_idx on hr.test(nullable);
    exec dbms_stats.gather_table_stats ('HR','TEST', cascade => true);

Now I will query count for various arguments and check whether the optimizer can use the index on the NULLABLE column.

Note that to process count(*), count(1) and count(notnull), the query optimizer uses a Full Table Scan. Although the column NULLABLE has non-null values in all rows, it has not been explicitly declared NOT NULL, so the optimizer does not know that the number of entries in the index correctly reflects the row count, and hence does not use the index.

SQL>select count(*) from hr.test;
            select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID 1mat065c25crk, child number 0
-------------------------------------
select count(*) from hr.test

Plan hash value: 1950795681
-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |     3 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |   TABLE ACCESS FULL| TEST |   108 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------

SQL>select count(1) from hr.test;
    select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID gzpsn7ff3ncmc, child number 0
-------------------------------------
select count(1) from hr.test

Plan hash value: 1950795681
-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |     3 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |   TABLE ACCESS FULL| TEST |   108 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------

SQL>select count(notnull) from hr.test;
    select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID 6kxdzxbac62b4, child number 0
-------------------------------------
select count(notnull) from hr.test

Plan hash value: 1950795681
-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |     3 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |   TABLE ACCESS FULL| TEST |   108 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------

To process count(nullable), the optimizer uses the index on column NULLABLE, because we want to count the not-null values in the column and a B-tree index stores only not-null values.

SQL> select count(nullable) from hr.test;
     select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID bz8rxw5rmmv8g, child number 0
-------------------------------------
select count(nullable) from hr.test

Plan hash value: 2284640995
-----------------------------------------------------------------------------
| Id  | Operation        | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT |          |       |       |     1 (100)|          |
|   1 |  SORT AGGREGATE  |          |     1 |     4 |            |          |
|   2 |   INDEX FULL SCAN| TEST_IDX |   108 |   432 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------

Now I will declare a NOT NULL constraint on column NULLABLE.

SQL> alter table hr.test modify (nullable not null);

Now if we query count(*), count(1), count(notnull) and count(nullable), the optimizer is able to avoid a Full Table Scan by making use of the index on the NULLABLE column in all four cases, since the indexed column NULLABLE has now been declared NOT NULL and the optimizer knows that the entries in the index represent all the rows of the table.

SQL>select count(*) from hr.test;
    select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------- 
SQL_ID 1mat065c25crk, child number 0
-------------------------------------
select count(*) from hr.test

Plan hash value: 2284640995
---------------------------------------------------------------------
| Id  | Operation        | Name     | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT |          |       |     1 (100)|          |
|   1 |  SORT AGGREGATE  |          |     1 |            |          |
|   2 |   INDEX FULL SCAN| TEST_IDX |   108 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------

SQL>select count(1) from hr.test;
    select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------- 
SQL_ID gzpsn7ff3ncmc, child number 0
-------------------------------------
select count(1) from hr.test

Plan hash value: 2284640995
---------------------------------------------------------------------
| Id  | Operation        | Name     | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT |          |       |     1 (100)|          |
|   1 |  SORT AGGREGATE  |          |     1 |            |          |
|   2 |   INDEX FULL SCAN| TEST_IDX |   108 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------

SQL>select count(notnull) from hr.test;
    select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------- 
SQL_ID 6kxdzxbac62b4, child number 0
-------------------------------------
select count(notnull) from hr.test

Plan hash value: 2284640995
---------------------------------------------------------------------
| Id  | Operation        | Name     | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT |          |       |     1 (100)|          |
|   1 |  SORT AGGREGATE  |          |     1 |            |          |
|   2 |   INDEX FULL SCAN| TEST_IDX |   108 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------

SQL> select count(nullable) from hr.test;
     select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------- 
SQL_ID bz8rxw5rmmv8g, child number 0
-------------------------------------
select count(nullable) from hr.test

Plan hash value: 2284640995
---------------------------------------------------------------------
| Id  | Operation        | Name     | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT |          |       |     1 (100)|          |
|   1 |  SORT AGGREGATE  |          |     1 |            |          |
|   2 |   INDEX FULL SCAN| TEST_IDX |   108 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------

Hence, it is advisable to declare NOT NULL constraints on relevant columns so that the optimizer can choose index access in relevant cases.

References:
Troubleshooting Oracle Performance (Second Edition) by Christian Antognini
Copyright © ORACLE IN ACTION [Not NULL Constraint Influences Access Path], All Right Reserved. 2015.

The post Not NULL Constraint Influences Access Path appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Oracle Linux 7.1 and MySQL 5.6

Wim Coekaerts - Thu, 2015-03-12 21:47
Yesterday we released Oracle Linux 7 update 1. The individual RPM updates are available from both public-yum (our free, open, public yum repo site) and Oracle Linux Network. The install ISOs can be downloaded from My Oracle Support right away, and the public downloadable ISOs will be made available in the next few days from the usual e-delivery site. The ISOs will also, as usual, be mirrored to other sites that make Oracle Linux freely available.

One update in Oracle Linux 7 update 1 that I wanted to point out is the convenience of upgrading to MySQL 5.6 at install time. Oracle Linux 7 GA includes MariaDB 5.5 (due to our compatibility commitment in terms of shipping the exact same packages), and we added MySQL 5.6 RPMs on the ISO image (and in the yum repo channels online). So while it was easy for someone to download and upgrade from MariaDB 5.5 to MySQL 5.6, there was no install option. Now with 7.1 we have included an installation option for MySQL: you can decide which database to install in the installer, or through kickstart with @mariadb or @mysql as a group. Again, MariaDB 5.5 is also part of Oracle Linux 7.1, and any users looking for strict package compatibility will find exactly that. All we have done is make a better alternative (1) conveniently available and integrated and (2) without any compatibility risks whatsoever, so you can easily run the real standard that is MySQL. A bug fix, if you will.
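
For the kickstart route, a minimal %packages fragment might look like this (a sketch; everything outside the group selection is omitted):

%packages
@core
# use @mariadb instead to keep the Oracle Linux 7 GA default
@mysql
%end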

I have a little screenshot available here.

Enjoy.

Hackathon Provides Opportunity for Collaboration and Innovation

Fishbowl team members competed in our third annual Hackathon over the weekend. Our consultants split up into four groups to create an innovative solution, and were given just over 24 hours to complete the task. The goal was to create working software that could be put to use in future client engagements; some of our most creative products and services originally derived from Hackathon events.


Here’s a summary of what the teams came up with this year:

Team 1: Integrated Fishbowl’s Control Center with Oracle WebCenter Portal

Team 2: Integrated the Google Search Appliance with Oracle’s Document Cloud Service

Team 3: Worked to connect WebCenter Portal with Oracle’s new Document Cloud Service

Team 4: Got Fishbowl’s ControlCenter to run in the cloud with Google Compute Engine

At the end of the Hackathon, all participants voted on the best solution. The winning team was #2 (GSA-Doc Cloud Service integration), made up of Kim Negaard, Andy Weaver, and Mike Hill. Although there could only be one winner, overall consensus was that each team came up with something extremely useful that could help solve common problems we’ve run into while working with WebCenter and the GSA. If you’re interested in learning more about any of the team’s programs, feel free to contact us at info@fishbowlsolutions.com.

The post Hackathon Provides Opportunity for Collaboration and Innovation appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other