
Feed aggregator

Oracle OpenWorld Rejections : #TeamRejectedByOracleOpenWorld

Tim Hall - Sat, 2015-07-11 03:50

Once again it is Oracle OpenWorld paper rejection season. :)

Invariably, we conference types start to have a bit of a moan about being rejected, hence my little jibe #TeamRejectedByOracleOpenWorld. In case people lost sight of this being a joke, here was one of my tweets on the subject.

“Setting up a helpline for #TeamRejectedByOracleOpenWorld to deal with all us people who can’t cope with not being heard for 5 mins. :)”

The reaction to these tweets is quite interesting, because some in the community are stunned by the people getting rejected. In reality it shouldn’t be a surprise to anyone. Jonathan Lewis summed the situation up nicely with the following tweet.

“You’re confusing OOW with a user group event. Different organisations, reasons, and balance”

If I’m honest, presenting is not high on my list of desires where OpenWorld is concerned. There is too much to do anyway, without having to worry about preparing for talks. If someone asks me to get involved in a panel session, RAC Attack or some similar thing I’m happy to help out, but if I do none of the above, I will still be rushed off my feet for a week.

The Oracle ACE program is allocated a few slots each year. Some people need to present or their company won’t allow them to attend. Others want the “profile” associated with presenting at OpenWorld. Neither of these things affect me, so I typically don’t submit for the ACE slots. I would rather see them go to someone who really does want them. I get plenty of opportunities to speak. :)

If you really want to speak at conferences, your focus should be on user group events. Getting to speak at something like OOW can be a nice treat for some people, but it probably shouldn’t be your goal. :)

Cheers

Tim…

Oracle OpenWorld Rejections : #TeamRejectedByOracleOpenWorld was first posted on July 11, 2015 at 10:50 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

YouTube : Rags to Riches in 1 Week?

Tim Hall - Sat, 2015-07-11 00:58

If you’ve followed me on Twitter you will have seen me posting links to videos on my YouTube channel. You can see me talking about starting the channel in the first video.

One week and 5 videos in and I’ve just hit 50 subscribers. Watch out PewDiePie!

One thing I didn’t mention in that video was my hopes/expectations as far as subscribers are concerned. As I said in one of my writing tips posts, Oracle is a niche subject on the internet. If you put out some half-decent content on a YouTube gaming or fitness channel, you would probably expect to get a few thousand subscribers fairly quickly. That’s not the case for an Oracle channel. Before I started this YouTube channel I did a little research and the biggest Oracle-related channel I could find was about 30,000 subscribers and that was Oracle’s main YouTube channel. After that some were knocking around 1000-4000 subscribers. Below that were a bunch of channels that were pulling double or triple figures of subscribers. Starting an Oracle-related channel is *not* a good idea if your master plan is to dominate YouTube! :)

OK. With that bullshit out of the way, how have I found my first week?

  • Making YouTube videos is hard! It takes a long time. I’m estimating about 1 hour of effort per minute of footage. The first 3 minute video took 3 days, but that included learning the technology and getting to grips with editing. Hopefully I’ll get a bit slicker as time goes on. :)
  • Doing the vocal for a video is a nightmare. After a certain number of retakes your voice ends up sounding so flat you start to wonder if you are going to send people to sleep. I listen back to my voice on some of the videos and it makes me cringe. It’s monotone and devoid of personality (insert insult of your choice here). As I get better at the recording thing, I’m hoping the number of retakes will reduce and my vocal will sound less like I’m bored shitless. :)
  • I love the fact I can do “quick” hit-and-run videos and not feel guilty about not including every detail. I’m putting links back to my articles, which contain more detail and most importantly links back to the documentation, so I’m happy that these videos are like little tasters.
  • I’m being a bit guarded about the comments section at the moment. When I look at other channels, their comments are full of spam and haters. I can’t be bothered with that. I’ll see how my attitude to comments develops over time.
  • I’m hoping to do some beginner series for a few areas, which I will build into playlists. This probably won’t be of much interest to regular followers of the blog and website, but it will come in handy for me personally when I’m trying to help people get started, or re-skilled into Oracle. I might be doing some of that at work, hence my interest. :)
  • I’ve tried to release a burst of videos to get the thing rolling, but I don’t know how often I will be able to upload in future. Where Oracle is concerned, the website is my main priority. Then the blog. Then the conference thing. Then YouTube. The day job and general life have to fit in around that somewhere too. This is always going to be a labour of love, not a money spinner, so I have to be realistic about what I can achieve.

So there it is. One week down. Five videos. Four cameos by other members of the Oracle community. Superstardom and wealth beyond my wildest dreams are just around the corner… Not!

Cheers

Tim…

Note to self: Why is this a blog post, not another video? :(

YouTube : Rags to Riches in 1 Week? was first posted on July 11, 2015 at 7:58 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

SLOB 2.3 User Guide

Kevin Closson - Fri, 2015-07-10 19:13

SLOB 2.3 is releasing within the next 48 hours. In case anyone wants to read about all the new features here is a link to the SLOB 2.3 User Guide:

SLOB 2.3 User Guide (pdf)

 


Filed under: oracle

Code Studio rocks; diversity does, too

Catherine Devlin - Fri, 2015-07-10 17:08

If you want to quickly get some kids introduced to computer programming concepts, you could do a lot worse than using Code Studio from code.org. That's what I did the last couple weeks - took two hours to lightly shepherd the Dayton YWCA day camp through a programming intro.

It's really well-organized and easy to understand - frankly, it pretty much drives itself. It's based on block-dragging for turtle graphics and/or simple 2D games, all easy and appealing stuff. (They even got their turtle graphics branded as the sisters from Frozen ice-skating!) I didn't need to do much more than stand there and demonstrate that programmers actually exist in the flesh, and occasionally nudge a student over a bump. Though, by pair programming, they did most of the nudging themselves.

Here's most of my awesome class. Sorry I'm as bad at photography as at CSS.

Hey - we got demographics, huh? Right - if you announce that you're teaching a coding class through your usual geeky circles, they spread the word among their circles and recruit you a class that looks pretty much like the industry already looks. And if you seek a venue through your geeky circles, the usual suspects will step up to host. In badly segregated Dayton, that means "as far from the colored parts of town as possible." That's less than inviting to the people who don't live there.

But if you partner with groups that already have connections in diverse communities - like the YWCA, which makes anti-racism one of its keystones - getting some fresh faces can be pretty easy! And there are venues available outside the bleached-white exurbs you're used to - you just need to think to look.

Another benefit of Code Studio is that it's entirely web-based, so you don't need to restrict your demographics to "kids whose parents can afford to get them laptops". The public library's computer classroom did the job with flying colors.

Seriously, this was about the easiest outreach I've ever done. I'm working on the follow-up, but I think I'll be able to find further lazy options. Quite likely it will leverage CodeAcademy. So, what's your excuse for not doing it in your city?

Now, in other news: You are running out of time to register for PyOhio, a fantastic, friendly, free, all-levels Python conference, and my pride and joy. The schedule is amazing this year, and for better or for worse, I'm keynoting. So please come and add to my terror.

Hot Deployment in JDeveloper 12c - Don't Stop/Start Your App

Shay Shmeltzer - Fri, 2015-07-10 15:45

Old habits are hard to get rid of, and I still see long-time users of JDeveloper (and also many new users) stopping/starting their application on the embedded WebLogic each time they make a change or addition to their code.

Well you should stop it! (I mean stop stopping the application).

For a while now, JDeveloper has supported hot deployment, which means that for most changes to your code you just need to do a save-all followed by a rebuild of your viewController project - and that's it.

You can then go to your browser and reload your page - and the changes will be reflected there.  This will not only save you the time it takes to undeploy and redeploy your app, it will also reduce the amount of memory you use since frequent redeployment of the app on the embedded WebLogic leads to bigger memory consumption.

In the demo below I use JDeveloper 12.1.3 to show you that I can just do the save->rebuild and pick up:

  • Changes in the JSF file
  • Changes to the JSF configuration file adfc-config.xml
  • New classes that are added to both the model and viewController projects
  • Changes to the ADF configuration files (pagedefs, data binding, data controls)

So for most cases, you should be covered with this hot-deployment capability.

There are some cases that will require a redeploy of the application (for example if you add a new skin css file, or if you change some runtime configuration of your app in web.xml) but for most cases you don't need to continue with the stop/start habit.

Categories: Development

Promising Research Results On Specific Forms Of Adaptive Learning / ITS

Michael Feldstein - Fri, 2015-07-10 12:45

By Phil Hill

Recently I described an unpublished study by Dragan Gasevic and team on the use of the Knowillage / LeaP adaptive platform.[1] The context of the article was D2L’s misuse of the results, but the study itself is interesting in terms of its findings that adaptive learning usage (specifically LeaP in addition to Moodle within an Intro to Chemistry course) can improve academic performance. I will share more when and if the results become public.

If we look to published research reports there are other studies that back up the potential of adaptive approaches, but the most promising results appear to be for a subset of adaptive systems that provide not just content selection but also tutoring. Last year a research team from Simon Fraser University and Washington State University published a meta-analysis on Intelligent Tutoring Systems (ITS), which they described as having origins in 1970 with the development of SCHOLAR.[2] The study looked at 107 studies involving 14,321 participants and found:

The use of ITS was associated with greater achievement in comparison with teacher-led, large-group instruction (g = .42), non-ITS computer-based instruction (g = .57), and textbooks or workbooks (g = .35). There was no significant difference between learning from ITS and learning from individualized human tutoring (g = –.11) or small-group instruction (g = .05). Significant, positive mean effect sizes were found regardless of whether the ITS was used as the principal means of instruction, a supplement to teacher-led instruction, an integral component of teacher-led instruction, or an aid to homework. Significant, positive effect sizes were found at all levels of education, in almost all subject domains evaluated, and whether or not the ITS provided feedback or modeled student misconceptions. The claim that ITS are relatively effective tools for learning is consistent with our analysis of potential publication bias.

Relationship of ITS and Adaptive Learning Software

Unlike most marketing and media descriptions of Adaptive Learning, the report is quite specific on defining what an Intelligent Tutoring System is and isn’t.

An ITS is a computer system that for each student:

  1. Performs tutoring functions by (a) presenting information to be learned, (b) asking questions or assigning learning tasks, (c) providing feedback or hints, (d) answering questions posed by students, or (e) offering prompts to provoke cognitive, motivational or metacognitive change
  2. By computing inferences from student responses constructs either a persistent multidimensional model of the student’s psychological states (such as subject matter knowledge, learning strategies, motivations, or emotions) or locates the student’s current psychological state in a multidimensional domain model
  3. Uses the student modeling functions identified in point 2 to adapt one or more of the tutoring functions identified in point 1

There are plenty of computer-based instruction (CBI) methods out there, but ITS relies on a multidimensional model of the student in addition to a model of the subject area (domain model). The report also calls out that CBI approaches that only model the student in one dimension of item response theory (IRT, more or less the model of a student’s ability to correctly answer specific questions) are not ITS in their definition. IRT can be one of the dimensions but not the only dimension.

A 2014 meta-analysis referred to by the above report further clarifies the conditions for a system to be an ITS as follows [emphasis added]:

VanLehn (2006) described ITS as tutoring systems that have both an outer loop and an inner loop. The outer loop selects learning tasks; it may do so in an adaptive manner (i.e., select different problem sequences for different students) based on the system’s assessment of each individual student’s strengths and weaknesses with respect to the targeted learning objectives. The inner loop elicits steps within each task (e.g., problem-solving steps) and provides guidance with respect to these steps, typically in the form of feedback, hints, or error messages.

For the sloppy field of Adaptive Learning, this means that the study looks at systems that model students, provide immediate feedback to students, and provide hints and support to students as they work through a specific task (inner loop). Adaptive Learning systems that only change the content or tasks presented to students adaptively (outer loop) do not qualify. Some examples of Adaptive Learning / ITS systems include McGraw-Hill’s ALEKS and AutoTUTOR. Knowillage / LeaP is an example of a system that is not an ITS.

Promising Findings

The results showed “the use of ITS produced moderate, statistically significant mean effect sizes” compared to large-group human instruction, individual CBI, and textbooks / workbooks. The results showed no statistically significant mean effect sizes compared to small-group human instruction and individual tutoring. In other words, the study shows improvements of ITS over large lecture classes, non-ITS software tools, and textbooks / workbooks but no real difference with small classes or individual tutors.

[Figure 1: ITS effect sizes compared with other modes of instruction]

What is quite interesting is that the results hold across multiple intervention approaches. Using ITS as Principal instruction, Integrated class instruction, Separate in-class activities, Supplementary after-class instruction, or Homework gives similar positive results.

Why Does ITS Give Positive Results?

The report hypothesizes the primary reasons that ITS seems to provide positive results as follows [formatting added, excerpted]:

[ITS shared characteristics with other forms of CBI] Specifically, they have attributed the effectiveness of CBI to:

  • greater immediacy of feedback (Azevedo & Bernard, 1995),
  • feedback that is more response-specific (Sosa, Berger, Saw, & Mary, 2011),
  • greater cognitive engagement (Cohen & Dacanay, 1992),
  • more opportunity for practice and feedback (Martin, Klein, & Sullivan, 2007),
  • increased learner control (Hughes et al., 2013), and
  • individualized task selection (Corbalan, Kester, & Van Merriënboer, 2006).

[snip] The prior quantitative reviews also concluded that using ITS is associated with greater achievement than using non-ITS CBI. We hypothesize that multidimensional student modeling enables ITS to outperform non-ITS CBI on each of its advantages cited in the previous paragraph.

[snip] ITS may also be more effective than non-ITS CBI in the sense that ITS can extend the general advantages of CBI to a wider set of learning activities. For example, the ability to score and provide individualized comments on a student’s essay would extend the advantage of immediate feedback well beyond what is possible in non-ITS CBI.

Student modeling also enables ITS to interact with students at a finer level of granularity than test-and-branch CBI systems.

These are very encouraging results for the field of ITS and a subset of Adaptive Learning. I view the results not as saying adaptive learning is the way to go, but rather as evidence that adaptive learning applied in a tutoring role can improve academic performance in the right situations.

We need more evidence-based evaluation of different teaching strategies and edtech applications.

  1. When the study started, Knowillage was an independent company; mid-way through the study, D2L bought Knowillage and renamed the product LeaP.
  2. I would link to G+ post by George Station here if it were not for the ironic impossibility of searching within that platform.

The post Promising Research Results On Specific Forms Of Adaptive Learning / ITS appeared first on e-Literate.

Clustered columnstore index and memory management

Yann Neuhaus - Fri, 2015-07-10 10:46

A few weeks ago, I had the opportunity to give a session about the clustered columnstore index (CCI) feature at our In-Memory event dedicated to In-Memory technologies for Microsoft SQL Server, Oracle and SAP HANA. During the session, I explained the improvement made by Microsoft in SQL Server 2014 with the introduction of the new clustered columnstore index.

The CCI includes a new structure that allows update operations: the delta store. Insert operations go directly into the delta store, delete operations are logical and go into the deleted bitmap in the delta store, and update operations are split into two basic operations: a DELETE followed by an INSERT. I was very interested in how SQL Server deals with both structures (delta store and columnstore) and memory in different scenarios. This blog post is the result of my studies and will probably interest those who like internal stuff. In fact, I discussed it with one of my (Oracle) friends and he asked me some interesting questions about the CCI and memory management.

First of all, let’s begin with the storage concept: the delta store consists of traditional row-based storage, unlike the columnstore index which uses column-oriented storage. The two structures are managed differently by SQL Server and have their own memory space - respectively the CACHESTORE_COLUMNSTOREOBJECTPOOL for the columnstore structure and the traditional buffer pool (BP) for the row store structure. When columnstore data is fetched from disk to memory, it comes first into the BP and then into the columnstore memory pool. We can get information about the columnstore memory pool by using the following query:

 

select
    type,
    name,
    memory_node_id,
    pages_kb,
    page_size_in_bytes
from sys.dm_os_memory_clerks
where type = 'CACHESTORE_COLUMNSTOREOBJECTPOOL';
go

 

[Screenshot: columnstore object pool memory clerk query results]

 

First scenario

We’ll see how SQL Server behaves by reading data exclusively from the delta store. Let’s begin with a pretty simple table:

 

-- create table test_CCI
if object_id('test_CCI', 'U') is not null
    drop table test_CCI;
go

create table test_CCI
(
    id int not null identity(1,1),
    col1 char(10) not null default 'col_'
);
go

 

Next, let’s create a CCI that will include 1 compressed row group and 1 delta store (open state):

 

set nocount on;

-- insert 1000 rows
insert test_CCI default values;
go 1000

-- create CCI
create clustered columnstore index [PK__test_CCI__3213E83F3A6FE3AC] on test_CCI
go

-- insert 1 row in order to create a delta store (OPEN state)
insert test_CCI default values;
go 1

 

Let’s have a look at the CCI row group’s information:

 

select
    object_name(object_id) as table_name,
    index_id,
    row_group_id,
    delta_store_hobt_id,
    state_description as [state],
    total_rows,
    deleted_rows,
    size_in_bytes
from sys.column_store_row_groups
where object_id = object_id('test_CCI');
go

 

[Screenshot: CCI row group configuration]

 

Let’s execute a first query that will fetch the record from the delta store:

 

dbcc dropcleanbuffers;

select max(id)
from dbo.test_CCI
where id = 1001

 

Let’s have a look at the memory cache entries related to the CCI memory pool:

 

select
    name,
    in_use_count,
    is_dirty,
    pages_kb,
    entry_data,
    pool_id
from sys.dm_os_memory_cache_entries
where type = 'CACHESTORE_COLUMNSTOREOBJECTPOOL';
go

 

[Screenshot: columnstore object pool cache entries (empty)]

No entries, and this is what I expected because the data comes only from the delta store and the buffer pool is the only cache concerned in this scenario. Another important point: segments are eliminated directly at the disk level. To prove it, I created an extended event session to capture segment elimination information as follows:

 

CREATE EVENT SESSION [cci_segment_elimination] ON SERVER
ADD EVENT sqlserver.column_store_segment_eliminate
(
    WHERE ([sqlserver].[database_name] = N'db_test')
)
ADD TARGET package0.event_file
(
    SET filename = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Log\cci_segment_elimination'
)
WITH
(
    MAX_MEMORY = 4096 KB,
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 30 SECONDS,
    MAX_EVENT_SIZE = 0 KB,
    MEMORY_PARTITION_MODE = NONE,
    TRACK_CAUSALITY = OFF,
    STARTUP_STATE = OFF
)
GO

 

And after looking at the extended event file, I noticed that the segment was eliminated by SQL Server as expected.

 

[Screenshot: segment elimination captured by the extended event session]

 

The hobt_id value relates to the compressed segment in the columnstore index:

 

select
    partition_id,
    hobt_id,
    column_id,
    segment_id,
    row_count,
    min_data_id,
    max_data_id
from sys.column_store_segments
where hobt_id = 72057594041925632

 

[Screenshot: columnstore segment details]

 

Second scenario

This scenario consists of reading data directly from the columnstore segment.

 

dbcc dropcleanbuffers
go

select max(id)
from dbo.test_CCI
where id = 1

 

With the previous script that uses the sys.dm_os_memory_cache_entries DMV, we can see two cached entries from the columnstore object pool this time:

 

[Screenshot: columnstore object pool cache entries]

 

 

I would like to thank Sunil Argarwal (Principal Program Manager in SQL Server Storage Engine) for his kindness and the information he gave me in order to interpret the entry_data column values above. [Object type] is a very useful piece of information here, with the following meanings:

0x0 = Un-initialized object
0x1 = Column segment
0x2 = Primary dictionary for a column
0x4 = Secondary dictionary for a column segment
0x5 = The primary dictionary with reverse HT initialized, used for bulk insert
0x6 = Delete bitmap - used temporarily when reading from disk

So let’s correlate that with the retrieved entry_data column values. SQL Server fetched the concerned segment (object_id = 1) from disk into the columnstore object memory. However, let’s have a look at the column_id value (= 2 here). In fact, I expected to get the value 1, which would relate to the id column in the table dbo.test_CCI. I performed some other tests and they lead me to think that the column_id in the entry_data value is in fact the column_id from the concerned table plus 1, but I will check this point in the near future.

Moreover, according to Sunil’s information, the deleted bitmap (object_id = 6) was also fetched by SQL Server. I can imagine that SQL Server needs to read it to identify deleted records. My feeling is that delete operations are fully logical and SQL Server has no way to identify a deleted record in a segment without reading the deleted bitmap.

We can assume that the columnstore memory pool is the area where the columnstore segments are stored, and that segments are stored in LOB pages. But does SQL Server read data directly from the columnstore memory pool?

Let’s go back to the previous test. As a reminder, we want to get the max(id) from the columnstore segment related to the id column. So SQL Server needs to read the related segment in this case. Let’s see if we can retrieve a corresponding page in the buffer pool by using the following script:

select
    page_type,
    count(*) as nb_pages
from sys.dm_os_buffer_descriptors
where database_id = db_id('db_test')
group by page_type
order by page_type

 

[Screenshot: buffer pool page types for the db_test database]

 

Ok, there are plenty of pages in the buffer pool related to the db_test database. Let’s focus first on data pages by using the following script, which retrieves data pages only for the dbo.test_CCI table:

 

if object_id('tempdb..#buffer_descriptor_pages') is not null
    drop table #buffer_descriptor_pages;
go

create table #buffer_descriptor_pages
(
    num INT null,
    ParentObject varchar(100) not null,
    [Object] varchar(100) not null,
    Field varchar(100) not null,
    VALUE varchar(100) not null
);

declare @sql varchar(max) = '';
declare @database_id int;
declare @file_id int;
declare @page_id int;
declare @i int = 0;

declare c_pages cursor fast_forward for
    select database_id, file_id, page_id
    from sys.dm_os_buffer_descriptors
    where database_id = db_id('db_test')
        and page_type = 'DATA_PAGE';

open c_pages;

fetch next from c_pages into @database_id, @file_id, @page_id;

while @@fetch_status = 0
begin
    set @sql = 'dbcc traceon (3604); dbcc page(' + cast(@database_id as varchar(10))
             + ', ' + cast(@file_id as varchar(10))
             + ', ' + cast(@page_id as varchar(10))
             + ', 3) with tableresults';

    insert into #buffer_descriptor_pages(ParentObject, [Object], Field, VALUE)
    exec(@sql);

    update #buffer_descriptor_pages
    set num = @i
    where num is null;

    set @i = @i + 1;

    fetch next from c_pages into @database_id, @file_id, @page_id;
end

close c_pages;
deallocate c_pages;

select *
from #buffer_descriptor_pages
where num in (select num
              from #buffer_descriptor_pages
              where Field = 'Metadata: ObjectId'
                  and VALUE = object_id('dbo.test_CCI'));

 

In my case, I retrieved only one page with the following detail:

 

[Screenshot: compressed data page details]

 

We get a compressed data page and, to be more precise, a data page that comes from the delta store (id = 1001). Remember that the segment elimination is not performed for the delta store. This is why I got this page in my case.

Next, let’s have a look at the LOB pages (our segments)

 

select
    database_id,
    file_id,
    page_id,
    allocation_unit_id,
    row_count
from sys.dm_os_buffer_descriptors
where database_id = db_id('db_test')
    and page_type = 'TEXT_MIX_PAGE'
    and allocation_unit_id in (select au.allocation_unit_id
                               from sys.allocation_units as au
                                   join sys.partitions as p
                                       on p.hobt_id = au.container_id
                               where p.object_id = object_id('dbo.test_CCI'));

 

[Screenshot: LOB (TEXT_MIX_PAGE) pages for dbo.test_CCI]

 

We have one LOB page (TEXT_MIX_PAGE type) but it seems to be empty, and I admit that I have no idea what this page is for. I will update this blog post later if I get a response.

So, to summarize and according to my tests, it seems that SQL Server reads LOB pages directly from the columnstore object pool and doesn’t need to use the BP in this case.

 

Third scenario

This scenario consists of updating data in the columnstore index and understanding how SQL Server behaves in this case.

 

alter index [PK__test_CCI__3213E83F3A6FE3AC] on [dbo].[test_CCI] rebuild

...

select
    object_name(object_id) as table_name,
    index_id,
    row_group_id,
    delta_store_hobt_id,
    state_description as [state],
    total_rows,
    deleted_rows,
    size_in_bytes
from sys.column_store_row_groups
where object_id = object_id('test_CCI');

 

[Screenshot: CCI row groups without a delta store]

 

Next, let’s update the columnstore index by using the following query:

checkpoint;
go

dbcc dropcleanbuffers;
go

update dbo.test_CCI
set col1 = 'toto'

 

At this point, a delta store page is created by SQL Server and we have to think differently because the storage has changed from columnar to row store. So let’s have a look at the modified pages related to the columnstore index.

 

if object_id('tempdb..#buffer_descriptor_pages') is not null
    drop table #buffer_descriptor_pages;
go

create table #buffer_descriptor_pages
(
    num INT null,
    ParentObject varchar(400) not null,
    [Object] varchar(400) not null,
    Field varchar(400) not null,
    VALUE varchar(400) not null
);

declare @sql varchar(max) = '';
declare @database_id int;
declare @file_id int;
declare @page_id int;
declare @i int = 0;

declare c_pages cursor fast_forward for
    select database_id, file_id, page_id
    from sys.dm_os_buffer_descriptors
    where database_id = db_id('db_test')
        and page_type = 'DATA_PAGE'
        and is_modified = 1;

open c_pages;

fetch next from c_pages into @database_id, @file_id, @page_id;

while @@fetch_status = 0
begin
    set @sql = 'dbcc traceon (3604); dbcc page(' + cast(@database_id as varchar(10))
             + ', ' + cast(@file_id as varchar(10))
             + ', ' + cast(@page_id as varchar(10))
             + ', 3) with tableresults';

    insert into #buffer_descriptor_pages(ParentObject, [Object], Field, VALUE)
    exec(@sql);

    update #buffer_descriptor_pages
    set num = @i
    where num is null;

    set @i = @i + 1;

    fetch next from c_pages into @database_id, @file_id, @page_id;
end

close c_pages;
deallocate c_pages;

select *
from #buffer_descriptor_pages
where num in (select num
              from #buffer_descriptor_pages
              where Field = 'Metadata: ObjectId'
                  and VALUE = object_id('dbo.test_CCI'))
    and (Field = 'm_pageId' or Field = 'Record Type' or Field = 'CD array entry' or Field = 'id' or Field = 'col1');

 

[Screenshot: modified data pages after the update]

 

Note that this time I only focused on the modified / dirty pages in my result, and I noticed that there are two data pages. The second page (1:94) in the order of this result set is pretty obvious because it concerns the record with id = 1 and col1 = toto (the modified data). However, I’m not sure exactly what the first page is, but I can again speculate: we performed an update operation and we know that this operation is split into two basic operations, DELETE + INSERT. So my feeling here is that this page relates to the deleted bitmap. Let’s have a look at the sys.column_store_row_groups DMV:

 

select
    object_name(object_id) as table_name,
    index_id,
    row_group_id,
    delta_store_hobt_id,
    state_description as [state],
    total_rows,
    deleted_rows,
    size_in_bytes
from sys.column_store_row_groups
where object_id = object_id('test_CCI');

 

[Screenshot: CCI row groups after modifying data]

 

And as expected we can notice a logically deleted record in the row group, with a new open delta store (and its deleted bitmap). So let’s perform a checkpoint and clear the buffer pool.

 

checkpoint;
go

dbcc dropcleanbuffers;
go

 

Now, we can wonder how SQL Server will retrieve the data for id = 1. Indeed, we have a deleted record in the row group on one side and the new version of the data in the delta store on the other side. So we can guess that SQL Server will need to fetch both the data pages from the delta store and the deleted bitmap to return the correct record.

Let’s verify by performing this test and taking a look first at the memory cache entries related to the columnstore index.

 

[Screenshot: columnstore object pool cache entries after the update]

 

SQL Server has fetched the corresponding segment (object_type =1) and the deleted bitmap (object_id=6) as well. Note that segment elimination is not performed for the concerned segment because SQL Server is not able to perform an elimination for segments that contain logical deletions.

Finally let’s retrieve the data pages in the buffer pool related to the columnstore index:

 

[Screenshot: data pages in the buffer pool related to the columnstore index]

 

Ok, we retrieved the same clean pages (is_modified = 0), and performing the same test after rebuilding the CCI yielded an empty result. In the latter case, this is the expected behaviour because rebuilding the columnstore index gets rid of deleted records inside the segments. Thus, SQL Server doesn’t need the deleted bitmap.

I didn’t cover all the scenarios in this blog post and some questions are not answered yet. My intention was just to introduce some interesting internal stuff done by the CCI. This is definitely a very interesting topic that I want to cover in the near future. Please feel free to share your thoughts about this exciting feature!

OTN Virtual Technology Summit – Spotlight on Database Tracks

OTN TechBlog - Fri, 2015-07-10 09:00

The Virtual Technology Summit is a series of interactive online events with hands-on sessions and presenters answering technical questions. The events are sponsored by the Oracle Technology Network (OTN). These are free events but you must register.

Database - Mastering Oracle Database Technologies:

Oracle Database 12c delivers market-leading security, high performance, availability and scalability for Cloud application deployments. The OTN Virtual Technology Summit offers two Database tracks: one focused on Cloud application development and deployment practices, and the other on developing and deploying .Net applications on the Oracle platform. Sessions focus on Oracle Database Cloud Services, Oracle .Net development tools and technologies, and more.

Track One Sessions include:

Best Practices for Migrating On-Premises Databases to the Cloud: Oracle Multitenant is helping organizations reduce IT costs by simplifying database consolidation, provisioning, upgrades, and more. Now you can combine the advantages of multitenant databases with the benefits of the cloud by leveraging Database as a Service (DBaaS). In this session, you’ll learn about key best practices for moving your databases from on-premises environments to the Oracle Database Cloud and back again.

Master Data Management (MDM) Using Oracle Table Access for Hadoop: The new Hadoop 2 architecture leads to a bloom of compute engines including MapReduce v2, Apache Spark, Apache Tez, Apache Storm, Apache Giraph, Cloudera Impala, GraphLab, Splunk Hunk, Microsoft Dryad, SAS HPA/LASR, and Oracle Big Data SQL. Some Hadoop applications such as Master Data Management and Advanced Analytics perform the majority of their processing from Hadoop but need access to data in the Oracle database, which is the reliable and auditable source of truth. This technical session introduces the upcoming Oracle Table Access for Hadoop (OTA4H), which exposes Oracle database tables as Hadoop data sources. It will describe OTA4H architecture, projected features, performance/scalability optimizations, and discuss use cases. A demo of various Hive SQL and Spark SQL queries against an Oracle table will be shown.

Hot Tips: Mastering SQL Developer Data Modeler: Oracle SQL Developer Data Modeler (SDDM) has been around for a few years now and is up to version 4.1. It really is an industrial strength data modeling tool that can be used for any data modeling task you need to tackle. Over the years I have found quite a few features and utilities in the tool that I rely on to make me more efficient (and agile) in developing my models. This presentation will demonstrate at least five of these features, tips, and tricks for you. I will walk through things like modifying the delivered reporting templates, creating and applying object naming templates, using a table template and transformation script to add audit columns to every table, and using the new metadata export tool, plus several other cool things you might not know are there. Get SDDM installed on your device and bring it to the session so you can follow along.

Track Two Sessions include:

What's New for Oracle and .NET - (Part 1): With the release of ODAC 12c Release 4 and Oracle Database 12c, .NET developers have many more features to increase productivity and ease development. These sessions explore the following new features introduced in recent releases:

  • Visual Studio 2015 and .NET Framework 4.6
  • Entity Framework Code First and Code First Migrations
  • NuGet installation
  • Schema compare tools
  • ODP.NET, Managed Driver
  • Multitenant container database management
  • ODP.NET application high availability
  • Ease of .NET development features

Oracle product managers will present these sessions with code and tool demonstrations using Visual Studio 2015.

What's New for Oracle and .NET - (Part 2): With the release of ODAC 12c Release 4 and Oracle Database 12c, .NET developers have many more features to increase productivity and ease development. These sessions explore the following new features introduced in recent releases:

  • Visual Studio 2015 and .NET Framework 4.6
  • Entity Framework Code First and Code First Migrations
  • NuGet installation
  • Schema compare tools
  • ODP.NET, Managed Driver
  • Multitenant container database management
  • ODP.NET application high availability
  • Ease of .NET development features

Oracle product managers will present these sessions with code and tool demonstrations using Visual Studio 2015.

Oracle and .NET: Best Practices for Performance: This session explores .NET coding and tuning best practices to achieve faster data access performance. It presents techniques and trade-offs for optimizing connection pooling, caching, data fetching and updating, statement batching, and Oracle datatype usage. We will also explore using Oracle Performance Analyzer from Visual Studio to tune a .NET application's use of the Oracle Database end to end.

Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!

NEW REWARDS! If you attend this virtual technology summit and are a member of the Oracle Technology Network Community you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it in our FAQ: Oracle Community – Rewards & Recognition FAQ.

Log Buffer #431: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-07-10 08:46

This Log buffer edition covers Oracle, SQL Server and MySQL blog posts about new features, tips, tricks and best practices.

Oracle:

  • Traditionally, assigning specific processes to a certain set of CPUs has been done by using processor sets (and resource pools). This is quite useful, but it requires the hard partitioning of processors in the system. That means, we can’t restrict process A to run on CPUs 1,2,3 and process B to run on CPUs 3,4,5, because these partitions overlap.
  • Parallel_Degree_Limit, Parallel_Max_Degree, Maximum DOP? Confused?
  • JDeveloper 12c – ORA-01882: time zone region not found
  • Using a Parallel Gateway without a Merge in OBPM
  • Secure multi-threaded live migration for kernel zones

SQL Server:

  • How to Unlock Your Team’s Creative Power with SQL Prompt and a VCS
  • In-Memory OLTP – Common Workload Patterns and Migration Considerations
  • The Poster of the Plethora of PowerShell Pitfalls
  • Temporarily Change SQL Server Log Shipping Database to Read Only
  • Monitoring Availability Groups with Redgate SQL Monitor

MySQL:

  • Introducing MySQL Performance Analyzer
  • MySQL 5.7.8 – Now featuring super_read_only and disabled_storage_engines
  • Become a MySQL DBA – Webinar series: Which High Availability Solution?
  • How to obtain the MySQL version from an FRM file
  • MySQL Enterprise Audit: Parsing Audit Information From Log Files, Inserting Into MySQL Table

 

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

The post Log Buffer #431: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

About BYOE – Bring Your Own Encryption

OracleApps Epicenter - Fri, 2015-07-10 08:14
BYOE aka Bring your own encryption is a security model that gives cloud customers complete control over the encryption of their data by allowing them to deploy a virtualized instance of their own encryption software in tandem with the application they are hosting in the cloud. BYOE can help an organization that wishes to take […]
Categories: APPS Blogs

OpenWorld and UKOUG Apps 15 Sessions

Duncan Davies - Fri, 2015-07-10 05:03

I’m delighted that the sessions that I’ve submitted to both Oracle OpenWorld and UKOUG’s APPS15 conference have all been accepted.

The sessions are as follows:

OpenWorld:

PeopleSoft Selective Adoption Best Practices from the Front Line (joint with Mark Thomas of Hays)
PeopleSoft Selective Adoption is genuinely transformative, but to make the best use of it you need a plan. Come to hear from one of the earliest adopters of this new model on defining how it can work best for you, not just the implementation but the ongoing effective use of PeopleSoft Images using PeopleSoft Update Manager (PUM), the frequency of patch application and the business benefits achieved.

UKOUG APPS15:

Taleo for PeopleSoft People
We’ll show you the Taleo products that work well with PeopleSoft and explain the circumstances in which they might be a good choice for you. We’ll also show the products working together and explain how you can deploy it without breaking the bank.

Running PeopleSoft as a Service
We’ve all heard of the ‘Software as a Service’ (SaaS) vendors. You don’t need to abandon the many benefits of PeopleSoft to gain the agility, security and efficiencies of consuming PeopleSoft as a Service however. We’ll show the key tenets of ‘as a Service’ delivery and how functionality like Selective Adoption enables you to attain them with PeopleSoft.

You can find out more about OpenWorld here and Apps15 here.


ACFS 12.1.0.2 on Oracle Linux 7.1

Yann Neuhaus - Fri, 2015-07-10 03:56
Recently we wanted to create an ACFS filesystem on a brand new 12.1.0.2 GI installation on Oracle Linux 7.1. According to the documentation this should not be an issue as "Oracle Linux 7 with the Unbreakable Enterprise kernel: 3.8.13-33.el7uek.x86_64 or later" is supported.

Oracle Priority Support Infogram for 09-JUL-2015

Oracle Infogram - Thu, 2015-07-09 15:05

Solaris
The blogosphere is humming with the new Solaris 11.3, and this posting from Oracle brings a bunch of those links together for you:
Here's Your Oracle Solaris 11.3 List of Blog Posts
A couple of general posts of interest. This one from Prakash Sangappa's Weblog: Post-Wait mechanism
And: Secure multi-threaded live migration for kernel zones, from The Zones Zone.
Oracle Support
New ORAchk Version 12.1.0.2.4 Released, from the Database Support Blog at Oracle Communities.
System Administration
Handy Space Monitoring, from Oracle Storage Ops.
OEM
Snap Cloning Databases on Exadata using Enterprise Manager, from the Oracle Enterprise Manager blog.
And from the same blog:
Understanding Plans, Profiles and Policies in Enterprise Manager Ops Center
SQL Developer
Top 10 Things You Might Be Overlooking in Oracle SQL Developer, from that JEFF SMITH.
OBIEE
Download Demonstration VM for OBI 11g SampleApp v506 with Big Data, from BI & EPM Partner Community EMEA.
OVM
Oracle VM VirtualBox 5.0 Officially Released!, from Oracle’s Virtualization Blog.
ODI
ODI KMs for Business Intelligence Cloud Service, from the Data Integration blog.
Oracle Stream Explorer
Getting Started with Oracle Stream Explorer free online training at Oracle Learning Library, from SOA & BPM Partner Community Blog.
Java
Java ME 8 Tutorial Series, from The Java Source.
EBS
From the Oracle E-Business Suite Support blog:
Does the Approval Analyzer show "Authentication failed" in the Output?
New to the Procurement Accounting Space - Introducing the EBS iProcurement Change Request Analyzer!
Webcast: Subledger Accounting (SLA) Features within Cost Management
Need Help with PO Output for Communication Issues?

EB Tax Analyzer enhanced to capture Tax Reporting issues !!

Pillars of PowerShell: SQL Server – Part 1

Pythian Group - Thu, 2015-07-09 13:37
Introduction

This is the sixth and final post in the series on the Pillars of PowerShell, at least part one of the final post. The previous posts in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
  4. Profiling
  5. Windows OS

PowerShell + SQL Server is just cool! You will see folks talk about the ability to perform a task against multiple servers at a time, automate implementing a configuration or database change, or just obtain a bit of consistency when doing certain processes. I tend to use it just because I can, and it is fun to see what I can do. There are some instances where I have used it for a specific purpose where it saved me time, but overall I just chose to use it. I would say that on average there are going to be things you can do in PowerShell that could be done in T-SQL, and in those cases you use the tool that fits your needs.

Interacting with SQL Server PowerShell

There are three main ways to interact with SQL Server using PowerShell that I have seen:

  1. SQL Server PowerShell (SQLPS)
  2. SQL Server Server Management Object (SMO)
  3. Native .NET coding

I am not going to touch on the third option in this series because it is not something I use enough to discuss. I will say it is not the first choice for me, but it does serve a purpose at times.

To provide enough information to introduce you to working with PowerShell and SQL Server, I broke this into two parts. In part one, we are going to look at SQL Server PowerShell (SQLPS) and using the SQL Server Provider (SQLSERVER:\). In part two we will go over SMO and what can be accomplished with it.

SQLPS, to me, offers quick access for the one-liner type tasks against SQL Server. It is really just a preference which option you go with, so if it works for you, just use it. There are some situations where using the SQL Server Provider actually requires you to mix in SMO (e.g. creating a schema or database role). It also offers up a few cmdlets that are added onto (and improved upon) with each release of SQL Server.

Loading/Importing

The first thing to understand is how to get the product module into your PowerShell session. As with most products, some portion of the software has to exist on the machine you are working on, or the machine your script is going to be executed on. SQL Server PowerShell and SMO are installed by default if you install the SQL Server Management Tools (aka SSMS and such) for SQL Server 2008 and higher. I will only mention that they can also be found in the SQL Server Feature Pack if you need a more “standalone” type setup on a remote machine.

One thing you should get in the habit of doing with your scripts is verifying certain things that can cause more errors than are desired; one of those is dealing with modules. If the module is not loaded when the script is run, your script is just going to spit out a ton of red text. If the prerequisites are not there to begin with, there is no point in continuing. You can verify that a version of the SQLPS module is installed on your machine by running the following command:

Get-Module -ListAvailable -Name SQL*

If you are running SQL Server 2012 or 2014 you will see something like this:

[Screenshot: Get-Module output showing the SQLPS module]

This works in a similar fashion when you want to verify if the SQL Server 2008 snap-in is loaded:

[Screenshot: verifying the SQL Server 2008 snap-in]

I generally do not want to have to remember or type out these commands all the time when I am doing things on the fly, so I will add this bit of code to my PowerShell Profile:

Push-Location
Import-Module SQLPS -DisableNameChecking -ErrorAction 'Stop'
Pop-Location

#Load SQL Server 2008 by uncommenting next line
#Add-PSSnapin *SQL* -ErrorAction 'Stop'

One cool thing that most cmdlets in PowerShell support is the -ErrorAction parameter. There are a few different values you can use for this parameter, and you can find them by checking the help on about_CommonParameters. If your script is going to be interactive or run manually, I would use -ErrorAction ‘Inquire‘ instead; try it out on a machine that does not have the module installed to see what happens. Once you have the module or snap-in loaded you will be able to access the SQL Server PowerShell Provider.
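As a quick illustration, here is a minimal sketch that reuses the profile snippet above but swaps in the ‘Inquire‘ value, so a missing module prompts you instead of simply failing (the behaviour described is my assumption of what you will see, so test it on your own machine):

Push-Location
# Prompts Yes/No/Suspend if the SQLPS module cannot be loaded, instead of spitting out a stream of red errors
Import-Module SQLPS -DisableNameChecking -ErrorAction 'Inquire'
Pop-Location

# The full list of -ErrorAction values lives in the common parameters help topic
Get-Help about_CommonParameters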

One side note: there is actually a “sqlps.exe” utility that is easily accessible in most cases via the right-click menu in SSMS (e.g. right-click on the “Databases” node in Object Explorer). If you open this, you are thrust into the SQLPS provider and the “directory” of the node you opened from in SSMS. However convenient that may seem, it is something that was added to the deprecation list with SQL Server 2012, so there’s not much point in talking about it. It has its own little quirks, and most folks steer clear of using it anymore.

Being Specific

The code I use in my profile is going to load the most current version of the module found on my system, at least it should. It may not do as you think it will every time. In some circumstances when you are developing scripts on your own system you may need to only import a specific version; especially if you are in a mixed version environment for SQL Server. You can load a specific version of the module by utilizing Get-Module to find your version, and just pass it to Import-Module.

Get-Module -ListAvailable -Name SQLPS | select name, path
#110 = SQL Server 2012, 120 = SQL Server 2014, 130 = SQL Server 2016
Push-Location
Get-Module -ListAvailable -Name SQLPS |
     where {$_.path -match "110"} | Import-Module
Pop-Location

# To show that it was indeed loaded
Get-Module -Name SQLPS | select name, path

#If you want to switch to another one, you need to remove it
Remove-Module SQLPS
Authentication

By default when you browse the SQLPS provider (or most providers actually), it is going to utilize the account that is running the PowerShell session, i.e. Windows Authentication. If you find yourself working with an instance that requires SQL Login authentication, don’t lose hope. You can connect to an instance via the SQL Server Provider with a SQL Login. There is an MSDN article that provides a complete function that you can use to create a connection for such a purpose. It does not show a version of the article for SQL Server 2008, but I tested this with SQL Server 2008 R2 and it worked fine.

[Screenshot: connecting to the SQL Server provider with a SQL Login]

One important note I will make that you can learn from the function in that article: the password is secure and not stored or processed in plain text.

SQLPS Cmdlets

SQLPS, as noted previously, offers a handful of cmdlets for performing a few administrative tasks against SQL Server instances. The majority of the ones you will find with SQL Server 2012, for example, revolve around Availability Groups (e.g. disabling, creating, removing, etc.). The other unmentionables include Backup-SqlDatabase and Restore-SqlDatabase, which do exactly what you think but with a few limitations. The backup cmdlet can actually only perform a FULL, LOG, or FILE level backup (not sure why they did not offer support for a differential backup). Anyway, they could be useful for automating backups of production databases to “refresh” development or testing environments, as the backup cmdlet does support doing a copy-only backup. Another use: if you deal with Express Edition, you can utilize this cmdlet and a scheduled task to back up those databases.

Update 7/13/2015: One correction I should have checked previously: the Backup cmdlet for 2012 and above does include an “-Incremental” parameter for performing differential backups.
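To illustrate, here is a minimal sketch of both backup flavors mentioned above. The instance name is the one from the job examples later in this post; the database name and file paths are made up for the example:

# Copy-only full backup, handy for refreshing a development or test environment
Backup-SqlDatabase -ServerInstance "MANATARMS\SQL12" -Database "MyAppDB" -BackupFile "C:\Backups\MyAppDB_copy.bak" -CopyOnly

# Differential backup via the -Incremental switch noted in the update above
Backup-SqlDatabase -ServerInstance "MANATARMS\SQL12" -Database "MyAppDB" -BackupFile "C:\Backups\MyAppDB_diff.bak" -Incremental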

The other main cmdlet you get with SQLPS is what most people consider the replacement for the sqlcmd utility: Invoke-Sqlcmd. The main thing you get from the cmdlet is smarter output, in the sense that PowerShell will more appropriately detect the data types coming out, compared to the utility, which treats everything as a string.
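A quick sketch of the cmdlet in action (same instance name as the examples below, arbitrary query); the point is that the results come back as objects with real .NET types rather than strings:

# create_date comes back as a [datetime], not a plain string
Invoke-Sqlcmd -ServerInstance "MANATARMS\SQL12" -Database "master" `
    -Query "SELECT name, create_date FROM sys.databases" |
    Format-Table name, create_date -AutoSize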

SQLPS One-liners

Working with the SQL Server Provider, you traverse it as you would a drive on your computer. So you can use the cmdlet Get-ChildItem, or do as most folks do and use the alias dir. The main thing to understand is the first few “directories” needed to access a given SQL Server instance. There are actually multiple root directories under the provider, which you can see just by doing “dir SQLSERVER:\“. You can see by the description what each one is for; the one we are interested in is the “Database Engine”.

[Screenshot: root directories under the SQLSERVER:\ provider]

Once you get beyond the root directory it can require a bit of patience, as the provider is slow to respond or return information. If you want to dig into an instance of SQL Server, you just need to understand the structure of the provider; it will generally follow this syntax: <Provider>:\<root>\<hostname>\<instance name>\. The instance name will be “DEFAULT” if you are dealing with a SQL Server default instance. If you have a named instance you just add the name of the instance (minus the server name).
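For example, here is a small sketch against the same named instance used below; browsing the Databases folder is just like listing a directory (the properties selected are standard SMO Database properties):

# A default instance would use ...\MANATARMS\DEFAULT\Databases instead
dir SQLSERVER:\SQL\MANATARMS\SQL12\Databases | select Name, RecoveryModel, Status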

To provide a real-world example, Avail Monitoring is the custom tool Pythian developed to monitor the SQL Server environments of our customers (or Oracle or MySQL…you get the point). One of the features it includes, among many, is monitoring for failed jobs. We customize the monitoring around the customer’s requirements so some job failures will page us immediately when it occurs, while others may allow a few extra failures before we are notified to investigate. This is all done without any intervention required by the customer and I know from that notification what job failed. Well right off you are going to want to check the job history for that job to see what information shows up, and I can use SQLPS Provider to do just that:

# To see the job history
dir SQLSERVER:\SQL\MANATARMS\SQL12\JobServer\Jobs | where {$_.name -eq "Test_MyFailedJob"} | foreach {$_.EnumHistory()} | select message, rundate -first 5 | format-list
[Screenshot: job history output]
# if I needed to start the job again
$jobs = dir SQLSERVER:\SQL\MANATARMS\SQL12\JobServer\Jobs
$jobs | where {$_.name -eq "Test_MyFailedJob"} | foreach {$_.Start()}

You might think that is a good bit of typing, but consider how long it would take me to do the same thing through SSMS…I can type much faster than I can click with a mouse.

Anyway, to close things out, I thought I would show the thing SQLPS gets used for the most: scripting stuff out. Just about every “directory” you go into with the provider offers a method named “Script()”.

$jobs | where {$_.name -eq "Test_MyFailedJob"} | foreach {$_.Script()}

This gives me the T-SQL equivalent of the job, just like SSMS provides; it can be used to document your jobs or when refreshing a development server.
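As a hedged example, you could dump that output straight to a file for documentation; the file paths here are made up, and the same pattern applies to other provider “directories”:

# Script out every Agent job on the instance to a single .sql file
$jobs = dir SQLSERVER:\SQL\MANATARMS\SQL12\JobServer\Jobs
$jobs | foreach {$_.Script()} | Out-File "C:\Temp\AgentJobs.sql"
# The same idea works elsewhere in the provider, e.g. scripting out database definitions
dir SQLSERVER:\SQL\MANATARMS\SQL12\Databases | foreach {$_.Script()} | Out-File "C:\Temp\Databases.sql"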

Summary

I hope the information above gave you an idea of what SQLPS can do; one-liners are always fun to discover. The SQL Server Provider is not the most heavily used tool out there among DBAs, but it can be a life-saver at times. In the next post we will dig into using SMO and the awesome power it offers.

 

Discover more about our expertise in SQL Server

The post Pillars of PowerShell: SQL Server – Part 1 appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Kscope15 - It's a Wrap, Part II

Chet Justice - Thu, 2015-07-09 13:04
Another fantastic Kscope in the can.

This was my final year in an official capacity, which was a lot more difficult to deal with than I had anticipated. Here's my record of service:
  • 2010 (2011, Long Beach) - I was on the database abstract review committee run by Lewis Cunningham. I ended up volunteering to help put together the Sunday Symposium and with the help of Dominic Delmolino, Cary Millsap and Kris Rice, I felt I did a pretty decent job.
  • 2011 (2012, San Antonio) - Database track lead. I believe this is the year that Oracle started running the Sunday Symposiums. Kris again led the charge, with some input from the other two from the year before, i.e. DevOps oriented.
  • 2012 (2013, New Orleans) Content co-chair for the traditional stuff (Database, APEX, ADF), Interview Monkey (Tom Kyte OMFG!), OOW/ODTUG Coordinator, etc.
  • 2013 (2014, Seattle) Content co-chair for the traditional stuff (Database, APEX, ADF), Interview Monkey, OOW/ODTUG Coordinator, etc.
  • 2014 (2015, Hollywood, FL) Content co-chair for the traditional stuff (Database, APEX, ADF)

This has been a wonderful time for me, both professionally and, more importantly to me, personally. Obviously I had a big voice in the direction of content. Also, and maybe hard to believe, I actually presented for the first time. Slotted against Mr. Kyte. I reminded everyone of that too. Multiple times. It seemed to go well though. Only a few made fun of me.

I was constantly recruiting too. "Did you submit an abstract?" "No? Why not?" And I'd go into my own personal diatribe (ignoring my own lack of presenting) about why they should present. Sarah Craynon Zumbrum summed it up pretty well in a recent article.

But it was the connections I made, the people I met, the stories I shared (#ampm, #cupcakeshirt, etc.), and the friends I made that have had the most impact on me. Kscope is unique in that way because of its size...at Collaborate or OOW you'll be lucky to see someone more than once or twice, while at Kscope you're running into everyone constantly.

How could I forget? #tadasforkate! This year was even more special. For those that don't know, Katezilla is my profoundly delayed but equally profoundly happy 10 y/o daughter. Just prior to the conference her physical therapist taught her "tada!" and Kate would hold her hands up high in the air and everyone around would yell, Tada! I got this crazy idea to ask others to do it and I would film it. Thirty or forty videos and hundreds of participants later...



So a gigantic thank you to everyone who made this possible for me.
Here's a short list of those that had a direct impact on me...
  • Lewis Cunningham - he asked me to be a reviewer which started all of this off.
  • Mike Riley - can't really say enough about Mike. After turning me away a long time ago (jerk), he was probably my biggest supporter over the years. (Remind me next year to tell you about "The Hug.") Mike, and his family, are very dear to me.
  • Monty Latiolais (rhymes with Frito Lay I would tell myself) - How can you not love this guy?
  • Natalie Delemar - Co-chair for EPM/BI and then boss as Conference Chair.
  • Opal Alapat - Co-chair for EPM/BI and one of my favorite humans ever invented. I aspire to be more organized, assertive, and bad-ass like Opal.
That list is by no means exhaustive. It doesn't even include staff at YCC, like Crystal Walton, Lauren Prezby and everyone else there. Nor does it include the very long list of Very Special People I've met. I consider myself very fortunate and incredibly grateful.

What's the future hold?
I have no idea. My people are in talks with Helen J. Sander's people to do one or more presentations next year, so there's that. Speaking of which...it's in Chicago. Abstract submissions start soon, and I hope you plan on submitting. If you're not ready to submit, I hope you try to take part in shaping the content by joining one of the roughly 10 abstract review committees. Who knows where they may lead you?

Finally, here's the It's a Wrap video from Kscope15 (see Helen's story there). Here's Kscope16's site. Go sign up.

Categories: BI & Warehousing

Reading System Logs on SQL Server

Pythian Group - Thu, 2015-07-09 12:54

Recently, while I was working on a backup failure issue, I found that the backup was failing for a particular database. When I ran the backup manually to a different folder it would complete successfully, but not to the folder the backup jobs were originally configured to use. This made me suspect hard disk corruption. In the interim, I fixed the backup issue so that I would not keep getting paged and to lower the risk of having no backup in place.

Reviewing the Windows Event logs confirmed my suspicion of a faulty hard drive. The log reported messages related to SCSI codes, in particular SCSI Sense Key 3, which means the device hit a Medium error. Eventually, the client replaced the hard drive and the database was moved to another drive. In the past month I have had about three cases where serious storage-related messages were reported only as Information events. I have included one case here for your reference, which may help you if you see similar things in your own logs.

CASE 1 – Here is what I found in the SQL Server error log:

  • Error: 18210, Severity: 16, State: 1
  • BackupIoRequest::WaitForIoCompletion: read failure on backup device ‘G:\MSSQL\Data\SomeDB.mdf’.
  • Msg 3271, Level 16, State 1, Line 1
  • A non-recoverable I/O error occurred on file “G:\MSSQL\Data\SomeDB.mdf:” 121 (The semaphore timeout period has expired.).
  • Msg 3013, Level 16, State 1, Line 1
  • BACKUP DATABASE is terminating abnormally.

When I ran the backup command manually I found that it ran fine up to a specific point (around 55%) before failing again with the above error. Next, I decided to run DBCC CHECKDB, which reported a consistency error on a particular page belonging to a particular table. Here are the reported errors:

Msg 8966, Level 16, State 2, Line 1
Unable to read and latch page (1:157134) with latch type SH. 121(The semaphore timeout period has expired.) failed.
Msg 2533, Level 16, State 1, Line 1
Table error: page (1:157134) allocated to object ID 645577338, index ID 0, partition ID 72057594039304192, alloc unit ID 72057594043301888 (type In-row data) 
was not seen. The page may be invalid or may have an incorrect alloc unit ID in its header. The repair level on the DBCC statement caused this repair to be bypassed.

Of course, the repair options did not help, as I had anticipated, since the backup was also failing when it reached 55%. A SELECT statement against object 645577338 also failed to complete. The only option I was left with was to create a new table and drop the original one. After this had been done, the full backup succeeded, and as soon as it completed we moved the database to another drive.
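For reference, here is a rough sketch of how one might map that object ID back to a table and re-check it after the rebuild, using Invoke-Sqlcmd; the object ID comes from the DBCC output above, but the server, database, and table names are placeholders:

# Identify which table owns the damaged page (object ID taken from the DBCC output above)
Invoke-Sqlcmd -ServerInstance "SQLServer" -Database "SomeDB" -Query "SELECT OBJECT_SCHEMA_NAME(645577338) AS SchemaName, OBJECT_NAME(645577338) AS TableName"
# Once the table has been rebuilt, re-check just that object
Invoke-Sqlcmd -ServerInstance "SQLServer" -Database "SomeDB" -Query "DBCC CHECKTABLE ('dbo.SomeTable') WITH NO_INFOMSGS"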

I was still curious about these errors, so I started looking at the Windows Event Logs – System folder, filtered to show only Errors and Warnings. However, this did not show me anything that warranted further reading, so I removed the filter and carefully reviewed the logs. To my surprise, the logs did show entries for a bad sector, but they were in the Information section of the Windows Event Viewer System folder.

Event Type: Information
Event Source: Server Administrator
Event Category: Storage Service
Event ID: 2095
Date: 6/10/2015
Time: 1:04:18 AM
User: N/A
Computer: SQLServer
Description: SCSI sense data Sense key: 3 Sense code:11 Sense qualifier: 0:  Physical Disk 0:2 Controller 0, Connector 0.
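If you want to avoid scrolling through the Event Viewer, a quick PowerShell sketch along these lines can pull those Information-level entries as well; the source and event ID are taken from the entry above, so adjust them for whatever your hardware reports:

# Pull recent System log entries from the storage management source, without filtering out Information events
Get-EventLog -LogName System -Source "Server Administrator" -Newest 100 | Where-Object { $_.EventID -eq 2095 } | Select-Object TimeGenerated, EntryType, EventID, Message | Format-List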

Depending on the issue, your server may log a different error, warning, or informational message, and there is much more that could be said to explain the various codes and descriptions.

You may have noticed that I referred to this as CASE 1, which means I will blog about one or two more cases in the future. Stay tuned!

Photo credit: Hard Disk KO via photopin (license)

Learn more about our expertise in SQL Server.

The post Reading System Logs on SQL Server appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Using a Parallel Gateway without a Merge in OBPM

Jan Kettenis - Thu, 2015-07-09 10:54
In this blog article I give a brief explanation of some aspects of the behavior of the parallel gateway in Oracle BPM. It was updated on September 15, 2015, to add the remark at the end regarding a Complex Merge (thanks to Martien van den Akker).

For the BPMN modelers among us, I have a small quiz.

Given a process model like this, what would be the behavior of Oracle BPM?



  1. It does not compile because OBPM thinks it is not valid BPMN
  2. The flows with Activity 1 and 2 are merged, the token moves to the End event of the process, and then the instance finishes.
  3. Activity 1 and 2 are executed, and then OBPM waits in the merge because to continue all tokens have to reach the merge.
  4. The flows with Activity 1 and 2 are merged, the token moves to the End event of the process, and in the meantime the instance waits until the timer expires. It will not end before the token reaches the Terminate end event: because not all flows from the split are explicitly merged, the whole process itself serves as an implicit merge.

If this would be some magazine, I would now tell you to go to the last page and turn it upside down to read the answer. Or wait until the next issue in which I announce the prize winners.

Alas, no such thing here, so let me give you the answer straight away, which is answer 4:



I must admit I was a bit surprised, as I seem to remember that a few bundle patches or patch sets ago the answer would have been 1. But when you look at the BPMN specification, there is nothing that says a parallel gateway always has to have a merge. Strange, then, that OBPM does not let you draw a model without one, but at least it works with a merge that has just one incoming flow.

As a matter of fact, to make the End event actually end the instance, you should change it into an Intermediate Message Throw event and end the process with a Terminate End event as well. At run time that looks awkward, because even when your process ends successfully it has the state Terminated.

For this reason, and perhaps because your audience might just not understand this model (specifically when it concerns a larger one), the following alternative is perhaps easier to understand. You can now choose if, and which, flow you want to end with a Terminate End event.

To force that the process continues after the merge, a Complex Merge is used that aborts all other pending parallel flows when the timer expires.

WebCenter & BPM: Adaptive Case Management

WebCenter Team - Thu, 2015-07-09 10:47
By Mitchell Palski, Oracle WebCenter Sales Consultant 
We are happy to have Mitchell Palski joining us on the blog for a Q&A around strategies and best practices for how to deliver Adaptive Case Management with WebCenter and BPM.
Q. So to begin, can you describe for our listeners what case management is?
A case is a collection of activities that support a specific business objective. Each case has a lifecycle, which is essentially a process that delivers a service to a user and includes:
  • Activities
  • Rules
  • Information, content, etc.
Case management defines specific interactions that a case may have with a system and with the actual users who are involved in a case’s lifecycle. In a self-service solution, these cases are typically:
  1. Initiated by a customer or citizen
  2. Routed through workflow based on specific business rules or employee intervention
  3. Resolved by evaluating data and content that is captured during the lifecycle

Q. Why is Case Management important? How does Case Management differ from Adaptive Case Management?
Case management is an important concept in today’s technology because it is a primary means of how services are provided to our end users. Some examples might include:

  • Patient services
  • Traffic violation payments
  • Retirement benefit delivery
  • Building, health, or child safety inspections
  • Employee application, promotion, or incident tracking
Each of these examples ties unique combinations of data and documentation together to provide some meaningful service to an end user. Case management software gives organizations the means to standardize how those services are delivered so that they can be completed accurately, quickly and more efficiently.
Adaptive case management is way of modeling flexible and data intensive business processes. Typically, adaptive case management is needed when a case’s lifecycle includes:
  • Complex interactions of people and policies
  • Complex decision making that require subjective judgments to be made
  • Specific dependencies that may need to be overridden based on the combination of fluid circumstances

Adaptive case management allows your organization to employ the use of standard processes and policies, but also allows for flexibility and dynamic decision making when necessary.

Q. How do Oracle WebCenter and Oracle Business Process Management help to deliver Adaptive Case Management?
The Oracle Business Process Management Suite includes:
  • Business user-friendly modeling and optimization tools
  • Tools for system integration
  • Business activity monitoring dashboards
  • Rich task and case management capabilities for end users
Oracle BPM Suite gives your organization the tools that it needs to illustrate complex case management lifecycles, define and assign business rules, and integrate your processes into critical enterprise systems. Everything from defining your data and process flows, to implementing actual case interactions; BPM Suite has an intuitive web-based interface where everyone on your staff can collaborate and deliver the best solution for your customers possible.

WebCenter Portal serves as a secure and role-based presentation layer that can be used to optimize the way that users interact with the case management system. For case management lifecycles to be effective, they need to be easy and intuitive to access as well as provide meaningful contextual content to end users who are interacting with their case. WebCenter Content supports the document management aspect of a case by managing the complete lifecycle of documents that are associated with cases, organizational policies, or any web content that helps to educate end-users.

Q. Do you have any customer or real-world examples you could share with our listeners?
The Spirit of Alaska is a Retirement Benefits program that faced:
  • Limited resources and funding
  • Out-dated and undocumented processes
  • A drastic and immediate increase in the number of cases being processed
With the help of Oracle, the Alaska Department of Retirement Benefits was able to:
  • Automate and streamline their business processes
  • Reduce the frequency of data input errors
  • Improve customer service effectiveness
The end result was a solution that not only delivered retirements benefits to citizens more quickly and accurately, but also relieved the burden of the state’s business challenges now and in the future.
Thank you, Mitchell, for sharing your strategies and best practices on how to deliver Adaptive Case Management with WebCenter and BPM. You can listen to a podcast on this topic here, and be sure to tune in to the Oracle WebCenter Café Best Practices Podcast Series for more information!

Unizin One Year Later: View of contract reveals . . . nothing of substance

Michael Feldstein - Thu, 2015-07-09 09:18

By Phil HillMore Posts (343)

I’ve been meaning to write an update post on Unizin, since we broke the story here at e-Literate in May 2014 and Unizin went public a month later. One year on, the consortium is still the most expensive way to get the Canvas LMS. There are also plans for a Content Relay and an Analytics Relay, as seen in the ELI presentation, but the actual dates keep slipping.

Unizin Roadmap

e-Literate was able to obtain a copy of the Unizin contract, at least for the founding members, through a public records request. There is nothing to see here. Because there is nothing to see here. The essence of the contract is for a university to pay $1.050 million to become a member. The member university then has the right (but not the obligation) to select and pay for actual services. Based on the contract, membership gets you . . . membership. Nothing else.

What is remarkable to me is the portion of the contract spelling out obligations. Section 3.1 calls out that “As a member of the Consortium, University agrees to the following:” and lists:

  • complying with Unizin bylaws and policies;
  • paying the $1.050 million; and
  • designating points of contact and representation on board.

Unizin agrees to nothing. There is literally no description of what Unizin provides beyond this [emphasis added]:

This Agreement establishes the terms of University’s participation in the Consortium, an unincorporated member-owned association created to provide Consortium Members access to an evolving ecosystem of digitally enabled educational systems and collaborations.

What does access mean? For the past year the only service available has been Canvas as an LMS. When and if the Content Relay and Analytics Relay become available, member institutions will have the right to pay for those. Membership in Unizin gives a school input into defining those services as well.

As we described last year, paying a million dollars to join Unizin does not give a school any of the software. The school has to pay licensing & hosting fees for each service in addition to the initial investment.

The contract goes out of its way to point out that Unizin actually provides nothing. While this is contract legalese, it’s important to note this description in section 6.5 [original emphasized in ALL CAPS but shared here at lower volume].[1]

Consortium operator is not providing the Unizin services, or any other services, licenses, products, offerings or deliverables of any kind to University, and therefore makes no warranties, whether express or implied. Consortium Operator expressly disclaims all warranties in connection with the Unizin services and any other services, licenses, products, offerings or deliverables made available to University under or in connection with this agreement, both express and implied, …[snip]. Consortium Operator will not be liable for any data loss or corruption related to use of the Unizin services.

This contract appears to be at odds with the oft-stated goal of giving institutions control and ownership of their digital tools (also taken from the ELI presentation).

We have a vested interest in staying in control of our data, our students, our content, and our reputation/brand.

I had planned to piece together clues and speculate on what functionality the Content Relay will provide, but given the delays it is probably best to just wait and see. Since February 2015 I have been told by Unizin insiders, and have heard publicly at conference presentations, that the release of the Content Relay is imminent, yet right now we just have slideware. I have asked for a better description of the functionality the Content Relay will provide, but this information is not yet available.

Unizin leadership and board members understand this quandary. As Bruce Maas, CIO at U Wisconsin, put it to me this spring, his job promoting and explaining Unizin will get a lot easier when there is more to offer than just Canvas as the LMS.

For now, here is the full agreement as signed by the University of Florida [I have removed the signature page and contact information page as I do not see the need to make these public].

Download (PDF, 587KB)

  1. Also note that Unizin is an unincorporated part of Internet2. Internet2 is the “Consortium Operator” and the signer of this agreement.

The post Unizin One Year Later: View of contract reveals . . . nothing of substance appeared first on e-Literate.