
Feed aggregator

comment in external table

Laurent Schneider - Thu, 2015-03-12 11:37

Depending on the file type, you may use different signs for comments, typically:


# hash
// slash slash 
/* slash-star star-slash */
: colon
-- dash dash

The last one is used in SQL and PL/SQL, but:


CREATE TABLE t (x NUMBER)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_pump_dir
  ACCESS PARAMETERS (
    FIELDS TERMINATED BY ';'  -- This is a comment
    (x))
  LOCATION ('x'));

SELECT * FROM t;
SELECT * FROM t
              *
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "minussign": expecting one of: 
  "column, enclosed, (, ltrim, lrtrim, ldrtrim, missing, 
  notrim, optionally, rtrim, reject"
KUP-01007: at line 2 column 38

not in external table access parameters.

No comment is allowed there!
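For comparison, here is a sketch of the same definition with the inline comment simply removed; with no comment inside the access parameters, the table creates and queries cleanly:


CREATE TABLE t (x NUMBER)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_pump_dir
  ACCESS PARAMETERS (
    FIELDS TERMINATED BY ';'
    (x))
  LOCATION ('x'));

SELECT * FROM t;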

Arizona Oracle User Group Meeting March 18

Bobby Durrett's DBA Blog - Thu, 2015-03-12 09:39
I just found out that this meeting was cancelled.  We will have to catch the next one. :) – 3/16/2015

Sign up for the Arizona Oracle User Group (AZORA) meeting next week: signup url

The email that I received from the meeting organizer described the topic of the meeting in this way:

“…the AZORA meetup on March 18, 2015 is going to talk about how a local business decided to upgrade their Oracle Application from 11i to R12 and give you a first hand account of what went well and what didn’t go so well. ”

Description of the speakers from the email:

Becky Tipton

Becky is the Director of Project Management at Blood Systems located in Scottsdale, AZ. Prior to coming to Blood Systems, Becky was an independent consultant for Tipton Consulting for four years.

Mike Dill

Mike is the Vice President of Application Solutions at 3RP, a Phoenix consulting company. Mike has over 10 years of experience implementing Oracle E-Business Suite and managing large-scale projects.

I plan to attend.  I hope to see you there too. :)

– Bobby

Categories: DBA Blogs

The Best Thing

Floyd Teter - Thu, 2015-03-12 09:33
I'm spending the latter part of my week at the Utah Oracle User Group Training Days conference.  It's a nice regional conference for me to engage with...it's local to me here in Salt Lake City.  So I get to participate in the conference and still go home every night.  Pretty sweet.

But being local is not the best thing about this conference.  The best thing about this...or most user group conferences, for that matter...is the opportunity to exchange ideas with some very smart people.  When I get to listen in for a bit on conversations with the real brains in this business, I always come away with more knowledge...and often with a different point of view.  That's the really cool part.  And, yes, it's worth investing a couple of days of my time.

As for the Utah Oracle User Group, we'll probably do another one of these this fall.  You really ought to come out.  Who knows, you might learn something?  I know I always do.

ODA 12.1.X.X.X - add a multiplexed control file under ACFS

Yann Neuhaus - Thu, 2015-03-12 07:43

Since version 12, ODA stores databases on ACFS volumes instead of directly on ASM. This slightly changed the way the files are managed and administered. This article presents how to multiplex your control files on ACFS.
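As a rough outline of what is involved, here is a minimal sketch with purely illustrative paths (the real ACFS locations depend on your ODA datastore layout): point the control_files parameter at the extra copy, then copy the existing control file to that location while the database is down.

-- illustrative paths only; adjust to your own ACFS datastore
ALTER SYSTEM SET control_files =
  '/u02/app/oracle/oradata/datastore/.ACFS/snaps/DB1/DB1/controlfile/control01.ctl',
  '/u01/app/oracle/oradata/datastore/DB1/DB1/controlfile/control02.ctl'
  SCOPE = SPFILE;

SHUTDOWN IMMEDIATE
-- copy control01.ctl to the new control02.ctl location at OS level, then
STARTUP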

More on the Rittman Mead BI Forum 2015 Masterclass : “Delivering the Oracle Big Data and Information Management Reference Architecture”

Rittman Mead Consulting - Thu, 2015-03-12 05:29

Each year at the Rittman Mead BI Forum we host an optional one-day masterclass before the event opens properly on Wednesday evening, with guest speakers over the years including Kurt Wolff, Kevin McGinley and, last year, Cloudera’s Lars George. This year I’m particularly excited that together with Jordan Meyer, our Head of R&D, I’ll be presenting the masterclass on the topic of “Delivering the Oracle Big Data and Information Management Reference Architecture”.

Last year at the Brighton BI Forum event we launched a new reference architecture that Rittman Mead had collaborated on with Oracle, which incorporated big data and schema-on-read databases into the Oracle data warehouse and BI reference architecture. In two subsequent blog posts, and in a white paper published on the Oracle website a few weeks after, concepts such as the “Discovery Lab”, “Data Reservoirs” and the “Data Factory” were introduced as a way of incorporating the latest thinking, and product capabilities, into the reference architecture for Oracle-based BI, data warehousing and big data systems.

One of the problems I always feel with reference architectures though is that they tell you what you should create, but they don’t tell you how. Just how do you go from a set of example files and a vague requirement from the client to do something interesting with Hadoop and data science, and how do you turn the insights produced by that process into a production-ready, enterprise Big Data system? How do you implement the data factory, and how do you use new tools such as Oracle Big Data Discovery and Oracle Big Data SQL as part of this architecture? In this masterclass we’re looking to explain the “how” and “why” to go with this new reference architecture, based on experiences working with clients over the past couple of years.

The masterclass will be divided into two sections; the first, led by Jordan Meyer, will focus on the data discovery and “data science” parts of the Information Management architecture, going through initial analysis and discovery of datasets using R and Oracle R Enterprise. Jordan will share techniques he uses from both his work at Rittman Mead and his work with Slacker Radio, a Silicon Valley startup, and will introduce the R and Oracle R Enterprise toolset for uncovering insights, correlations and patterns in sample datasets and productionizing them as database routines. Over his three hours he’ll cover topics including: 

Session #1 – Data exploration and discovery with R (2 hours) 

1.1 Introduction to R 

1.2 Tidy Data  

1.3 Data transformations 

1.4 Data Visualization 

Session #2 – Predictive Modeling in the enterprise (1 hr)

2.1 Classification

2.2 Regression

2.3 Deploying models to the data warehouse with ORE

After lunch, I’ll take the insights and analysis patterns identified in the Discovery Lab and turn them into production big data pipelines and datasets using Oracle Data Integrator 12c, Oracle Big Data Discovery and Oracle Big Data SQL. For a flavour of the topics I’ll be covering take a look at this Slideshare presentation from a recent Oracle event, and in the masterclass itself I’ll concentrate on techniques and approaches for ingesting and transforming streaming and semi-structured data, storing it in Hadoop-based data stores, and presenting it out to users using BI tools like OBIEE, and Oracle’s new Big Data Discovery.

Session #3 – Building the Data Reservoir and Data Factory (2 hr)

3.1 Designing and Building the Data Reservoir using Cloudera CDH5 / Hortonworks HDP, Oracle BDA and Oracle Database 12c

3.2 Building the Data Factory using ODI12c & new component Hadoop KM modules, real-time loading using Apache Kafka, Spark and Spark Streaming

Session #4 – Accessing and visualising the data (1 hr)

4.1 Discovering and Analyzing the Data Reservoir using Oracle Big Data Discovery

4.2 Reporting and Dashboards across the Data Reservoir using Oracle Big Data SQL + OBIEE 11.1.1.9

You can register for a place at the two masterclasses when booking your BI Forum 2015 place, but you’ll need to hurry as we limit the number of attendees at each event in order to maximise interaction and networking within each group. Registration is open now and the two events take place in May – hopefully we’ll see you there!

Categories: BI & Warehousing

Venturing into bulk insert a SQL Server error log and data order

Yann Neuhaus - Thu, 2015-03-12 02:21

Have you ever attempted to bulk import a SQL Server error log in order to use the information inside a report, for example? If so, you have probably wondered how to keep the data in the correct order in a query, because you cannot refer to any column of the table for ordering and you may well have many records with the same date. Of course, there are some workarounds, but this is not the subject of this blog. Instead, I would like to share with you an interesting discussion I had with a forum member about whether a simple SELECT statement without an ORDER BY clause is guaranteed to return the SQL Server error log data in order.

Let’s begin with the following script, which bulk imports data from a SQL Server error log file into a table:

CREATE TABLE ErrLog ( LogCol NVARCHAR(max) NULL )

 

BULK INSERT ErrLog FROM 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log\ERRORLOG.1' WITH ( FIRSTROW = 6, DATAFILETYPE = 'widechar' )


You may notice that we use the FIRSTROW option to begin from the 6th line and skip header information such as the following:

2015-03-07 13:57:41.43 Server     Microsoft SQL Server 2014 - 12.0.2370.0 (X64) Jun 21 2014 15:21:00 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.3 (Build 9600: ) (Hypervisor)

 

In addition, using DATAFILETYPE = 'widechar' is mandatory in order to bulk import Unicode data.

Let’s continue: after bulk importing the data, let’s take a look at the data itself inside the table. You will probably get the same kind of sample information as follows:

 

SELECT LogCol FROM ErrLog;

 

(Figure: bulk insert result)

 

Comparing the record order between the SQL Server error log file and the table suggests that the order is the same. At this point, we may wonder how to number the records inside the table without affecting the table order. Indeed, numbering the records in the table will allow us to control the order of the data by using the ORDER BY clause. So let’s go ahead and use the ROW_NUMBER() function to meet our requirement. You may notice that I use an “artificial” ORDER BY clause inside the window function to avoid interfering with the original order in which the data is read from the table.

 

SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS numline, LogCol FROM ErrLog;

 

(Figure: bulk insert result with row numbers)

 

At this point, the forum member told me that we cannot guarantee the order of the data without using an ORDER BY clause, yet once again we seem to get the same order as with the previous query. Can I trust it? I completely agree with this forum member and I tend to advise the same thing. However, in this specific case the order does seem to be correct. Why?

If you take a look at the first script, the first operation consisted of creating a heap table. This detail is very important. The bulk insert operation then reads the file sequentially and inserts the data in the allocation order of the pages. It means that the data insertion order is tied to the allocation order of the pages specified in the IAM page. So when we perform a table scan operation to number each row in the table (see the execution plan below), this operation is in fact performed by scanning the IAM page to find the extents that are holding the pages.

 

(Figure: execution plan)

 

As you know, the IAM represents extents in the same order as they exist in the data files, so in our case the table scan is performed using the same page allocation order. We can check this by catching the call stack of our query: we can see that SQL Server uses a particular function called AllocationOrderPageScanner for our heap table.

 

(Figure: allocation order scan call stack)

 

So, to be clear, I don’t claim that we can blindly trust the data order without specifying the ORDER BY clause in all cases. In fact, I’m sure you will have to proceed differently most of the time, depending on your context and on the various factors that do not guarantee the order of the data without an ORDER BY clause (bulk importing data from multiple files in parallel, reading data from a table with a b-tree structure under particular conditions, etc.).
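If you do need an ordering you can rely on afterwards, one possible approach (a sketch only, not part of the original discussion, and assuming a serial load from a single file) is to materialise the line number with an IDENTITY column and load through a view, so that a real ORDER BY can be used later:

CREATE TABLE ErrLogOrdered
(
    numline INT IDENTITY(1, 1) NOT NULL,   -- persists the insertion order
    LogCol  NVARCHAR(MAX) NULL
);
GO

-- BULK INSERT maps file fields to all table columns, so expose only the data
-- column through a view and let the identity value be generated at insert time
CREATE VIEW vErrLogOrdered AS SELECT LogCol FROM ErrLogOrdered;
GO

BULK INSERT vErrLogOrdered
FROM 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log\ERRORLOG.1'
WITH ( FIRSTROW = 6, DATAFILETYPE = 'widechar' );

SELECT numline, LogCol
FROM ErrLogOrdered
ORDER BY numline;   -- explicit ordering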

Enjoy!

An Introduction to Analysing ODI Runtime Data Through Elasticsearch and Kibana 4

Rittman Mead Consulting - Thu, 2015-03-12 01:39

An important part of working with ODI is analysing the performance when it runs, and identifying steps that might be inefficient as well as variations in runtime against a baseline trend. The Operator tool in ODI itself is great for digging down into individual sessions and load plan executions, but for broader analysis we need a different approach. We also need to make sure we keep the data available for trend analysis, as the tables behind Operator are often purged for performance reasons.


In this article I’m going to show how we can make use of a generic method of pulling information out of an RDBMS such as Oracle and storing it in Elasticsearch, from where it can be explored and analysed through Kibana. It’s standalone, it’s easy to do, it’s free open source – and it looks and works great! Here I’m going to use it for supporting the analysis of ODI runtime information, but it is equally applicable to any time-based data you’ve got in an RDBMS (e.g. OBIEE Usage Tracking data).

Kibana is an open-source data visualisation and analysis tool, working with data stored in Elasticsearch. These tools work really well for very rapid analysis of any kind of data that you want to chuck at them and work with quickly. By skipping the process of schema definition and data modelling, the time taken to the first results is drastically reduced. It enables you to quickly start “chucking about” data and getting meaning out of it before you commit full-scale to how you want to analyse it, which is what the traditional modelling route can sometimes force you to do prematurely.

ODI writes runtime information to the database, about sessions run, steps executed, time taken and rows processed. This data is important for analysing things like performance issues, and batch run times. Whilst with the equivalent runtime data (Usage Tracking) from OBIEE there is the superb RPD/Dashboard content that Oracle ship in SampleApp v406, for ODI the options aren’t as vast, ultimately being based on home-brew SQL against the repository tables using the repository schema documentation from Oracle. Building an OBIEE metadata model against the ODI schema is one option, but then requires an OBIEE server on which to run it – or merging into an existing OBIEE deployment – which means that it can become more hassle than it’s worth. It also means a bunch of up-front modelling before you get any kind of visualisations and data out. By copying the data across into Elasticsearch it’s easy to quickly build analyses against it, and has the additional benefit of retaining the data as long as you’d like meaning that it’s still available for long-term trend analysis once the data’s been purged from the ODI repository itself.

Let’s take a bit of a walk through the ODI dashboard that I’ve put together. First up is a view on the number of sessions that have run over time, along with their duration. For duration I’ve shown 50th (median), 75th and 95th percentiles to get an idea of the spread of session runtimes. At the moment we’re looking at all sessions, so it’s not surprising that there is a wide range since there’ll always be small sessions and longer ones:

Next up on the dashboard comes a summary of top sessions by runtime, both cumulative and per-session. The longest running sessions are an obvious point of interest, but cumulative runtime is also important; something may only take a short while to run when compared to some singular long-running sessions, but if it runs hundreds of times then it all adds up and can give a big performance boost if time is shaved off it.

Plotting out session execution times is useful for seeing when the longest running sessions ran:

The final element on this first dashboard is one giving the detail for each of the top x long-running session executions, including the session number so that it can be examined in further detail through the Operator tool.

Kibana dashboards are interactive, so you can click on a point in a graph to zoom in on that time period, as well as click & drag to select an arbitrary range. The latter technique is sometimes known as “Brushing”, and if I’m not describing it very well have a look at this example here and you’ll see in an instant what I mean.

As you focus on a time period in one graph the whole dashboard’s time filter changes, so where you have a table of detail data it then just shows it for the range you’ve selected. Notice also that the granularity of the aggregation changes as well, from a summary of every three hours in the first of the screenshots through to 30 seconds in the last. This is a nice way of presenting a summary of data, but isn’t always desirable (it can mask extremes and abnormalities) so can be configured to be fixed as well.

Time isn’t the only interaction on the dashboard – anything that’s not a metric can be clicked on to apply a filter. So in the above example where the top sessions by cumulative time are listed out, we might want to find out more about the one with several thousand executions.

Simply clicking on it then filters the dashboard and now the session details table and graph show information just for that session, including duration, and rows processed:

Session performance analysis

As an example of the benefit of using a spread of percentiles, we can see here a particular session that had an erratic runtime with great variation, which then stabilised. The purple line is the 95th percentile response time; the green and blue are the 50th and 75th respectively. It’s clear that whilst up to 75% of the sessions completed in about the same kind of time each time they ran, the remaining quarter took anything up to five times as long.

One of the most important things in performance is ensuring consistent performance, and that is what happens here from about half way along the horizontal axis at c.February:

But what was causing the variation? By digging a notch deeper and looking at the runtime of the individual steps within the given session it can be seen that the inconsistent runtime was caused by a single step (the green line in this graph) within the execution. When this step’s runtime stabilises, so does the overall performance of the session:

This is performing a post-mortem on a resolved performance problem to illustrate how useful the data is – obviously if there were still a performance problem we’d have a clear path of investigation to pursue thanks to this data.

How?

Data is pulled from the ODI repository tables using the Elasticsearch JDBC river, from where it’s stored and indexed in Elasticsearch, and presented through Kibana 4 dashboards.
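To give a flavour of what the river runs against the ODI work repository, a query along these lines returns the session-level facts that end up indexed in Elasticsearch (a sketch only; SNP_SESSION column names can vary between ODI versions, so treat the column list as illustrative):

-- illustrative query against the ODI work repository
SELECT sess_no,       -- session number, as shown in Operator
       sess_name,     -- session name
       sess_beg,      -- start timestamp
       sess_end,      -- end timestamp
       sess_dur,      -- duration in seconds
       sess_status    -- e.g. D = done, E = error, R = running
FROM   snp_session
WHERE  sess_beg >= SYSDATE - 30;  -- e.g. a rolling window for incremental loads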


The data load from the repository tables into Elasticsearch is incremental, meaning that the solution works for both historical analysis and more immediate monitoring too. Because the data’s actually stored in Elasticsearch for analysis it means the ODI repository tables can be purged if required and you can still work with a full history of runtime data in Kibana.

If you’re interested in finding out more about this solution and how Rittman Mead can help you with your ODI and OBIEE implementation and monitoring needs, please do get in touch.

Categories: BI & Warehousing

APEX 5.0: pimping the Login page

Dimitri Gielis - Wed, 2015-03-11 17:29
When you create a new application in APEX 5.0, the login page probably looks like this:


I love the built-in login page of APEX itself - luckily it's easy enough to build that in our own apps too. Thank you APEX Dev team!

The first step is to change the region type to be of Login Region Template:


We want to add a nice icon on top of the Login text. You can use the Icon CSS Class in the Region options - in this case I opted for fa-medkit:

Next up is making the Login button bigger and making it the full width, like the items. In APEX 5.0 you can use the Template Options to do that:

Once we stretch the Login button it fits the entire width.
Next up is getting some icons in the username and password fields. For the username we use the "icon-login-username" CSS class. Instead of showing the label we make it hidden and specify a placeholder, so before you start typing you see the word username, and when you start typing the text disappears.

For the password field we do the same thing, but for the CSS class we specify "icon-login-password".


Finally your login screen looks like this:


Great? Absolutely - and so easy with APEX 5.0!

What's next? Is there anything better? Euh... yes, what about live validation?
Sure we can do that in APEX 5.0 without too much hassle :)) Thanks once again APEX Dev Team!

In the item, make sure it is set to Value Required and add the following span in the Post Text:


That will give you a nice visual indication if you entered text:


Cool? Creating login pages in APEX 5.0 is ... (you fill in the word)

Interested in more? We're doing an APEX 5.0 UI Training in May.
Categories: Development

Delphix User Group Presentation

Bobby Durrett's DBA Blog - Wed, 2015-03-11 16:30

My Delphix user group presentation went well today. 65 people attended.  It was great to have so much participation.

Here are links to my PowerPoint slides and a recording of the WebEx:

Slides: PowerPoint

Recording: WebEx

Also, I want to thank two Delphix employees, Ann Togasaki and Matthew Yeh.  Ann did a great job of converting my text bullet points into a visually appealing PowerPoint.  She also translated my hand drawn images into useful drawings.  Matthew did an amazing job of taking my bullet points and my notes and adding meaningful graphics to my text-only slides.

I could not have put the PowerPoint together in time without Ann and Matthew’s help and they did a great job.

Also, for the first time I wrote out my script word for word and added it to the notes on the slides.  So, you can see what I intended to say with each slide.

Thank you to Adam Leventhal of Delphix for inviting me to do this first Delphix user group WebEx presentation.  It was a great experience for me and I hope that it was useful to the user community as well.

– Bobby

Categories: DBA Blogs

Webcast Replay: Public Sector FMW: Mobility Solutions – Re-Think Mobile

WebCenter Team - Wed, 2015-03-11 14:48

Mobile is the digital disruptor that has transformed industries and organizations big and small. Mobile transformations are everywhere, across all industries in organizations of all sizes. The enterprise mobile market is expected to bring in $140 billion by 2020, and yet today 7 in 10 enterprises are still struggling to keep pace with new mobile devices and systems. We know that access to relevant information, anywhere and anytime is expected, yet connecting to back-end systems and securing the corporate data is both complex and a necessity.

Watch this webinar to learn how customers are re-thinking their enterprise mobile strategy and unifying the client, content, context, security and cloud in their enterprise mobile strategy. Through case studies and live demonstrations, Oracle Gold Partner 3Di will present how customers like you have successfully addressed these questions using Oracle Technologies and 3Di's solutions, innovations and services.

Find out more here

Product links:

o http://www.oracle.com/technetwork/middleware/webcenter/suite/overview/index.html

o http://www.oracle.com/us/technologies/bpm/overview/index.html

o http://www.oracle.com/technetwork/developer-tools/maf/overview/index.html

Case Studies:

o https://blogs.oracle.com/webcenter/entry/los_angeles_department_of_building

o https://blogs.oracle.com/fusionmiddleware/entry/ladwp_transformed_customer_experience_with


Three Weeks with the Nike+ Fuelband SE

Oracle AppsLab - Wed, 2015-03-11 11:42

I don’t like wearing stuff on my wrist, but in my ongoing quest to learn more about the wearables our users wear, I have embarked on a journey.

For science! And for better living through math, a.k.a. the quantified self.

And because I’ll be at HCM World later this month talking about wearables, and because wearables are a thing, and we have a Storify to prove it, and we need to understand them better, and the Apple Watch is coming (squee!) to save us all from our phones and restore good old face time (not that Facetime) and and and. Just keep reading.

Moving on, I just finished wearing the Nike+ Fuelband SE for three weeks, and today, I’m starting on a new wearable. It’s a surprise, just wait three weeks.

Now that I’ve compiled a fair amount of anecdotal data, I figured a loosely organized manifest of observations (not quite a review) was in order.

The band

The Fuelband isn’t my first fitness tracker; you might recall I wore the Misfit Shine for a few months. Unlike the minimalist Shine, the Fuelband has quite a few more bells and whistles, starting with its snazzy display.

Check out a teardown of the nifty little bracelet, some pretty impressive stuff inside there, not bad for a shoe and apparel company.

I’ve always admired the design aspects of Nike’s wearables, dating back to 2012 when Noel (@noelportugal) first started wearing one. So, it was a bit sad to hear about a year ago that Nike was closing that division.

Turns out the Fuelband wasn’t dead, and when Nike finally dropped an Android version of the Nike+ Fuelband app, I sprang into action, quite literally.

Anyway, the band didn’t disappoint. It’s lightweight and can be resized using a nifty set of links that can be added or removed.


The fit wasn’t terribly tight, and the band is surprisingly rigid, which eventually caused a couple areas on my wrist to rub a little raw, no biggie.

The biggest surprise was the first pinch I got closing the clasp. After a while, it got easier to close and less pinchy, but man that first one was a zinger.

The battery life was good, something that I initially worried about, lasting about a week per full charge. Nike provides an adapter cord, but the band’s clasp can be plugged directly into a USB port, which is a cool feature, albeit a bit awkward looking.

It’s water-resistant too, which is a nice plus.

Frankly, the band is very much the same one that Noel showed me in 2012, and the lack of advancement is one of the complaints users have had over the years.

The app and data

Entering into this, I fully expected to be sucked back into the statistical vortex that consumed me with the Misfit Shine, and yeah, that happened again. At least, I knew what to expect this time.

Initial setup of the band requires a computer and a software download, which isn’t ideal. Once that was out of the way, I could do everything using the mobile app.

The app worked flawlessly, and it looks good, more good design from Nike. I can’t remember any sync issues or crashes during the three-week period. Surprising, considering Nike resisted Android for so long. I guess I expected their foray into Android to be janky.

I did find one little annoyance. The app doesn’t support the Android Gallery for adding a profile picture, but that’s the only quibble I have.

Everything on the app is easily figured out; there’s a point system, NikeFuel. The band calculates steps and calories too, but NikeFuel is Nike’s attempt to normalize effort for specific activities, which also allows for measurement and competition among participants.

The default NikeFuel goal for each day is 2,000, a number that can be configured. I left it at 2,000 because I found that to be easy to reach.

The app includes Sessions too, which allow the wearer to specify the type of activity s/he is doing. I suppose this makes the NikeFuel calculation more accurate. I used Sessions as a way to categorize and compare workouts.

I tested a few Session types and was stunned to discover that the elliptical earned me less than half the NikeFuel than running on a treadmill for the same duration.


Update: Forgot to mention that the app communicates in real time with the band (vs. periodic syncing), so you can watch your NikeFuel increase during a workout, pretty cool.

Overall, the Android app and the web app at nikeplus.com are both well-done and intuitive. There’s a community aspect too, but that’s not for me. Although I did enjoy watching my progress vs. other men my age in the web app.

One missing feature of the Fuelband, at least compared to its competition, is the lack of sleep tracking. I didn’t really miss this at first, but now that I have it again, with the surprise wearable I’m testing now, I’m realizing I want it.

Honestly, I was a bit sad to take off the Fuelband after investing three weeks into it. Turns out, I really liked wearing it. I even pondered continuing its testing and wearing multiple devices to do an apples-to-apples comparison, but Ultan (@ultan) makes that look good. I can’t.

So, stay tuned for more wearable reviews, and find me at HCM World if you’re attending.

Anything to add? Find the comments.

#db12c now certified for #em12c repository (MOS Note: 1987905.1) with some restrictions

DBASolved - Wed, 2015-03-11 11:06

Last October, at Oracle Open World 2014, I posted about a discussion where there was confusion about whether Oracle Database 12c was supported as the Oracle Management Repository (OMR).  At the time, Oracle had put a temporary suspension on support for the OMR running on Oracle Database 12c.

Over the last week or so, in discussions with some friends, I heard that there might be an announcement on this topic soon.  As of yesterday, I was provided a MOS note number to reference (1987905.1) for OMR support on database 12c.  Checking out the note, it appears that the OMR can now be run on a database 12c instance (12.1.0.2) with some restrictions.

These restrictions are:

  • Must apply database patch 20243268
  • Must apply patchset 12.1.0.2.1 (OCT PSU) or later

This note (1987905.1) is welcomed by many in the community who want to build their OMS on the latest database version.  What is missing from the note is whether installing the OMR into a pluggable database (PDB) is supported.  I guess the only way to find out is to try building a new Oracle Enterprise Manager 12c on top of a pluggable database and see what happens.  At least for now, Oracle Database 12c is supported as the OMR.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

Flashback Logging

Jonathan Lewis - Wed, 2015-03-11 09:21

One of the waits that is specific to ASSM (automatic segment space management) is the “enq: FB – contention” wait. You find that the “FB” enqueue has the following description and wait information when you query v$lock_type and v$event_name:


SQL> execute print_table('select * from v$lock_type where type = ''FB''')
TYPE                          : FB
NAME                          : Format Block
ID1_TAG                       : tablespace #
ID2_TAG                       : dba
IS_USER                       : NO
DESCRIPTION                   : Ensures that only one process can format data blocks in auto segment space managed tablespaces

SQL> execute print_table('select * from v$event_name where name like ''enq: FB%''')
EVENT#                        : 806
EVENT_ID                      : 1238611814
NAME                          : enq: FB - contention
PARAMETER1                    : name|mode
PARAMETER2                    : tablespace #
PARAMETER3                    : dba
WAIT_CLASS_ID                 : 1893977003
WAIT_CLASS#                   : 0
WAIT_CLASS                    : Other

This tells us that a process will acquire the lock when it wants to format a batch of blocks in a segment in a tablespace using ASSM – and prior experience tells us that this is a batch of 16 consecutive blocks in the current extent of the segment; and when we see a wait for an FB enqueue we can assume that two sessions have simultaneously tried to format the same new batch of blocks and one of them is waiting for the other to complete the format. In some ways, this wait can be viewed (like the “read by other session” wait) in a positive light – if the second session weren’t waiting for the first session to complete the block format it would have to do the formatting itself, which means the end-user has a reduced response time. On the other hand the set of 16 blocks picked by a session is dependent on its process id, so the second session might have picked a different set of 16 blocks to format, which means in the elapsed time of one format call the segment could have had 32 blocks formatted – this wouldn’t have improved the end-user’s response time, but it would mean that more time would pass before another session had to spend time formatting blocks. Basically, in a highly concurrent system, there’s not a lot you can do about FB waits (unless, of course, you do some clever partitioning of the hot objects).

There is actually one set of circumstances where you can have some control of how much time is spent on the wait, but before I mention it I’d like to point out a couple more details about the event itself. First, the parameter3/id2_tag is a little misleading: you can work out which blocks are being formatted (if you really need to), but the “dba” is NOT a data block address (which you might think if you look at the name and a few values). There is a special case when the FB enqueue is being held while you format the blocks in the 64KB extents that you get from system allocated extents, and there’s probably a special case (which I haven’t bothered to examine) if you create a tablespace with uniform extents that aren’t a multiple of 16 blocks, but in the general case the “dba” consists of two parts – a base “data block address” and a single (hex) digit offset identifying which batch of 16 blocks will be formatted.

For example: a value of 0x01800242 means start at data block address 0x01800240, count forward 2 * 16 blocks then format 16 blocks from that point onwards. Since the last digit can only range from 0x0 to 0xf this means the first 7 (hex) digits of a “dba” can only reference 16 batches of 16 blocks, i.e. 256 blocks. It’s not coincidence (I assume) that a single bitmap space management block can only cover 256 blocks in a segment – the FB enqueue is tied very closely to the bitmap block.
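If you want to do that arithmetic directly from the enqueue parameters, here's a quick sketch that decodes the example value above (the same number appears as id2 in v$lock and as the "dba" parameter of the wait event):

SELECT to_char(trunc(dba/16)*16, 'XXXXXXXX')                      base_dba,    -- 1800240
       mod(dba, 16)                                               batch#,      -- 2
       to_char(trunc(dba/16)*16 + 16 * mod(dba, 16), 'XXXXXXXX')  first_block  -- 1800260
FROM   (SELECT to_number('01800242', 'XXXXXXXX') dba FROM dual);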

So now it’s time to ask why this discussion of the FB enqueue appears in an article titled “Flashback Logging”. Enable the 10704 trace at level 10, along with the 10046 trace at level 8 and you’ll see. Remember that Oracle may have to log the old version of a block before modifying it and if it’s a block that’s being reused it may contribute to “physical reads for flashback new” – here’s a trace of a “format block” event:


*** 2015-03-10 12:50:35.496
ksucti: init session DID from txn DID:
ksqgtl:
        ksqlkdid: 0001-0023-00000014

*** 2015-03-10 12:50:35.496
*** ksudidTrace: ksqgtl
        ktcmydid(): 0001-0023-00000014
        ksusesdi:   0000-0000-00000000
        ksusetxn:   0001-0023-00000014
ksqgtl: RETURNS 0
WAIT #140627501114184: nam='db file sequential read' ela= 4217 file#=6 block#=736 blocks=1 obj#=192544 tim=1425991835501051
WAIT #140627501114184: nam='db file sequential read' ela= 674 file#=6 block#=737 blocks=1 obj#=192544 tim=1425991835501761
WAIT #140627501114184: nam='db file sequential read' ela= 486 file#=6 block#=738 blocks=1 obj#=192544 tim=1425991835502278
WAIT #140627501114184: nam='db file sequential read' ela= 522 file#=6 block#=739 blocks=1 obj#=192544 tim=1425991835502831
WAIT #140627501114184: nam='db file sequential read' ela= 460 file#=6 block#=740 blocks=1 obj#=192544 tim=1425991835503326
WAIT #140627501114184: nam='db file sequential read' ela= 1148 file#=6 block#=741 blocks=1 obj#=192544 tim=1425991835504506
WAIT #140627501114184: nam='db file sequential read' ela= 443 file#=6 block#=742 blocks=1 obj#=192544 tim=1425991835504990
WAIT #140627501114184: nam='db file sequential read' ela= 455 file#=6 block#=743 blocks=1 obj#=192544 tim=1425991835505477
WAIT #140627501114184: nam='db file sequential read' ela= 449 file#=6 block#=744 blocks=1 obj#=192544 tim=1425991835505985
WAIT #140627501114184: nam='db file sequential read' ela= 591 file#=6 block#=745 blocks=1 obj#=192544 tim=1425991835506615
WAIT #140627501114184: nam='db file sequential read' ela= 449 file#=6 block#=746 blocks=1 obj#=192544 tim=1425991835507157
WAIT #140627501114184: nam='db file sequential read' ela= 489 file#=6 block#=747 blocks=1 obj#=192544 tim=1425991835507684
WAIT #140627501114184: nam='db file sequential read' ela= 375 file#=6 block#=748 blocks=1 obj#=192544 tim=1425991835508101
WAIT #140627501114184: nam='db file sequential read' ela= 463 file#=6 block#=749 blocks=1 obj#=192544 tim=1425991835508619
WAIT #140627501114184: nam='db file sequential read' ela= 685 file#=6 block#=750 blocks=1 obj#=192544 tim=1425991835509400
WAIT #140627501114184: nam='db file sequential read' ela= 407 file#=6 block#=751 blocks=1 obj#=192544 tim=1425991835509841

*** 2015-03-10 12:50:35.509
ksqrcl: FB,16,18002c2
ksqrcl: returns 0

Note: we acquire the lock (ksqgtl), read 16 blocks by “db file sequential read”, write them to the flashback log (buffer), format them in memory, release the lock (ksqrcl). That lock can be held for quite a long time – in this case 13 milliseconds. Fortunately all the single block reads after the first have been accelerated by O/S prefetching; your timings may vary.

The higher the level of concurrent activity the more likely it is that processes will collide trying to format the same 16 blocks (the lock is exclusive, so the second will request and wait, then find that the blocks are already formatted when it finally gets the lock). This brings me to the special case where waits for the FB enqueue might have a noticeable impact … if you’re running parallel DML and Oracle decides to use “High Water Mark Brokering”, which means the parallel slaves are inserting data into a single segment instead of each using its own private segment and leaving the query co-ordinator to clean up round the edges afterwards. I think this is most likely to happen if you have a tablespace using fairly large extents and Oracle thinks you’re going to process a relatively small amount of data (e.g. small indexes on large tables) – the trade-off is between collisions between processes and wasted space from the private segments.
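As a quick way of seeing whether an instance is suffering from this at all, a sketch of a query against v$enqueue_stat shows the cumulative requests, waits and wait time for the FB enqueue since startup:

SELECT eq_type, total_req#, total_wait#, cum_wait_time
FROM   v$enqueue_stat
WHERE  eq_type = 'FB';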


Brian Whitmer No Longer in Operational Role at Instructure

Michael Feldstein - Wed, 2015-03-11 09:17

By Phil Hill

Just over a year and a half ago, Devlin Daley left Instructure, the company he co-founded. It turns out that both founders have made changes as Brian Whitmer, the other company co-founder, left his operational role in 2014 but is still on the board of directors. For some context from the 2013 post:

Instructure was founded in 2008 by Brian Whitmer and Devlin Daley. At the time Brian and Devlin were graduate students at BYU who had just taken a class taught by Josh Coates, where their assignment was to come up with a product and business model to address a specific challenge. Brian and Devlin chose the LMS market based on the poor designs and older architectures dominating the market. This design led to the founding of Instructure, with Josh eventually providing seed funding and becoming CEO by 2010.

Brian had a lead role until last year for Instructure’s usability design and for its open architecture and support for LTI standards.

The reason for Brian’s departure (based on both Brian’s comments and comments from Instructure statements) is based on his family. Brian’s daughter has Rett Syndrome:

Rett syndrome is a rare non-inherited genetic postnatal neurological disorder that occurs almost exclusively in girls and leads to severe impairments, affecting nearly every aspect of the child’s life: their ability to speak, walk, eat, and even breathe easily.

As Instructure grew, Devlin became the road show guy while Brian stayed mostly at home, largely due to family. Brian’s personal experiences have led him to create a new company: CoughDrop.

Some people are hard to hear — through no fault of their own. Disabilities like autism, cerebral palsy, Down syndrome, Angelman syndrome and Rett syndrome make it harder for many individuals to communicate on their own. Many people use Augmentative and Alternative Communication (AAC) tools in order to help make their voices heard.

We work to help bring out the voices of those with complex communication needs through good tech that actually makes things easier and supports everyone in helping the individual succeed.

This work sounds a lot like early Instructure, as Brian related to me this week.

Augmentative Communication is a lot like LMS space was, in need of a reminder of how things can be better.

By the middle of 2014, Brian left all operational duties although he remains on the board (and he plans to remain on the board and acting as an adviser).

How will this affect Instructure? I would look at Brian’s key roles in usability and open platform to see if Instructure keeps up his vision. From my view the usability is just baked into the company’s DNA[1] and will likely not suffer. The question is more on the open side. Brian led the initiative for the App Center as I described in 2013:

The key idea is that the platform is built to easily add and support multiple applications. The apps themselves will come from EduAppCenter, a website that launched this past week. There are already more than 100 apps available, with the apps built on top of the Learning Tools Interoperability (LTI) specification from IMS global learning consortium. There are educational apps available (e.g. Khan Academy, CourseSmart, Piazza, the big publishers, Merlot) as well as general-purpose tools (e.g. YouTube, Dropbox, WordPress, Wikipedia).

The apps themselves are wrappers that pre-integrate and give structured access to each of these tools. Since LTI is the most far-reaching ed tech specification, most of the apps should work on other LMS systems. The concept is that other LMS vendors will also sign on to the edu-apps site, truly making them interoperable. Whether that happens in reality remains to be seen.

What the App Center will bring once it is released is the simple ability for Canvas end-users to add the apps themselves. If a faculty adds an app, it will be available for their courses, independent of whether any other faculty use that set up. The same applies for students who might, for example, prefer to use Dropbox to organize and share files rather than native LMS capabilities.

The actual adoption by faculty and institutions of this capability takes far longer than people writing about it (myself included) would desire. It takes time and persistence to keep up the faith. The biggest risk that Instructure faces by losing Brian’s operational role is whether they will keep this vision and maintain their support for open standards and third-party apps – opening up the walled garden, in other words.

Melissa Loble, Senior Director of Partners & Programs at Instructure[2], will play a key role in keeping this open vision alive. I have not heard anything indicating that Instructure is changing, but this is a risk from losing a founder who internally ‘owned’ this vision.

I plan to share some other HR news from the ed tech market in future posts, but for now I wish Brian the best with his new venture – he is one of the truly good guys in ed tech.

Update: I should have given credit to Audrey Watters, who prompted me to get a clear answer on this subject.

  1. Much to Brian’s credit
  2. Formerly Associate Dean of Distance Ed at UC Irvine and key player in Walking Dead MOOC

The post Brian Whitmer No Longer in Operational Role at Instructure appeared first on e-Literate.

APEX Connect June 2015

Denes Kubicek - Wed, 2015-03-11 07:39
APEX Connect in Düsseldorf in June 2015 is going to be the biggest APEX-only event in Germany so far. You should consider joining us.

APEX Connect in Düsseldorf in June 2015 will be the biggest APEX gathering so far. Sign up and help us make it even bigger and more successful. Many interesting talks and, above all, many interesting people from the APEX world will be there. It is an excellent opportunity to learn a lot of new things. The registration form can be accessed here. The prices are moderate and entirely reasonable.

Categories: Development

Announcement: DB12c certified with EM12c

Jean-Philippe Pinte - Wed, 2015-03-11 00:55


It is now possible to use a 12.1.0.2.1 database for the Oracle Enterprise Manager 12.1.0.4 repository (OMR): http://ora.cl/tY3


Adding MySQL driver to Spring Boot CLI Groovy Demo

Pas Apicella - Tue, 2015-03-10 15:18
I previously showed how you can use the Spring Boot CLI to create a simple RESTful application in Groovy that says Hello World, as shown in the link below.

http://theblasfrompas.blogspot.com.au/2015/02/spring-boot-hello-world-from-command.html

If you wanted to extend that demo to include an additional dependency JAR file, such as the MySQL driver JAR, you would do the following:

1. You can add extensions to the CLI using the install command, as shown below, to add the MySQL driver. It is installed in the lib folder of the Spring Boot CLI installation directory.

> spring install mysql:mysql-connector-java:5.1.34

2. Package the application into a JAR which now includes the MySQL driver JAR file, enabling you to connect to a MySQL instance from your application. You will still need to write the code to do that, but the driver JAR is now packaged in the JAR that is created so you can do so.

> spring jar -cp /usr/local/Cellar/springboot/1.2.1.RELEASE/lib/mysql-connector-java-5.1.34.jar hello.jar hello.groovy

Note: If you find that you reach the limits of the CLI tool, you will probably want to look at converting your application to a full Gradle- or Maven-built “groovy project”.

More Information

http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#cli-using-the-cli


Categories: Fusion Middleware

Dana Center and New Mathways Project: Taking curriculum innovations to scale

Michael Feldstein - Tue, 2015-03-10 15:01

By Phil Hill

Last week the University of Texas’ Dana Center announced a new initiative to digitize their print-based math curriculum and expand to all 50 community colleges in Texas. The New Mathways Project is ‘built around three mathematics pathways and a supporting student success course’, and they have already developed curriculum in print:

Tinkering with the traditional sequence of math courses has long been a controversial idea in academic circles, with proponents of algebra saying it teaches valuable reasoning skills. But many two-year college students are adults seeking a credential that will improve their job prospects. “The idea that they should be broadly prepared isn’t as compelling as organizing programs that help them get a first [better-paying] job, with an eye on their second and third,” says Uri Treisman, executive director of the Charles A. Dana Center at UT Austin, which spearheads the New Mathways Project. [snip]

Treisman’s team has worked with community-college faculty to create three alternatives to the traditional math sequence. The first two pathways, which are meant for humanities majors, lead to a college-level class in statistics or quantitative reasoning. The third, which is still in development, will be meant for science, technology, engineering, and math majors, and will focus more on algebra. All three pathways are meant for students who would typically place into elementary algebra, just one level below intermediate algebra.

When starting, the original problem was viewed as ‘fixing developmental math’. As they got into the design, the team restated the problem to be solved as ‘developing coherent pathways through gateway courses into modern degrees of study that lead to economic mobility’. The Dana Center worked with the Texas Association of Community Colleges to develop the curriculum, which is focused on active learning and group work that can be tied to the real world.

The Dana Center approach is based on four principles:

  • Courses student take in college math should be connected to their field of study.
  • The curriculum should accelerate or compress to allow students to move through developmental courses in one year.
  • Courses should align with student support more closely, and sophisticated learning support will be connected to campus support structures.
  • Materials should be connected to context-sensitive improvement strategy.

What they have found is that there are multiple programs nationwide working roughly along the same principles, including the California improvement project, Accelerated learning project at Baltimore City College, and work in Tennessee at Austin Peay College. In their view the fact of independent bodies coming to similar conclusions adds validity to the overall concept.

One interesting aspect of the project is that it is targeted at an entire state’s community college system – this is not a pilot approach. After winning a Request for Proposal selection, Pearson[1] will integrate the active-learning content into a customized mix of MyMathLabs, Learning Catalytics, StatCrunch and CourseConnect tools. Given the Dana Center’s small size, one differentiator for Pearson was their size and ability to help a program move to scale.

Another interesting aspect is the partnership approach with TACC. As shared on the web site:

  • A commitment to reform: The TACC colleges have agreed to provide seed money for the project over 10 years, demonstrating a long-term commitment to the project.
  • Input from the field: TACC member institutions will serve as codevelopers, working with the Dana Center to develop the NMP course materials, tools, and services. They will also serve as implementation sites. This collaboration with practitioners in the field is critical to building a program informed by the people who will actually use it.
  • Alignment of state and institutional policies: Through its role as an advocate for community colleges, TACC can connect state and local leaders to develop policies to support the NMP goal of accelerated progress to and through coursework to attain a degree.

MDRC, the same group analyzing CUNY’s ASAP program, will provide independent reporting of the results. There should be implementation data available by the end of the year, and randomized controlled studies to be released in 2016.

To me, this is a very interesting initiative to watch. Given MDRC’s history of thorough documentation, we should be able to learn plenty of lessons from the state-wide deployment.

  1. Disclosure: Pearson is a client of MindWires Consulting.

The post Dana Center and New Mathways Project: Taking curriculum innovations to scale appeared first on e-Literate.

Smart HR IT Brings New Insights to Age-Old HR Challenges

Linda Fishman Hoyle - Tue, 2015-03-10 14:44

A Guest Post written by Oracle's Aaron Lazenby, Profit Magazine

In the age of digital disruption, there’s plenty to distract executives from their core mission. Chief human resource officers (CHROs) are no exception; new technologies (such as big data analytics, social recruiting, and gamification) have the potential to transform the people function. But executives have to balance the adoption of new technologies with the demands of maintaining the current business.

For Joyce Westerdahl, (pictured left), CHRO at Oracle, the key is keeping her eye on the business. “Stay absolutely focused on what’s happening at your own company and in your own industry,” she says. Here, Westerdahl talks to Profit about how she assesses her department’s technology needs, how she approaches new IT trends, and what HR managers should be paying attention to in order to succeed.

Lazenby (pictured right): What drives Oracle’s talent management strategy?

Westerdahl: Our main focus is to make Oracle a destination employer. We want this to be a place where people can grow their careers; a place where employees are challenged but have the support they need to get the job done. And we want our employees to stay—for the sake of their own professional development and the growth of the company. Having the right technology in place—to automate processes, improve the employee experience, and add new insights—is a key part of how we make that possible.

But when I look at the talent management challenges we face at Oracle, for example, I don’t think things have changed that much in the past 20 years. There has always been a war for talent. Even during tough recessions, we are always competing for top talent. We are always looking for better ways to recruit the right people. What have changed are the tools we have at our disposal for finding and engaging them.

Lazenby: How has Oracle’s HR strategy influenced the products the company creates?

Westerdahl: Our acquisition journey has created an incredibly diverse environment, from both a talent and a technology perspective. When we add companies with different business models, platforms, and cultures to the Oracle family, we have a unique opportunity to learn about other businesses from the inside and translate that into new product functionality. Integrating and onboarding acquired employees and transitioning the HR processes and technologies becomes a stream of knowledge, best practices, and requirements that feeds into product development.

When I look back at two of our key acquisitions—PeopleSoft in 2005 and Sun in 2010—I am reminded of how big a difference technology can make in HR’s ability to support a robust acquisition strategy. The offer process is complex and critical in an acquisition. With PeopleSoft, we created 7,500 US paper offer packages for new employees and loaded them onto FedEx trucks for distribution. It was a resource- and time-intensive, manual process that took three weeks. Five years later when we acquired Sun, we used technology to automate the process and it took less than an hour to generate more than 11,000 US offers. We were able to generate and distribute offers and receive acceptances in about a week.

Lazenby: How are trends like big data affecting HR?

Westerdahl: The volume of new HR data has become astounding over the past couple years. Being able to harness and leverage data is critical to HR’s ability to add strategic business value. When we develop an analytics strategy for Oracle HR, we start with a strategy around what the business wants to achieve. Then we translate that into data requirements and measures: What do we need to know? How can we measure our efforts to make sure we’re on track? What are the key trend indicators, and what is the process for translating the data into actionable HR efforts?

There is tremendous value in being able to use data to improve the employee experience so we can attract, engage, and develop the right talent for our business. For example, an employee engagement survey we conducted revealed that new employees were having a hard time onboarding at Oracle. We could also see this reflected in data that measures time to productivity for new employees. But with the survey, we had another measure that showed us not only the productivity aspect but also the employee frustration aspect. It’s a richer view of the problem, which helped us shape the right solution.

Lazenby: What do you think HR managers can learn from Oracle’s experience?

Westerdahl: I think the challenges HR faces are mostly the same as they always have been: how to recruit the best talent, how to onboard recruits when we are growing quickly, and how to retain and develop employees. But now technology supports new ways of doing things, so you have to decide how to use IT to solve these age-old HR challenges within your business. The key is turning things upside down and viewing things from a fresh perspective. There is no one-size-fits-all approach.

Converting a Classic PIA Component to Fluid UI

PeopleSoft Technology Blog - Tue, 2015-03-10 14:17

PeopleSoft just published a new Red-Paper that describes how to convert an existing Classic Component to a new Fluid UI Component.  It's a great resource for anyone that is interested in learning more about Fluid UI and the steps required to move a Pixel-Perfect Component to a new responsive Fluid UI Component.
The Red-Paper is called Converting Classic PIA Components to PeopleSoft Fluid User Interface.  You can find it on My Oracle Support, document ID 1984833.1.

There are a few things I want to point out that are in the document. 

  1. It gives great instruction with an example of how to convert a Component to Fluid UI. 
  2. There is a very important section on classic controls that are not supported in Fluid UI, or require some type of conversion.  I can't stress how important that is.
  3. There is an appendix that identifies the delivered PeopleSoft style classes.  You'll find that appendix extremely valuable when you're looking for the right style to get the UI you want.

Of course, converting an existing component is only one way to take advantage of the new Fluid UI.  Refactoring existing components to optimize them for different form factors, or building new components, are certainly possible.  In many cases, developers want to leverage existing components to take advantage of tried and tested business logic.