
Feed aggregator

Quiz Time. Why Do Deletes Cause An Index To Grow ? (Up The Hill Backwards)

Richard Foote - Thu, 2015-06-25 01:02
OK, time for a little quiz. One of the things I’ve seen at a number of sites is the almost fanatical drive to make indexes as small as possible because indexes that are larger than necessary both waste storage and hurt performance. Or so the theory goes …   :) In many cases, this drives DBAs to […]
Categories: DBA Blogs

another way to list invalid objects

Yann Neuhaus - Thu, 2015-06-25 01:00

How often did I type a query like this to list the invalid objects in a database?

select count(*)
  from dba_objects
 where status <> 'VALID';
    -- and user in/not in
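
For reference, a common variant is to break the count down by owner and object type (the excluded schemas below are only an example - adjust them to your environment):

select owner, object_type, count(*)
  from dba_objects
 where status <> 'VALID'
   and owner not in ('SYS','SYSTEM')
 group by owner, object_type
 order by owner, object_type;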

Today I learned another way to do the same.

Why do people show Azure so much love?

Tim Hall - Thu, 2015-06-25 00:56

The title of this post is taken from a tweet I saw a few weeks ago and it keeps coming back to haunt me, so I thought I would comment on it.

Let me start by saying I don’t have any context as to why the tweeter thought people were showing Azure so much love. From my perspective, I kind-of like Azure and I think it is what my employer will end up using, but I’m not a crazed fan-boy about it. :)

Also, I fully understand a move to the cloud is not the right thing for everyone, so this post is focused on those people who do want/need to move to the cloud. Just because it is not right for you, it doesn’t mean it’s not right for anyone else. So when I’m talking about running services on the cloud, it is not a recommendation. I’m not telling you you’ve got to. I’m speaking about cloud services to try to explain why someone might say something like the title of this post. I’m hoping this paragraph will stem the hate-comments that invariably come when you mention the cloud. :)

Interface

The Azure interface is pretty neat. It’s clean and reasonably intuitive. I’m a casual user, so I can’t say how I would feel about it if I were managing hundreds or thousands of resources, but from my brief time with it, I like it.

I don’t dislike the AWS interface, but it does feel a bit more cluttered and ugly than the Azure interface. I guess that could be enough to put off some people maybe.

Services

Coming from the Oracle world, we tend to think of UNIX/Linux as being the centre of the universe, but if I think back to the companies I’ve worked for over the years, the majority of their kit has been Windows-based, with the exception of the bits I work on. :) Since most corporate desktops are still Windows-based, Outlook, Office and Active Directory tend to rule the roost. If you are thinking of moving those services on to the cloud, Azure seems the “obvious choice”. Am I saying they are the best products and Azure is the best place to run them? No. What I’m saying is it will be seen as the “obvious choice” for many people wanting to move to the cloud.

The same goes with SQL Server. I happen to like the AWS RDS for SQL Server implementation, but I’m guessing a lot of SQL Server folks will get a warmer and fuzzier feeling about running SQL Server on Azure. Lots of decisions in IT are based on gut instinct or personal bias of the buyers, not necessarily fact. I can see how someone will “feel happier” there.

Once the Oracle Cloud becomes generally available, we may see a similar issue there. People may feel happier about running Oracle products on the Oracle Cloud than on AWS or Azure. Time will tell.

What’s under the hood?

This is where cloud really turns stuff on its head. If I want to run a Linux VM, I can do that on AWS, Azure, Oracle Cloud, VMware vCloud Air etc. From my perspective, if the VM stays up and gives me the performance I paid for, do I really care about what’s under the hood? You can be snobbish about hypervisors, but do I care if Oracle are using less hardware to service the same number of VMs as Azure? No. Where infrastructure as a service (IaaS) is concerned, it is all about the price:performance ratio. As I’ve heard many times, it’s a race for the bottom.

Call me naive, but I really don’t care what is happening under the hood of a cloud service, provided I get what I pay for. I think this is an important factor in how someone like Microsoft can go from zero to hero of the cloud world. If they provide the right services at the right price, people will come.

Conclusion

Q: Why do people show Azure so much love?

A: Because it does what it is meant to do. It provides the services certain companies want at a price they are willing to pay. What’s not to love?

Q: So it’s the best cloud provider right?

A: That depends on your judging criteria. No one cloud provider is “the best”. For some people Azure will be the best option. For others it might be the worst.

Cheers

Tim…


68% of Statistics Are Meaningless, D2L Edition

Michael Feldstein - Wed, 2015-06-24 17:27

By Michael Feldstein

Two years ago, I wrote about how D2L’s analytics package looked serious and potentially ground-breaking, but that there were serious architectural issues with the underlying platform that were preventing the product from working properly for customers. Since then, we’ve been looking for signs that the company has dealt with these issues and is ready to deliver something interesting and powerful. And what we’ve seen is…uh…

…uh…

Well, the silence has ended. I didn’t get to go to FUSION this year, but I did look at the highlights of the analytics announcements, and they were…

…they were…

OK, I’ll be honest. They were incredibly disappointing in almost every way possible, and good examples of a really bad pattern of hype and misdirection that we’ve been seeing from D2L lately.

You can see a presentation of the “NEW Brightspace Insights(TM) Analytics Suite” here. I would embed the video for you but, naturally, D2L uses a custom player from which they have apparently stripped embedding capabilities. Anyway, one of the first things we learn from the talk is that, with their new, space-age, cold-fusion-powered platform, they “deliver the data to you 20 times faster than before.” Wow! Twenty times faster?! That’s…like…they’re giving us the data even before the students click or something. THEY ARE READING THE STUDENTS’ MINDS!

Uh, no. Not really.

A little later on in the presentation, if you listen closely, you’ll learn that D2L was running a batch process to update the data once every 24 hours. Now, two years after announcing their supposed breakthrough data analytics platform, they are proud to tell us that they can run a batch process every hour. As I write this, I am looking at my real-time analytics feed on my blog, watching people come and go. Which I’ve had for a while. For free. Of course, saying it that way, a batch process every hour, doesn’t sound quite as awesome as

TWENTY TIMES FASTER!!!!!

So they go with that.

There was an honest way in which they could have made the announcement and still sounded great. They could have said something like this:

You know, when LMSs were first developed, nobody was really thinking about analytics, and the technology to do analytics well really wasn’t at a level where it was practical for education anyway. Times have changed, and so we have had to rebuild Brightspace from the inside out to accommodate this new world. This is an ongoing process, but we’re here to announce a milestone. By being able to deliver you regular, intra-day updates, we can now make a big difference in their value to you. You can respond more quickly to student needs. We are going to show you a few examples of it today, but the bigger deal is that we have this new structural capability that will enable us to provide you with more timely analytics as we go.

That’s not a whole lot different in substance than what they actually said. And they really needed to communicate in a hype-free way, because what was the example that they gave for this blazing fast analytics capability? Why, the ability to see if students had watched a video.

Really. That was it.

Now, here again, D2L could have scored real points for this incredibly underwhelming example if they had talked honestly about Caliper and its role in this demo. The big deal here is that they are getting analytics not from Brightspace but from a third-party tool (Kaltura) using IMS Caliper. Regular readers know that I am a big fan of the standard-in-development. I think it’s fantastic that an LMS company has made an early commitment to implement the standard and is pushing it hard as differentiator. That can make the difference between a standard getting traction or remaining an academic exercise. How does D2L position this move? From their announcement:

With our previous analytics products, D2L clients received information on student success even before they took their first test. This has helped them improve student success in many ways, but the data is limited to Brightspace tools. The new Brightspace Insights is able to aggregate student data, leveraging IMS Caliper data, across a wide variety of learning tools within an institution’s technology ecosystem.

We’ve seen explosive growth in the use of external learning tools hooked into Brightspace over the past eighteen months. In fact, we are trending toward 200% growth over 2014. [Emphasis added.] That’s a lot of missing data.

This helps create a more complete view of the student. All of their progress and experiences are captured and delivered through high performance reports, comprehensive data visualizations, and predictive analytics.

Let’s think about an example like a student’s experiences with publisher content and applications. Until now, Brightspace was able to capture final grades but wouldn’t track things like practice quizzes or other assessments a student has taken. It wouldn’t know if a student didn’t get past the table of contents in a digital textbook. Now, the new Brightspace Insights captures all of this data and creates a more complete, living, breathing view of a student’s performance.

This is a big milestone for edtech. No other LMS provider is able to capture data across the learning technology ecosystem like this. [Emphasis added.]

I have no problem with D2L crowing about being early to market with a Caliper implementation. But let’s look at how they positioned it. First, they talked about 200% growth in use of external learning tools in 2015. But what does that mean? Going from one tool to three tools? And what kind of tools are they? And what do we know about how they are being used? OK, on that last question, maybe analytics are needed to answer it. But the point is that D2L has a pattern of punctuating every announcement or talk with an impressive-sounding but meaningless statistic to emphasize how awesome they are. Phil recently caught John Baker using…questionable retention statistics in a speech he gave. In that case, the problem wasn’t that the statistic itself was meaningless but rather that there was no reason to believe that D2L had anything to do with the improvement in the case being cited. And then there’s the sleight of hand that Phil just called out regarding their LeaP marketing. It’s not as bad as some of the other examples, in my opinion, but still disturbingly consistent with the pattern we are seeing. I am starting to suspect that somebody in the company literally made a rule: Every talk or announcement must have a statistic in it. Doesn’t matter what the statistic is, or whether it means anything. Make one up if you have to, but get it in there.

But back to analytics. The more egregious claim in the quote above is that “no other LMS provider is able to capture data across the learning technology ecosystem like this [example that we just gave],” because D2L can’t either yet. They have implemented a pre-final draft of a standard which requires both sides to implement in order for it to work. I don’t know of any publishers who have announced they are ready to provide data in the way described in D2L’s example. In fact, there are darned few app providers of any kind who are there yet. (Apparently, Kaltura is one of them.) Again, this could have been presented honestly in a way that made D2L look fantastic. Implementing first puts them in a leadership position, even if that leadership will take a while to pay practical dividends for the customer. But they went for hype instead.

I can’t remember the last time I read one of D2L’s announcements without rolling my eyes. I used to have respect for the company, but now I have to make a conscious effort not to dismiss any of their pronouncements out-of-hand. Not because I think it’s impossible that they might be doing good work, but because they force me to dive into a mountain of horseshit in the hopes of finding a nugget of gold at the bottom. Every. Single. Time. I’m not sure how much of the problem is that they have decided that they need to be disingenuous because they are under threat from Instructure or under pressure from investors and how much of it is that they are genuinely deluding themselves. Sadly, there have been some signs that at least part of the problem is the latter situation, which is a lot harder to fix. But there is also a fundamental dishonesty in the way that these statistics have been presented.

I don’t like writing this harshly about a company—particularly one that I have had reason to praise highly in the past. I don’t do it very often. But enough is enough already.

 

The post 68% of Statistics Are Meaningless, D2L Edition appeared first on e-Literate.

About The D2L Claim Of BrightSpace LeaP And Academic Improvements

Michael Feldstein - Wed, 2015-06-24 16:07

By Phil Hill

Recently I wrote a post checking up on a claim by D2L that seems to imply that their learning platform leads to measurable improvements in academic performance. The genesis of this thread is a panel discussion at the IMS Global conference where I argued that LMS usage in aggregate has not improved academic performance but is important, or even necessary, infrastructure with a critical role. Unfortunately, I found that D2L’s claim from Lone Star was misleading:

That’s right – D2L is taking a program where there is no evidence that LMS usage was a primary intervention and using the results to market and strongly suggest that using their LMS can “help schools go beyond simply managing learning to actually improving it”. There is no evidence presented[2] of D2L’s LMS being “foundational” – it happened to be the LMS during the pilot that centered on ECPS usage.

Subsequently I found a press release at D2L with a claim that appeared to be more rigorous and credible (written in an awful protected web page that prevents select – copy – paste).

D2L Launches the Next Generation of BrightSpace and Strives to Accelerate the Nation’s Path to 60% Attainment

D2L, the EdTech company that created Brightspace, today announces the next generation of its learning platform, designed to develop smarter learners and increase graduation rates. By featuring a new faculty user interface (UI) and bringing adaptive learning to the masses, Brightspace is more flexible, smarter, and easier to use. [snip]

D2L is changing the EdTech landscape by enabling students to learn more with Brightspace LeaP adaptive learning technology that brings personalized learning to the masses, and will help both increase graduation rates and produce smarter learners. The National Scientific Research Council of Canada (NSERC) produced a recent unpublished study that states: “After collating and processing the results, the results were very favourable for LeaP; the study demonstrates, with statistical significance, a 24% absolute gain and a 34% relative gain in final test scores over a traditional LMS while shortening the time on task by 30% all while maintaining a high subjective score on perceived usefulness.”

I asked the company to provide more information on this “unpublished study”, and I got no response.

Hello, Internet search and phone calls – time to do some investigation to see if there is real data to back up claims.

Details on the Study

The Natural Sciences and Engineering Research Council of Canada (NSERC) is somewhat similar to the National Science Foundation in the US – they are a funding agency. When I called them they made it perfectly clear that they don't produce any studies as claimed, they only fund them. I would have to find the appropriate study and contact the lead researcher. Luckily they shared the link to their awards database, and I did some searching on relevant terms. I eventually found some candidate studies and contacted the lead researchers. It turns out that the study in question was led by none other than Dragan Gasevic, founding program co-chair of the International Conference on Learning Analytics & Knowledge (LAK) in 2011 and 2012, who is now at the University of Edinburgh.

The grant was one of NSERC’s Engage grants which look for researchers to team with companies, and Knowillage was the partner – they have an adaptive learning platform. D2L acquired Knowillage in the middle of the study, and they currently offer the technology as LeaP. LeaP is integrated into the main D2L learning platform (LMS).

The reason the study was not published was simply that Dragan was too busy, including his move to Edinburgh, to complete and publish, but he was happy to share information by Skype.

The study was done on an Introduction to Chemistry course at an unnamed Canadian university. Following ~130 students, the study looked at test scores and time to complete, with two objectives reported – from the class midterm and class final. This was a controlled experiment looking at three groupings:

  • A control group with no LMS, using just search tools and loosely organized content;
  • A group using Moodle as an LMS with no adaptive learning; and
  • A group using Moodle as an LMS with Knowillage / LeaP integrated following LTI standards.

Of note, this study did not even use D2L’s core learning platform, now branded as BrightSpace. It used Moodle as the LMS, but the study was not about the LMS – it was about the pedagogical usage of the adaptive engine used on top of Moodle. It is important to call out that to date, LeaP has been an add-on application that works with multiple LMSs. I have noticed that D2L now redirects their web pages that called out such integrations (e.g. this one showing integration with Canvas and this one with Blackboard) to new marketing just talking about BrightSpace. I do not know if this means D2L no longer allows LeaP integration with other LMSs or not. Update 6/25: Confirmed that LeaP is still being actively marketed to customers of other LMS vendors.

The study found evidence that Knowillage / LeaP allows students to have better test scores than students using just Moodle or no learning platform. This finding was significant even when controlling for students’ prior knowledge and for students’ dispositions (using a questionnaire commonly used in Psychology for motivational strategies and skills). The majority of the variability (a moderate effect size) was still explained by the test condition – use of adaptive learning software.

Dragan regrets the research team’s terminology of “absolute gain” and “relative gain”, but the research did clearly show increased test score gains by use of the adaptive software.

The results were quite different between the mid-term (no significant difference between Moodle+LeaP group and Moodle only group or control group) and the final (significant improvements for Moodle+LeaP well over other groups). Furthermore, the Moodle only group and control group with no LMS reversed gains between midterms and finals. To Dragan, these are study limitations and should be investigated in future research. He still would like to publish these results soon.

Overall, this is an interesting study, and I hope we get a published version soon – it could tell us a bit about adaptive learning, at least in the context of Intro to Chemistry usage.

Back to D2L Claim

Like the Lone Star example, I find a real problem with misleading marketing. D2L could have been more precise and said something like the following:

We acquired a tool, LeaP, that when integrated with another LMS was shown to improve academic performance in a controlled experiment funded by NSERC. We are now offering this tool with deep integration into our learning platform, BrightSpace, as we hope to see similar gains with our clients in the future.

Instead, D2L chose to use imprecise marketing language that implies, or allows the reader to conclude that their next-generation LMS has been proven to work better than a traditional LMS. They never come out and say “it was our LMS”, but they also don’t say enough for the reader to understand the context.

What is clear is that D2L’s LMS (the core of the BrightSpace learning platform) had nothing to do with the study, the actual gains were recorded by LeaP integrated with Moodle, and that the study was encouraging for adaptive learning and LeaP but limited in scope. We also have no evidence that the BrightSpace integration gives any different results than Moodle or Canvas or Blackboard Learn integrations with LeaP. For all we know given the scope of the study, it is entirely possible that there was something unique about the Moodle / LeaP integration that enabled the positive results. We don’t know that, but we can’t rule it out, either.

Kudos to D2L for acquiring Knowillage and for working to make it more available to customers, but once again the company needs to be more accurate in their marketing claims.

The post About The D2L Claim Of BrightSpace LeaP And Academic Improvements appeared first on e-Literate.

Intercepting Table Filter Query and Manipulating VO SQL Statement

Andrejus Baranovski - Wed, 2015-06-24 13:30
I’m going to describe a non-declarative use case. Imagine there is a table with filter functionality; you may want to intercept the filter items and apply the same criteria to another VO. This other VO should be based on the same DB table, so it can apply the criteria items against that table.

The sample application - AdvancedViewCriteriaApp.zip - implements a fragment with a table component and a chart. The table component can be filtered; the criteria is intercepted and applied to the chart, which is rendered from a different VO with a GROUP BY query. The chart stays in sync and displays data according to the criteria filtered in the table:


In the log, I’m printing out intercepted criteria from the table filter:


Chart is rendered from the SQL query below:


The table filter criteria is intercepted by the overridden method buildViewCriteriaClauses. The criteria clause is constructed here; we just need to select the FilterViewCriteria, the one originating from the table filter. We could apply this criteria straight away to the VO responsible for bringing the chart data. However, this would not work - ADF BC would wrap the original chart SQL query with SELECT * FROM (…) QRSLT WHERE (table filter criteria), and the columns referenced by the table filter criteria are not present in the original chart SQL statement. To make it work, I’m updating the original SQL query for the chart data, by updating the WHERE clause part:
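
To illustrate the point with hypothetical table and column names (not taken from the sample application), the wrapped statement would reference a filter column that the inner GROUP BY query does not project:

SELECT * FROM (
    SELECT deptno, SUM(sal) total_sal
      FROM emp
     GROUP BY deptno
) QRSLT
WHERE ename LIKE :bindEname;  -- fails: ENAME is not a column of QRSLT

Pushing the filter criteria into the chart VO's own WHERE clause, before the GROUP BY, is what the updated SQL achieves:

SELECT deptno, SUM(sal) total_sal
  FROM emp
 WHERE ename LIKE :bindEname
 GROUP BY deptno;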


In the last step, we need to pass the bind variable values - the ones the user is searching for in the table filter. This can be done from another overridden method - bindParametersForCollection. We have access to the applied bind variables in this method. Again, you should check for FilterViewCriteria and extract the applied bind variable values. The chart VO will be updated with bind variable definitions created on the fly and assigned the values to search for:


I hope this trick will save you some time if you are going to implement something similar - intercepting the table filter query and applying it to another VO based on the same DB table.

Groovy Time! How to use XML dateTime and duration in BPM 12c

Jan Kettenis - Wed, 2015-06-24 13:27
In this article I show some examples of handling XML dateTime and durations in Groovy in the context of an Oracle BPM 12c application.

Working with dates and durations in Java has always been painful. Mainly because date and time is a complex thing, with different formats and time zones and all, but I sometimes wonder if it has not been made overly complex. Anyway. Working with XML dates is even more complex because of the limited support by XPath functions. Too bad, because in BPM applications that work with dates this has to be done very often, and as a result I very often see the need to create all kinds of custom XPath functions to mitigate that.

This issue of complexity is no different for Groovy scripting in Oracle BPM 12c. And handling of dates is a typical use case for Groovy scripting, precisely because of this limited XPath support. Therefore, to get you started (and help myself some next time) I would like to share a couple of Groovy code snippets for working with XML dates and durations that may be useful. These examples are based on working with the XML dateTime type, and do not handle the complexity of time zones and different formats. In my practice this is 99% of the use cases that I see.

In my opinion you should still limit using Groovy to handle dates to the minimum, and rather use custom XPath functions, or create a Java library which you can import in Groovy. But when you have to, this just might come in handy.

Instantiate an XML Date

If you have an XML element of type dateTime, you use an XmlCalendar object. An XmlCalendar object with the current time can be instantiated as shown below:

Date now = new Date()
GregorianCalendar gregorianNow = new GregorianCalendar()
gregorianNow.setTime(now)
XmlCalendar xmlDate = XmlCalendarFactory.create(gregorianNow)

Instantiate a Duration and Add it to the Date

To instantiate a duration you use an XmlDuration object. In the code below a duration of one day is added to the date:

XmlDuration xmlDuration = new XmlDuration("P1D")
xmlDate.add(xmlDuration)

The string to provide is of type ISO duration.
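
For reference, a few more ISO duration strings (the format is standard ISO-8601; the values are just examples):

XmlDuration thirtyMinutes = new XmlDuration("PT30M")   // 30 minutes (note the T before time parts)
XmlDuration dayAndAHalf   = new XmlDuration("P1DT12H") // 1 day and 12 hours
XmlDuration threeMonths   = new XmlDuration("P3M")     // 3 months (no T, so M means months here)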

The imports to use can also be a pain to find. That actually took me the most time of all, but that can just be me. The ones needed for the above are shown in the following picture (you can get to it by clicking on Select Imports in the top-right corner of the Groovy script).

PaaS Launch and Cloud File Sharing and Collaboration

WebCenter Team - Wed, 2015-06-24 07:28

Thanks for joining us for the big PaaS launch on Monday, June 22nd with Larry Ellison, Thomas Kurian and other senior Oracle and customer executives. Ellison announced more than 24 new services for the Oracle Cloud Platform, which is a comprehensive, integrated suite of services that makes it easier for developers, IT professionals, business users, and analysts to build, extend, and integrate cloud applications and drive co-existence with the existing on-premise infrastructure. For more details on the announcement, cloud services and customer videos, you can catch the on-demand online replay of the event.

One of the key areas for PaaS is cloud content and collaboration that drives frictionless user collaboration, content sharing and business process automation anywhere, anytime and on any device. As an enterprise, you will well recognize the need to drive workforce productivity and operational efficiency by enabling secure availability and access to content on any device, within and beyond the firewall, and connecting it to the business processes and the applications (SaaS and on-premise) to, well, get work done. So, beyond just cloud file sharing, which is what current Enterprise File Sync and Share (EFSS) solutions do, you need to be able to share and access content in context and then be able to drive business processes using that content. That is true cloud content and collaboration, and that is what will drive output and improve productivity in a digital workplace.

Check out this video that we recently released. While it discusses a specific use case tracking the lifecycle from marketing to sales to service, imagine the power of enabling real time collaboration among employees, remote workforce, external vendors and partners and being able to drive output from anywhere, anytime and any device. And let us know what use cases you come across often in your workplace; we look forward to hearing from you.




Base64 encoding/decoding in OSB

Darwin IT - Wed, 2015-06-24 04:28
Of course there are several Java examples to do a base64 encoding on the internet. And there are nearly as many encoding implementations in different environments. But which one works in Weblogic/OSB11g? And to implement those examples, compile and jar them, I find myself on a quest for the necessary jars. Of course you can refer to the weblogic.jar in your project or ant file. But that is a little too much, I think. I'd like to find and deliver the bare minimum of jars needed for my project.

For my latest customer/project I came up with this class:

package nl.alliander.osb.base64;

import java.io.IOException;
import java.io.InputStream;
import java.io.ByteArrayInputStream;
import weblogic.utils.encoders.BASE64Decoder;
import weblogic.utils.encoders.BASE64Encoder;
import java.nio.charset.Charset;

public class Base64EncoderDecoder {

    private static final Charset UTF8_CHARSET;

    public static void main(final String[] args) {
    }

    // Encode a byte array to its Base64 string representation.
    public static String encode(final byte[] bytes) {
        final BASE64Encoder encoder = new BASE64Encoder();
        final String encodedString = encoder.encodeBuffer(bytes);
        return encodedString;
    }

    // Return the length of the byte array (used to report the filesize to the ContentServer).
    public static int getLength(final byte[] bytes) {
        int length = bytes.length;
        return length;
    }

    // Decode a Base64 string back to the original bytes.
    public static byte[] decode(final String b64Document) throws IOException {
        final BASE64Decoder decoder = new BASE64Decoder();
        final InputStream inputStream = new ByteArrayInputStream(b64Document.getBytes(Base64EncoderDecoder.UTF8_CHARSET));
        final byte[] decodedBytes = decoder.decodeBuffer(inputStream);
        return decodedBytes;
    }

    static {
        UTF8_CHARSET = Charset.forName("UTF-8");
    }
}
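
For a quick sanity check outside OSB, the class can be exercised from plain Java like this (a minimal sketch; the class name Base64Demo and the payload are just examples):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import nl.alliander.osb.base64.Base64EncoderDecoder;

public class Base64Demo {
    public static void main(final String[] args) throws IOException {
        // round-trip a small payload through the encoder/decoder above
        final byte[] payload = "Hello OSB".getBytes(StandardCharsets.UTF_8);
        final String b64 = Base64EncoderDecoder.encode(payload);
        final byte[] roundTrip = Base64EncoderDecoder.decode(b64);
        System.out.println(b64.trim() + " decodes back to "
                + Base64EncoderDecoder.getLength(roundTrip) + " bytes");
    }
}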

And if you use JDeveloper11g as an IDE, the only lib you need to compile this is: com.bea.core.utils.full_1.9.0.1.jar. Or com.bea.core.utils.full_1.10.0.0.jar, if using OEPE version 11.1.1.8. The jars can be found in ${oracle.home}/modules. Where ${oracle.home} refers to your JDeveloper11g or OEPE-Middleware installation.

By the way, in my OSB project I need to process Attachments in my message (Soap with Attachments), where I need to upload the documents to a ContentServer. Unfortunately the ContentServer needs the filesize (it apparently does not determine it by base64-decoding the document). So I added the getLength() method to determine it with a java-callout similar to the base64-encode.


Input of the methods is a variable like 'attachmentBin' resulting from an Assign with an expression like:
$attachments/ctx:attachment[ctx:Content-ID/text()=$contentId]/ctx:body/ctx:binary-content


SharePoint 2013 - Office 365 & Power BI

Yann Neuhaus - Wed, 2015-06-24 03:00


Quick reminder of what is SharePoint 2013 and Office 365

SharePoint 2013

SharePoint 2013 is a collaborative platform that allows organizations to increase the efficiency of their business processes.

link: https://technet.microsoft.com/en-us/library/cc303422.aspx

Office 365 with SharePoint 2013

This is the Online version of SharePoint 2013.

When you sign in to Office 365, or your organization’s corporate intranet site, you’ll see links to Newsfeed, OneDrive, and Sites in the global navigation bar.
These are your entry points into SharePoint.

Organizations use SharePoint to create websites. You can use it as a secure place to store, organize, share, and access information from almost any device.
All you need is a web browser, such as Internet Explorer, Google Chrome, or Mozilla Firefox.

link: https://support.office.com/en-za/article/Get-started-with-SharePoint-2013-909ec2f0-05c8-4e92-8ad3-3f8b0b6cf261?ui=en-US&rs=en-ZA&ad=ZA

 

What is Power BI?

Power BI is a Microsoft tool which gives you "visual power": it allows you to build rich visuals to organize and collect the data you care about most, so you stay on top of your business activity.

 

WHAT FOR?

Depending on the area of concern, Power BI can help:

  • MARKETING: 
    • Market Smarter: easily monitor and analyze your marketing campaigns and efficiently allocate your resources to the right channels, all in one place.
    • Monitor your campaign: gives you a view of your campaign efficacy and of how your tactics perform.

    • Talk to the right customers: demographic filters, customer lifetime values, etc. will help you get specific views of your customers' activity.


  • SALES:
    • Crush your quotas: Used with Microsoft Dynamics CRM or Salesforce.com, Power BI extends and enhances these services with instant insight into your pipeline.
    • Sales management: Dashboard creation giving more visibility on results, so you can learn from past deals and then set better goals.
    • Sales representative: Understand how your previous deals performed so you can execute on future deals more efficiently.



  • CUSTOMER SUPPORT: With Power BI, you will be able to track and get a better view and understanding of customer support activities, and drive the team to success.


  • DECISION MAKER: By getting all your data in one dashboard shared with your team, it will help you make the right decision on time.
  • HUMAN RESOURCES: All information related to employees on the same dashboard. It will make your HR meetings and employee reviews so much easier.


 

CONNECTING DATA

Dashboards, reports, and datasets are at the center of Power BI Preview. Connect to or import datasets from a variety of sources:

  • Excel
  • GitHub
  • Google Analytics
  • Marketo
  • Microsoft Dynamics CRM
  • Microsoft Dynamics Marketing
  • Power BI Designer file
  • Salesforce
  • SendGrid
  • SQL Server Analysis Services
  • Zendesk


 

POWER BI DESIGNER

Power BI Designer is a tool with which you can create robust data models and amazing reports, giving you a solid foundation for your Business Intelligence activities.


 

POWER BI MOBILE 


Stay connected to your data from anywhere, anytime with the Power BI app for Windows and iOS.

VERSIONS

There are 2 versions:

  • Power BI: FREE
  • Power BI Pro: with LICENCE ($9.99 user/month)

 

Microsoft Power BI is a user-friendly, intuitive and cloud-based self-service BI solution for all your data needs in your own Excel, including different tools for data extraction, analysis and visualization.

 

 

PFCLScan Updated and Powerful features

Pete Finnigan - Wed, 2015-06-24 02:20

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, some new content and more. We are working to release another service update also in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Wed, 2015-06-24 02:20

It has been a few weeks since my last blog post but don't worry I am still interested to blog about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

Empty Leaf Blocks After Rollback Part II (Editions of You)

Richard Foote - Wed, 2015-06-24 01:35
In my last post, I discussed how both 1/2 empty and totally empty leaf blocks can be generated by rolling back a bulk update operation. An important point I made within the comments of the previous post is that almost the exact scenario would have taken place had the transaction committed rather than rolled back. A […]
Categories: DBA Blogs

Multithreaded 12c and 'connect / as sysdba'

Yann Neuhaus - Wed, 2015-06-24 01:21

In Oracle 12c you can run Oracle processes as operating system threads, lowering the number of OS processes. But you can't use OS authentication: you need to provide a password. Here is a way to set up an environment so that you can still 'connect / as sysdba' to a multithreaded instance.

Windows

I start with Windows because Oracle has always been multithreaded on Windows. Are you able to use operating system authentication then? You think so because you can 'connect / as sysdba'. But look at your sqlnet.ora:

SQLNET.AUTHENTICATION_SERVICES = (NTS)

You need NTS to connect locally without a password, the same authentication as when you connect remotely. If you don't set NTS then both local and remote connections need a password.

Threaded execution

Back to Linux, I've set my instance with multithreading:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
threaded_execution                   boolean     TRUE

If I try to connect without a password I get an error:

SQL> connect / as sysdba
ERROR:
ORA-01017: invalid username/password; logon denied

it's exactly the same as if I set

SQLNET.AUTHENTICATION_SERVICES = (NONE)

By default on Unix/Linux AUTHENTICATION_SERVICES is not set, which allows operating system authentication for bequeath connections.

When multithreaded, I can only connect with a password:

SQL> connect sys as sysdba
Enter password:
Connected.

But I don't want that. I want to keep the same scripts and procedures as I had before going to a multithreaded instance. I can put the password in an external password file (wallet) and then connect without typing the password. But then I have to use a network service name. I can use the TWO_TASK environment variable to add that network service name to connections transparently, but - for whatever reason - I don't want to connect through the listener. So let's see how to set it up.

TNS_ADMIN

I'll setup my own SQL*Net files in a custom directory and use TNS_ADMIN to use them.

$ mkdir /home/franck/tns
$ export TNS_ADMIN=/home/franck/tns

Here are my ORACLE_HOME and ORACLE_SID:

$ env | grep ORACLE
ORACLE_SID=DEMO11
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/12102EE

External password file

It has been described before on our blog by Nicolas Jardot.

$ mkstore -wrl $TNS_ADMIN -create
$ mkstore -wrl $TNS_ADMIN -createCredential BEQ_DEMO11_SYS SYS

This has created the wallet containing my user (SYS) and password for the network service name BEQ_DEMO11_SYS

$ ls -l
-rwxrwx---. 1 root vboxsf 589 Jun 23 23:29 cwallet.sso
-rwxrwx---. 1 root vboxsf 0 Jun 23 23:29 cwallet.sso.lck
-rwxrwx---. 1 root vboxsf 544 Jun 23 23:29 ewallet.p12
-rwxrwx---. 1 root vboxsf 0 Jun 23 23:29 ewallet.p12.lck
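
To double-check what ended up in the wallet, mkstore can also list the stored credentials (it prompts for the wallet password):

$ mkstore -wrl $TNS_ADMIN -listCredential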

I have to declare the wallet in my sqlnet.ora

$ cat sqlnet.ora
WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/home/franck/tns)))
SQLNET.WALLET_OVERRIDE=TRUE

Bequeath connection string

Now time to define that BEQ_DEMO11_SYS network service name. I want to connect locally (not through the listener) so I define a BEQ connection string:

$ cat tnsnames.ora
BEQ_DEMO11_SYS=(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=/u01/app/oracle/product/12102EE/bin/oracle)(ARGV0=oracleDEMO11)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')(ENVS='ORACLE_HOME=/u01/app/oracle/product/12102EE,ORACLE_SID=DEMO11'))

Here is how a bequeath (PROTOCOL=BEQ) connection is defined. You need to define the PROGRAM to be run (the oracle binary) and the ARGS. You also need to pass the environment variables - at least ORACLE_HOME and ORACLE_SID.

The ARGV0 is the name that will be displayed by the ps 'CMD' command, but you can put whatever you want in it (just saying... have fun but not in prod please). The convention is to add the ORACLE_SID to the binary name 'oracle'.

Then I can connect:

SQL> connect /@BEQ_DEMO11_SYS as sysdba
Connected.

TWO_TASK

Finally, I don't want to add the network service name to my scripts, so I can set the TWO_TASK environment variable to it. I definitely don't want to set it for my whole environment because it can be misleading (you think you use the ORACLE_SID but you are not; you change environment with oraenv but TWO_TASK remains, ...). So I set it locally when I run sqlplus.

Here is an example where I set TNS_ADMIN and TWO_TASK only when calling sqlplus:

$ TNS_ADMIN=$PWD TWO_TASK=BEQ_DEMO11_SYS sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jun 24 10:54:58 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

If the script does a 'connect / as sysdba' it still works:

SQL> connect / as sysdba
Connected.

but you should know that if the script is connecting with another user, TWO_TASK is still used:

SQL> connect scott/tiger
Connected.

Note that those sessions are multithreaded even if you don't set DEDICATED_THROUGH_BROKER for the listener, because you're not connecting through the listener here. More information about it in Martin Bach's post.

Here is how to check it - process and thread id from v$process:

SQL> select spid, stid, program, execution_type from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));

SPID                     STID                     PROGRAM              EXECUTION_TYPE
------------------------ ------------------------ -------------------- ------------------
21107                    21107                    oracle@VM111         PROCESS

and the info about it from Linux:

SQL> host ps -Lfp &pid
UID        PID  PPID   LWP  C NLWP STIME TTY          TIME CMD
oracle   21107     1 21107  0    1 11:04 ?        00:00:00 oracleDEMO11 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))

 

TWO_TASK comes from very old versions but is still useful to run old scripts in 12c. Here is an example with a threaded instance. You can also use it to connect directly to a PDB (but through the listener then - you need a service).
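
For example, pointing TWO_TASK at a PDB service would look like this (hypothetical EZConnect string - adjust host, port and service name to your environment):

$ TWO_TASK=//localhost:1521/pdb1 sqlplus scott/tiger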

But...

There is one thing that doesn't work as I want with external password files. DGMGRL keeps the password provided and uses it to connect to the remote instance - which is why you need the same password for SYS on the standby. But let's see if it works with an external password file:

$ TNS_ADMIN=$PWD TWO_TASK=BEQ_DEMO11_SYS dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> switchover to demo11
Performing switchover NOW, please wait...
New primary database "demo11" is opening...
Operation requires start up of instance "DEMO12" on database "demo12"
Starting instance "DEMO12"...
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

Please complete the following steps to finish switchover:
        start up instance "DEMO12" of database "demo12"

I have to finish the switchover manually because the password retrieved from the wallet is not used here. Same behaviour as with OS authentication here. Tip: if you connect to the primary to do the switchover, then the connection to the remote is detected at the beginning.

Final note

This is not best practice. Using an external password file is a good practice of course, because we should never put passwords in our scripts or on the command line. Passwords are something to be typed only by human fingers. TWO_TASK and the BEQ connection string are not a good practice, but only a workaround to keep old scripts compatible with new features.

LinkedIn, Oracle Service Cloud Customer, Wins Gartner & 1to1 Media Award

Linda Fishman Hoyle - Tue, 2015-06-23 15:50

A Guest Post by Vice President Stephen Fioretti, Oracle Service Cloud

We want to share some exciting Oracle Service Cloud customer award news with you that was just announced this week!

The number one research analyst firm, Gartner, and 1to1 Media named Oracle Service Cloud customer LinkedIn a GOLD winner in the 2015 Gartner & 1to1 Media Customer Experience Excellence Awards. For over a decade, Gartner and 1to1 Media have used this joint awards program to honor select, customer-centric organizations; specifically, those that have achieved outstanding business performance by implementing enterprise-wide, service-focused strategies.

The Oracle product management, Analyst Relations, and sales/CSM teams successfully partnered with Andy Yasutake, Director of Tech Solutions and Operations, at LinkedIn on this submission. As a result, LinkedIn has been featured in the June issue of 1to1 Magazine and on the 1to1 Media website.  The feature article showcases the results LinkedIn has seen with Oracle Service Cloud, including:

  • An 85 percent decrease in average initial response time
  • A 68 percent decrease in average final resolution time
  • A 23 percent increase in overall customer satisfaction scores (CSAT), up from 6.6 to 8.1
  • With enhanced self-service, deflection rates jumped to 97 percent, thereby minimizing the company's potential need for service agents. Instead of employing the expected 5,000-10,000 representatives, LinkedIn has been able to keep staff limited to the minimal 750-800 agents, easing costs and boosting efficiency.

In addition, LinkedIn will be honored and speaking at the Gartner Customer 360 Summit, September 9-11 in San Diego. Please check out and share the exciting LinkedIn 1to1 feature article.

2 ways to move archivelogs - both need RMAN

Yann Neuhaus - Tue, 2015-06-23 14:03

The database must know where the files are. If you move them at the OS level, the database will not be aware of it. But there is a database tool to copy or move files and update the database repository (the controlfile). It's RMAN. With RMAN you can either:

  • update the repository after you've moved the file from the OS
  • or do both: move the files and update the repository
The syntax is a bit weird, so let's have an example.

RMAN> CATALOG

I have the following archived logs in the /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23 directory:

[oracle@VM111 2015_06_23]$ pwd
/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23
[oracle@VM111 2015_06_23]$ ls -alrt
total 188
drwxr-x---. 5 oracle oinstall   4096 Jun 23 21:40 ..
-rw-r-----. 1 oracle oinstall 120320 Jun 23 21:44 o1_mf_1_61_brmfrl7v_.arc
-rw-r-----. 1 oracle oinstall   3584 Jun 23 21:44 o1_mf_1_62_brmfrq6c_.arc
-rw-r-----. 1 oracle oinstall   2560 Jun 23 21:44 o1_mf_1_63_brmfrsts_.arc
-rw-r-----. 1 oracle oinstall   1536 Jun 23 21:44 o1_mf_1_64_brmfrtjs_.arc
-rw-r-----. 1 oracle oinstall  30208 Jun 23 21:45 o1_mf_1_65_brmft8n5_.arc
-rw-r-----. 1 oracle oinstall   6144 Jun 23 21:45 o1_mf_1_66_brmftlpg_.arc
-rw-r-----. 1 oracle oinstall   4096 Jun 23 21:45 o1_mf_1_67_brmftr8w_.arc
-rw-r-----. 1 oracle oinstall   2560 Jun 23 21:45 o1_mf_1_68_brmftvwn_.arc
drwxr-x---. 2 oracle oinstall   4096 Jun 23 21:45 .
and I move them to /u01/DEMO/temp/:
[oracle@VM111 2015_06_23]$ mv * /u01/DEMO/temp/
my current directory is empty:
[oracle@VM111 2015_06_23]$ ls -alrt
total 8
drwxr-x---. 5 oracle oinstall 4096 Jun 23 21:40 ..
drwxr-x---. 2 oracle oinstall 4096 Jun 23 21:50 .
and the /u01 one has my archived logs:
[oracle@VM111 2015_06_23]$ ls -alrt /u01/DEMO/temp
total 188
-rw-r-----.  1 oracle oinstall 120320 Jun 23 21:44 o1_mf_1_61_brmfrl7v_.arc
-rw-r-----.  1 oracle oinstall   3584 Jun 23 21:44 o1_mf_1_62_brmfrq6c_.arc
-rw-r-----.  1 oracle oinstall   2560 Jun 23 21:44 o1_mf_1_63_brmfrsts_.arc
-rw-r-----.  1 oracle oinstall   1536 Jun 23 21:44 o1_mf_1_64_brmfrtjs_.arc
-rw-r-----.  1 oracle oinstall  30208 Jun 23 21:45 o1_mf_1_65_brmft8n5_.arc
-rw-r-----.  1 oracle oinstall   6144 Jun 23 21:45 o1_mf_1_66_brmftlpg_.arc
-rw-r-----.  1 oracle oinstall   4096 Jun 23 21:45 o1_mf_1_67_brmftr8w_.arc
-rw-r-----.  1 oracle oinstall   2560 Jun 23 21:45 o1_mf_1_68_brmftvwn_.arc
drwxr-x---. 10 oracle oinstall   4096 Jun 23 21:49 ..
drwxr-xr-x.  2 oracle oinstall   4096 Jun 23 21:50 .
[oracle@VM111 2015_06_23]$

But let's list the archived logs from RMAN:

[oracle@VM111 2015_06_23]$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Tue Jun 23 21:50:48 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: DEMO11 (DBID=684456715)

RMAN> list archivelog all;

using target database control file instead of recovery catalog
List of Archived Log Copies for database with db_unique_name DEMO11
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
82      1    61      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_61_brmfrl7v_.arc

84      1    62      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_62_brmfrq6c_.arc

86      1    63      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_63_brmfrsts_.arc

88      1    64      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_64_brmfrtjs_.arc

90      1    65      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_65_brmft8n5_.arc

92      1    66      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_66_brmftlpg_.arc

94      1    67      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_67_brmftr8w_.arc

96      1    68      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_68_brmftvwn_.arc
the repository (in the controlfile) still has the old location. If I need the redo logs to recover the database, then it will fail.

The CROSSCHECK command can be used so that RMAN verifies if the files are still there:

RMAN> crosscheck archivelog all;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=75 device type=DISK
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_61_brmfrl7v_.arc RECID=82 STAMP=883172658
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_62_brmfrq6c_.arc RECID=84 STAMP=883172663
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_63_brmfrsts_.arc RECID=86 STAMP=883172665
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_64_brmfrtjs_.arc RECID=88 STAMP=883172666
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_65_brmft8n5_.arc RECID=90 STAMP=883172712
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_66_brmftlpg_.arc RECID=92 STAMP=883172722
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_67_brmftr8w_.arc RECID=94 STAMP=883172728
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_68_brmftvwn_.arc RECID=96 STAMP=883172731
Crosschecked 8 objects
validation failed for all of them. They are marked as EXPIRED:
RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name DEMO11
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
82      1    61      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_61_brmfrl7v_.arc

84      1    62      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_62_brmfrq6c_.arc

86      1    63      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_63_brmfrsts_.arc

88      1    64      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_64_brmfrtjs_.arc

90      1    65      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_65_brmft8n5_.arc

92      1    66      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_66_brmftlpg_.arc

94      1    67      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_67_brmftr8w_.arc

96      1    68      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_68_brmftvwn_.arc
which means that the database does not know of any archived logs now.

I did only one part of the job. Now I need to register the new location with the CATALOG command:

RMAN> catalog start with '/u01/DEMO/temp';

searching for all files that match the pattern /u01/DEMO/temp

List of Files Unknown to the Database
=====================================
File Name: /u01/DEMO/temp/o1_mf_1_64_brmfrtjs_.arc
File Name: /u01/DEMO/temp/o1_mf_1_68_brmftvwn_.arc
File Name: /u01/DEMO/temp/o1_mf_1_66_brmftlpg_.arc
File Name: /u01/DEMO/temp/o1_mf_1_67_brmftr8w_.arc
File Name: /u01/DEMO/temp/o1_mf_1_65_brmft8n5_.arc
File Name: /u01/DEMO/temp/o1_mf_1_62_brmfrq6c_.arc
File Name: /u01/DEMO/temp/o1_mf_1_63_brmfrsts_.arc
File Name: /u01/DEMO/temp/o1_mf_1_61_brmfrl7v_.arc

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/DEMO/temp/o1_mf_1_64_brmfrtjs_.arc
File Name: /u01/DEMO/temp/o1_mf_1_68_brmftvwn_.arc
File Name: /u01/DEMO/temp/o1_mf_1_66_brmftlpg_.arc
File Name: /u01/DEMO/temp/o1_mf_1_67_brmftr8w_.arc
File Name: /u01/DEMO/temp/o1_mf_1_65_brmft8n5_.arc
File Name: /u01/DEMO/temp/o1_mf_1_62_brmfrq6c_.arc
File Name: /u01/DEMO/temp/o1_mf_1_63_brmfrsts_.arc
File Name: /u01/DEMO/temp/o1_mf_1_61_brmfrl7v_.arc
The file type and header are read to see if the file belongs to this database. Then the files are registered. Before listing them, I remove the expired entries:
RMAN> delete expired archivelog all;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=75 device type=DISK
List of Archived Log Copies for database with db_unique_name DEMO11
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
82      1    61      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_61_brmfrl7v_.arc

84      1    62      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_62_brmfrq6c_.arc

86      1    63      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_63_brmfrsts_.arc

88      1    64      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_64_brmfrtjs_.arc

90      1    65      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_65_brmft8n5_.arc

92      1    66      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_66_brmftlpg_.arc

94      1    67      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_67_brmftr8w_.arc

96      1    68      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_68_brmftvwn_.arc


Do you really want to delete the above objects (enter YES or NO)? YES
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_61_brmfrl7v_.arc RECID=82 STAMP=883172658
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_62_brmfrq6c_.arc RECID=84 STAMP=883172663
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_63_brmfrsts_.arc RECID=86 STAMP=883172665
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_64_brmfrtjs_.arc RECID=88 STAMP=883172666
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_65_brmft8n5_.arc RECID=90 STAMP=883172712
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_66_brmftlpg_.arc RECID=92 STAMP=883172722
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_67_brmftr8w_.arc RECID=94 STAMP=883172728
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_68_brmftvwn_.arc RECID=96 STAMP=883172731
Deleted 8 EXPIRED objects
and I can verify that a crosscheck validates all my files from the new location:
RMAN> crosscheck archivelog all;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=75 device type=DISK
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_61_brmfrl7v_.arc RECID=104 STAMP=883173169
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_62_brmfrq6c_.arc RECID=102 STAMP=883173169
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_63_brmfrsts_.arc RECID=103 STAMP=883173169
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_64_brmfrtjs_.arc RECID=97 STAMP=883173169
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_65_brmft8n5_.arc RECID=101 STAMP=883173169
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_66_brmftlpg_.arc RECID=99 STAMP=883173169
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_67_brmftr8w_.arc RECID=100 STAMP=883173169
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_68_brmftvwn_.arc RECID=98 STAMP=883173169
Crosschecked 8 objects

RMAN> BACKUP AS COPY

Let's do the same in one command. RMAN is there to do backups. Backups can go to backup sets, or they can be a simple copy with BACKUP AS COPY. The destination is defined with the backup FORMAT string. And if we want to move instead of copy, we just add DELETE INPUT.

RMAN> backup as copy archivelog all format '/u03/DEMO/temp/%U' delete input;

Starting backup at 23-JUN-15
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=69 RECID=106 STAMP=883173353
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-69_T-1_A-880845261_0bqa8a0b RECID=109 STAMP=883173387
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_69_brmgg9on_.arc RECID=106 STAMP=883173353
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=61 RECID=104 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-61_T-1_A-880845261_0cqa8a0d RECID=110 STAMP=883173389
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_61_brmfrl7v_.arc RECID=104 STAMP=883173169
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=65 RECID=101 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-65_T-1_A-880845261_0dqa8a0e RECID=111 STAMP=883173390
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_65_brmft8n5_.arc RECID=101 STAMP=883173169
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=70 RECID=108 STAMP=883173387
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-70_T-1_A-880845261_0eqa8a0f RECID=112 STAMP=883173391
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_70_brmghct5_.arc RECID=108 STAMP=883173387
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=66 RECID=99 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-66_T-1_A-880845261_0fqa8a0g RECID=113 STAMP=883173392
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_66_brmftlpg_.arc RECID=99 STAMP=883173169
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=67 RECID=100 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-67_T-1_A-880845261_0gqa8a0h RECID=114 STAMP=883173393
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_67_brmftr8w_.arc RECID=100 STAMP=883173169
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=62 RECID=102 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-62_T-1_A-880845261_0hqa8a0i RECID=115 STAMP=883173394
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_62_brmfrq6c_.arc RECID=102 STAMP=883173169
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=63 RECID=103 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-63_T-1_A-880845261_0iqa8a0j RECID=116 STAMP=883173395
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_63_brmfrsts_.arc RECID=103 STAMP=883173169
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=68 RECID=98 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-68_T-1_A-880845261_0jqa8a0k RECID=117 STAMP=883173396
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_68_brmftvwn_.arc RECID=98 STAMP=883173169
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=64 RECID=97 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-64_T-1_A-880845261_0kqa8a0l RECID=118 STAMP=883173397
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_64_brmfrtjs_.arc RECID=97 STAMP=883173169
Finished backup at 23-JUN-15
The syntax is very different from a move command, but the result is the same. The file names may have changed (because of the %U format), but who cares? Only RMAN should know what is inside the files. You have a repository (controlfile or RMAN catalog) which knows all the attributes of the files (DBID, thread#, sequence#, SCN, etc.), so it's better to rely on that than on a file name and timestamp.
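
By the way, if you want to see the attributes that the repository keeps for each archived log, you can query V$ARCHIVED_LOG directly. This is just an illustrative query of mine, not part of the demo, using standard columns of that view:

select thread#, sequence#, first_change#, next_change#, status, name
from v$archived_log
order by thread#, sequence#;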

Look at the files, they are now in my third destination:

RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name DEMO11
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
110     1    61      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-61_T-1_A-880845261_0cqa8a0d

115     1    62      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-62_T-1_A-880845261_0hqa8a0i

116     1    63      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-63_T-1_A-880845261_0iqa8a0j

118     1    64      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-64_T-1_A-880845261_0kqa8a0l

111     1    65      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-65_T-1_A-880845261_0dqa8a0e

113     1    66      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-66_T-1_A-880845261_0fqa8a0g

114     1    67      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-67_T-1_A-880845261_0gqa8a0h

117     1    68      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-68_T-1_A-880845261_0jqa8a0k

109     1    69      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-69_T-1_A-880845261_0bqa8a0b

112     1    70      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-70_T-1_A-880845261_0eqa8a0f
and a crosscheck validates that they are accessible there:
RMAN> crosscheck archivelog all;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=75 device type=DISK
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-61_T-1_A-880845261_0cqa8a0d RECID=110 STAMP=883173389
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-62_T-1_A-880845261_0hqa8a0i RECID=115 STAMP=883173394
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-63_T-1_A-880845261_0iqa8a0j RECID=116 STAMP=883173395
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-64_T-1_A-880845261_0kqa8a0l RECID=118 STAMP=883173397
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-65_T-1_A-880845261_0dqa8a0e RECID=111 STAMP=883173390
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-66_T-1_A-880845261_0fqa8a0g RECID=113 STAMP=883173392
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-67_T-1_A-880845261_0gqa8a0h RECID=114 STAMP=883173393
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-68_T-1_A-880845261_0jqa8a0k RECID=117 STAMP=883173396
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-69_T-1_A-880845261_0bqa8a0b RECID=109 STAMP=883173387
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-70_T-1_A-880845261_0eqa8a0f RECID=112 STAMP=883173391
Crosschecked 10 objects

Conclusion

Which one do you prefer? Managing database files from the OS is old-style. You are doing your backups with RMAN, so why not use RMAN for any operation on those files? Then you are sure that the repository is up to date. When you need your archived logs for a recovery, you don't want to waste time finding out where they were moved one day by a colleague who forgot to re-catalog them because of an emergency.

What’s in a Name ? USER_TAB_COLS and USER_TAB_COLUMNS are different.

The Anti-Kyte - Tue, 2015-06-23 13:22

My son and I are quite similar in some ways ( although he would vehemently dispute this).
Like me, he works in IT, in his case as a Support Engineer.
Like me, he’s called Mike (well, my Mum likes the name…and I can spell it).
Unlike me – as he would be quick to point out – he still has all his own hair.
These similarities have been known to cause confusion – I’m often contacted by recruitment agents with enticing offers to work on…some newfangled stuff I know nothing about, whilst he’s constantly being offered “exciting” Database related opportunities.

Similar confusion can arise when you’re delving into the Oracle Data Dictionary…

Note – the examples that follow apply to 11gR2. Additionally, apart from the COLS synonym, what is true for USER_TAB_COLUMNS and USER_TAB_COLS also applies to their ALL_ and DBA_ equivalents.

When it comes to getting column meta-data out of the Data Dictionary, you’ve got several choices. To illustrate this, connect as HR and ….

select column_name, data_type
from cols
where table_name = 'EMPLOYEES'
order by column_id
/

select column_name, data_type
from user_tab_columns
where table_name = 'EMPLOYEES'
order by column_id
/

select column_name, data_type
from user_tab_cols
where table_name = 'EMPLOYEES'
order by column_id
/

In each case the results are identical :

COLUMN_NAME		       DATA_TYPE
------------------------------ --------------------
EMPLOYEE_ID		       NUMBER
FIRST_NAME		       VARCHAR2
LAST_NAME		       VARCHAR2
EMAIL			       VARCHAR2
PHONE_NUMBER		       VARCHAR2
HIRE_DATE		       DATE
JOB_ID			       VARCHAR2
SALARY			       NUMBER
COMMISSION_PCT		       NUMBER
MANAGER_ID		       NUMBER
DEPARTMENT_ID		       NUMBER

11 rows selected.

So, it would appear that COLS, USER_TAB_COLUMNS and USER_TAB_COLS are all synonyms for the same thing…

select synonym_name, table_owner, table_name
from all_synonyms
where synonym_name in ('COLS', 'USER_TAB_COLS', 'USER_TAB_COLUMNS')
order by table_name
/ 

SYNONYM_NAME		       TABLE_OWNER	    TABLE_NAME
------------------------------ -------------------- --------------------
USER_TAB_COLS		       SYS		    USER_TAB_COLS
USER_TAB_COLUMNS	       SYS		    USER_TAB_COLUMNS
COLS			       SYS		    USER_TAB_COLUMNS

…OK, so COLS is indeed a synonym for USER_TAB_COLUMNS. USER_TAB_COLS and USER_TAB_COLUMNS also appear to be identical…

select table_name, comments
from all_tab_comments
where table_name in ('USER_TAB_COLUMNS', 'USER_TAB_COLS')
/

TABLE_NAME	     COMMENTS
-------------------- --------------------------------------------------
USER_TAB_COLS	     Columns of user's tables, views and clusters
USER_TAB_COLUMNS     Columns of user's tables, views and clusters

There you go then. Must be the case…

Unused Columns

Lets create another table in the HR schema as a copy of EMPLOYEES….

create table non_sales_emps as
    select * 
    from employees
    where commission_pct is null
/

Table created.

As the name suggests, we’re not going to have any Sales Staff in this table, so we don’t really need the COMMISSION_PCT column…

SQL> alter table non_sales_emps
  2      set unused column commission_pct
  3  /

Table altered.

SQL> 

So, the table no longer contains the COMMISSION_PCT column…

select column_name, data_type
from user_tab_cols
where table_name = 'NON_SALES_EMPS'
order by column_id
/

COLUMN_NAME		       DATA_TYPE
------------------------------ --------------------
EMPLOYEE_ID		       NUMBER
FIRST_NAME		       VARCHAR2
LAST_NAME		       VARCHAR2
EMAIL			       VARCHAR2
PHONE_NUMBER		       VARCHAR2
HIRE_DATE		       DATE
JOB_ID			       VARCHAR2
SALARY			       NUMBER
MANAGER_ID		       NUMBER
DEPARTMENT_ID		       NUMBER
SYS_C00009_15061918:46:18$     NUMBER

11 rows selected.

Yes, the table now has 10 columns…and here are the details of all 11 of them.
Weren’t expecting that ? Well then you probably won’t be expecting this either….

select column_name, data_type
from user_tab_columns
where table_name = 'NON_SALES_EMPS'
order by column_id
/

COLUMN_NAME		       DATA_TYPE
------------------------------ --------------------
EMPLOYEE_ID		       NUMBER
FIRST_NAME		       VARCHAR2
LAST_NAME		       VARCHAR2
EMAIL			       VARCHAR2
PHONE_NUMBER		       VARCHAR2
HIRE_DATE		       DATE
JOB_ID			       VARCHAR2
SALARY			       NUMBER
MANAGER_ID		       NUMBER
DEPARTMENT_ID		       NUMBER

10 rows selected.

The extra column has magically disappeared again. Just what is going on ?

Delving a bit deeper into this particular rabbit-hole…

select column_name, data_type
from all_tab_columns
where owner = 'SYS'
and table_name = 'USER_TAB_COLS'
minus
select column_name, data_type
from all_tab_columns
where owner = 'SYS'
and table_name = 'USER_TAB_COLUMNS'
/

COLUMN_NAME		       DATA_TYPE
------------------------------ ------------------------------
HIDDEN_COLUMN		       VARCHAR2
INTERNAL_COLUMN_ID	       NUMBER
QUALIFIED_COL_NAME	       VARCHAR2
SEGMENT_COLUMN_ID	       NUMBER
VIRTUAL_COLUMN		       VARCHAR2

SQL> 

From this we can see that USER_TAB_COLS contains five additional columns over those available in USER_TAB_COLUMNS.

select column_name, comments
from all_col_comments
where owner = 'SYS'
and table_name = 'USER_TAB_COLS'
and column_name in ( 'HIDDEN_COLUMN', 'INTERNAL_COLUMN_ID', 
    'QUALIFIED_COL_NAME', 'SEGMENT_COLUMN_ID', 'VIRTUAL_COLUMN')
/

COLUMN_NAME		  COMMENTS
------------------------- --------------------------------------------------
HIDDEN_COLUMN		  Is this a hidden column?
VIRTUAL_COLUMN		  Is this a virtual column?
SEGMENT_COLUMN_ID	  Sequence number of the column in the segment
INTERNAL_COLUMN_ID	  Internal sequence number of the column
QUALIFIED_COL_NAME	  Qualified column name

Furthermore, if we examine the source code for the USER_TAB_COLUMNS view, the reason for its similarity with USER_TAB_COLS becomes apparent :

select TABLE_NAME, COLUMN_NAME, DATA_TYPE, DATA_TYPE_MOD, DATA_TYPE_OWNER,
       DATA_LENGTH, DATA_PRECISION, DATA_SCALE, NULLABLE, COLUMN_ID,
       DEFAULT_LENGTH, DATA_DEFAULT, NUM_DISTINCT, LOW_VALUE, HIGH_VALUE,
       DENSITY, NUM_NULLS, NUM_BUCKETS, LAST_ANALYZED, SAMPLE_SIZE,
       CHARACTER_SET_NAME, CHAR_COL_DECL_LENGTH,
       GLOBAL_STATS, USER_STATS, AVG_COL_LEN, CHAR_LENGTH, CHAR_USED,
       V80_FMT_IMAGE, DATA_UPGRADED, HISTOGRAM
  from USER_TAB_COLS
 where HIDDEN_COLUMN = 'NO'
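
Incidentally, that HIDDEN_COLUMN flag gives us a direct way to list only the entries that USER_TAB_COLUMNS filters out. This query is my own addition rather than part of the original example, but at this point (before we drop the unused column below) it should return just the renamed SYS_C... entry :

select column_name, data_type, hidden_column
from user_tab_cols
where table_name = 'NON_SALES_EMPS'
and hidden_column = 'YES'
/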

So, just when does it become useful to use USER_TAB_COLS rather than USER_TAB_COLUMNS ?

In 11g, you’d set a column to be unused on a large table in order for the “drop” to happen quickly.
Once you have set it to unused, the only thing you can do with it is drop it altogether to reclaim the space it’s using.
To find out which tables have unused columns, you can use…

select table_name, count
from user_unused_col_tabs
/

TABLE_NAME			    COUNT
------------------------------ ----------
NON_SALES_EMPS				1

…and if you want to drop an unused column, you don't need to specify its name in the command…

alter table non_sales_emps
    drop unused columns
/

Table altered.
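
As a quick sanity check (my addition, not in the original post), re-running the USER_UNUSED_COL_TABS query at this point should come back empty, confirming that there are no unused columns left to drop :

select table_name, count
from user_unused_col_tabs
/

no rows selected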

Where USER_TAB_COLS does come in handy is when you’re looking for …

Virtual Columns

For example, we could add a column to our NON_SALES_EMPS table to calculate the number of completed years service for each employee.
First, we need a deterministic function to return the number of full years between a given date and today :

create or replace function years_elapsed_fn( i_date in date)
    return number deterministic
as
begin
    return floor( months_between( trunc(sysdate), i_date) / 12);
end;
/

Now we add a virtual column to the table which calls this function :

alter table non_sales_emps
    add years_service generated always as
        (years_elapsed_fn(hire_date))
/

Whilst there’s no way to tell which columns are virtual in USER_TAB_COLUMNS, there is in USER_TAB_COLS :

select column_name, data_type, virtual_column
from user_tab_cols
where table_name = 'NON_SALES_EMPS'
/

COLUMN_NAME		  DATA_TYPE			 VIR
------------------------- ------------------------------ ---
EMPLOYEE_ID		  NUMBER			 NO
FIRST_NAME		  VARCHAR2			 NO
LAST_NAME		  VARCHAR2			 NO
EMAIL			  VARCHAR2			 NO
PHONE_NUMBER		  VARCHAR2			 NO
HIRE_DATE		  DATE				 NO
JOB_ID			  VARCHAR2			 NO
SALARY			  NUMBER			 NO
MANAGER_ID		  NUMBER			 NO
DEPARTMENT_ID		  NUMBER			 NO
YEARS_SERVICE		  NUMBER			 YES

11 rows selected.

SQL> 
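
If you also want to see the expression behind a virtual column, the DATA_DEFAULT column holds it. Again, this extra query is my own sketch rather than part of the original post :

select column_name, data_default
from user_tab_cols
where table_name = 'NON_SALES_EMPS'
and virtual_column = 'YES'
/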

The reasons for having two such similar (and similarly named) dictionary views seem to have been lost in the mists of time.
Whatever the rationale, it’s worth knowing the difference next time you need to go poking around the column meta-data in your database.


Filed under: Oracle, SQL Tagged: all_tab_comments, alter table set unused column, cols, deterministic, hidden columns, user_tab_cols, user_tab_columns, virtual columns

SQL Server 2016 : availability groups and the new SSISDB support

Yann Neuhaus - Tue, 2015-06-23 12:25

This blog post focuses on the new support for the SSISDB catalog in an AlwaysOn architecture.

Other studies are available here:

 

A couple of weeks ago, I was involved in an SSIS infrastructure project with SQL Server 2014. As you know, the SSIS architecture has fundamentally changed since SQL Server 2012 and has led to a new way of administering it from the DBA perspective. This is particularly true when the SSISDB catalog has to fit into an AlwaysOn architecture, as was the case here with SQL Server 2014.

Indeed, when you want to include the SSISDB catalog in a SQL Server 2014 availability group, you have to perform some extra steps, as described in the Microsoft SSIS blog post here. The task consists of manually creating some SQL Server jobs to handle a failover event, which requires re-encrypting the database master key with the service master key on the new primary replica. Likewise, you have to deal with the SSIS Server Maintenance job, which is not AlwaysOn-aware by default. Thus, deploying the SSISDB catalog in an availability group environment is not an easy task with SQL Server 2014, so let's take a look at the new AlwaysOn support in SQL Server 2016.
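
For reference, the heart of that manual job on SQL Server 2014 boils down to re-opening the SSISDB database master key on the new primary replica and re-encrypting it with the service master key. The snippet below is only a minimal sketch of that idea; the password is a placeholder and the actual job from the Microsoft blog post contains more logic:

USE SSISDB;
-- placeholder: replace with the real SSISDB master key password
OPEN MASTER KEY DECRYPTION BY PASSWORD = N'<ssisdb_master_key_password>';
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;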

Fortunately, Microsoft has built on the experience gained from the previous version. Unlike SQL Server 2014, the next version will provide an easier way to deploy and manage the SSISDB catalog in an AlwaysOn infrastructure. By referring to the BOL here, you will notice that all the configuration is done directly from the availability group wizard, without scripting any additional objects. So, my goal in this blog post is to understand the internal changes made by Microsoft in this area.

First of all, let's take a look at some of the changes by trying to add the SSISDB catalog to an availability group. At this point, you are asked to provide the password of your database master key before continuing, as follows:

 

[Screenshot: availability group wizard - SSISDB database master key prompt]

 

Yes, the SSISDB catalog uses encryption intensively to protect sensitive data from projects, packages, parameters and so on. At this point, you may notice a warning icon. In fact, the wizard warns us that AlwaysOn support for SSISDB must be configured in a final step, because it is required in order to handle availability group failover events.

 

[Screenshot: availability group wizard - validation step]

 

To enable AlwaysOn support, we need to go to the Integration Services Catalogs node and include the relevant replica(s), as shown below:

 

[Screenshot: Integration Services Catalogs node]

 

...

 

[Screenshot: Integration Services Catalogs - enable AlwaysOn support]

 

My configuration is now finished. As a second step, let's have a look at the SQL Server Agent jobs. During my test I suspected that a lot of work was done by SQL Server behind the scenes, and I was right. It added two additional jobs, as shown below:

 

[Screenshot: SQL Server Agent jobs for the SSISDB catalog]

 

First of all, the SSIS Failover Monitor Job is designed to run on a regular basis in order to detect failover events, by refreshing the state of the concerned replica(s) and then, when needed, starting the SSISDB catalog with the SSISDB.catalog.startup stored procedure. I remember having implemented this kind of failover detection mechanism with the first AlwaysOn release in order to monitor availability group failover events (see my blog post here).

 

DECLARE @role int
DECLARE @status tinyint

SET @role = (SELECT [role]
             FROM [sys].[dm_hadr_availability_replica_states] hars
             INNER JOIN [sys].[availability_databases_cluster] adc
                    ON hars.[group_id] = adc.[group_id]
             WHERE hars.[is_local] = 1 AND adc.[database_name] = 'SSISDB')

IF @role = 1
BEGIN
    EXEC [SSISDB].[internal].[refresh_replica_status]
         @server_name = N'SQL161',
         @status = @status OUTPUT

    IF @status = 1
        EXEC [SSISDB].[catalog].[startup]
END

 

Moreover, we may also notice some changes to the second job, the SSIS Server Maintenance Job. This job is now designed to support the AlwaysOn feature regardless of the current SSISDB configuration (whether or not it is included in an availability group):

 

DECLARE @role int

SET @role = (SELECT [role]
             FROM [sys].[dm_hadr_availability_replica_states] hars
             INNER JOIN [sys].[availability_databases_cluster] adc
                    ON hars.[group_id] = adc.[group_id]
             WHERE hars.[is_local] = 1 AND adc.[database_name] = 'SSISDB')

IF DB_ID('SSISDB') IS NOT NULL AND (@role IS NULL OR @role = 1)
    EXEC [SSISDB].[internal].[cleanup_server_retention_window]


Finally, just a quick search in the SSISDB database gives us a good overview of the new objects related to the AlwaysOn feature:

 

select name as [object_name],
       type_desc
from   sys.objects
where  name like '%replica%'

 

[Screenshot: query results - SSISDB replica-related objects]

 

  • The add_replica_info procedure seems to be used for adding a new SQL Server instance as a replica in the SSISDB catalog.
  • The update_replica_info procedure seems to be used for updating the replica state in the alwayson_support_state internal table (see the quick query after this list).
  • The refresh_replica_status procedure seems to be used by the SSIS Failover Monitor Job on a regular basis to update the replica configuration state as well as to detect failover events.
  • However, according to my tests, I did not find any explicit call to the delete_replica_info stored procedure, nor any guideline that explains how to exclude an AlwaysOn replica from the SSISDB catalog. I will update this blog post when I get an answer.
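
Out of curiosity, the internal table mentioned above can be queried to see the AlwaysOn support state that SSISDB tracks per replica. This is just an exploratory query on my test instance; the table is internal, so its layout may change:

select *
from [SSISDB].[internal].[alwayson_support_state]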

 

My feeling is that the new SSISDB support for AlwaysOn will be a pretty cool feature of the next version. It is certainly not the most important improvement in this area, but it will help every DBA who wants to implement the SSISDB catalog in an AlwaysOn infrastructure, by avoiding the additional burden required with SQL Server 2014. I'm looking forward to the final release!

An Example Why LMS Should Not Be Only Part of Learning Ecosystem

Michael Feldstein - Tue, 2015-06-23 11:51

By Phil HillMore Posts (333)

In Michael’s initial post on the Post-LMS, he built on this central theme:

Reading Phil’s multiple reviews of Competency-Based Education (CBE) “LMSs”, one of the implications that jumps out at me is that we see a much more rapid and coherent progression of learning platform designs if you start with a particular pedagogical approach in mind.

The idea here is not that the traditional LMS has no value (it can be critical infrastructure, particularly for mainstream faculty adoption), but rather that in the future we both see more learning platform designs being tied to specific pedagogies. This idea is quite relevant given the ongoing LMS users’ conferences (InstructureCon last week, D2L Fusion this week, BbWorld next month, Apereo / Sakai as well as iMoot in the past two months).

Later in the post Michael mentions ASU’s Habitable Worlds as an example of assessing the quality of students’ participation instead of direct grading.

A good example of this is ASU’s Habitable Worlds, which I have blogged about in the past and which will be featured in an episode of the aforementioned e-Literate TV series. Habitable Worlds is roughly in the pedagogical family of CBE and mastery learning. It’s also a PBL [problem-based learning] course. Students are given a randomly generated star field and are given a semester-long project to determine the likelihood that intelligent life exists in that star field. There are a number of self-paced adaptive lessons built on the Smart Sparrow platform. Students learn competencies through those lessons, but they are competencies that are necessary to complete the larger project, rather than simply a set of hoops that students need to jump through. In other words, the competency lessons are resources for the students.

In our recent case study on ASU, Lev Horodyskyj shared his experiences helping to design the course. He specifically called out the difficulties they faced when initially attempting this pedagogical approach with a traditional LMS.

Phil Hill: But the team initially found that the traditional technologies on campus were not suited to support this new personalized learning approach.

Lev Horodyskyj: Within a traditional system it was fairly difficult. Traditional learning management systems aren’t really set up to allow a lot of interactivity. They’re more designed to let you do things that you would normally do in a traditional classroom: multiple choice tests; quizzes; turning in papers; uploading, downloading things.

Especially when you’re teaching science, a range of possibilities are viable answers, and oftentimes when we teach science, we’re more interested in what you’re not allowed to do rather than what you’re allowed to do.

Traditional LMS’s don’t allow you to really program in huge parameter spaces that you can work with. They’re basically looking for, “What are the exact correct answers you are allowed to accept?”

I was brought into the picture once Ariel decided that this could be an interesting way to go, and I started playing around with the system. I instantly fell in love with it because it was basically like PowerPoint. I could drop whatever I wanted wherever I wanted, and then wire it up to behave the way I wanted it to behave.

Now, instead of painstakingly programming all the 60 possible answers that a student might write that are acceptable, I can all of sudden set up a page to take any answer I want and evaluate it in real time. I no longer have to program those 60 answers; I could just say, “Here are the range of answer that are acceptable,” and it would work with that.

Phil Hill: And this was the Smart Sparrow system?

Lev Horodyskyj: This was the Smart Sparrow system, correct. It was really eye-opening because it allowed so many more possibilities. It was literally a blank canvas where I could put whatever I wanted.

This pedagogical approach, supported by appropriate learning platform design, seems to lead to conceptual understanding.

Eric Berkebile: My experiences were very similar. What amazed me the most about it was how the course was centered upon building concepts. It wasn’t about hammering in detail. They weren’t trying to test you on, “How much can you remember out of what we’re feeding you?”

You go through the slides, you go through the different sections, and you are building conceptual knowledge while you are doing it. Once you’ve demonstrated that you can actually apply the concept that they are teaching you, then you can move forward. Until that happens, you’re going to be stuck exactly where you are, and you’re going to have to ask help from other students in the class; you’re going to have to use the resources available.

They want you to learn how to solve problems, they want you to learn how to apply the concepts, and they want you to do it in a way that’s going to work best for you.

Phil Hill: So, it’s multidisciplinary for various disciplines but all held together by project problem-solving around Drake’s equation?

Todd Gilbert: Yeah. One concept really ties it all together, and if you want to answer those questions around that kind of problem, like, “Is there life out there? Are we alone?” you can’t do that with just astronomy, you can’t do that with just biology. It touches everything, from sociology down to physics. Those are very, very different disciplines, so you have to be adaptable.

But I mean if you rise to that kind of a challenge—I can honestly say, this is not hyperbole or anything. It is my favorite class I’ve taken at this college, and it’s a half-semester online course. It is my favorite class I’ve taken at this college.

Eric Berkebile: By far the best course I’ve taken, and I’ve recommended it to everybody I’ve talked to since.

This approach is not mainstream in the sense that the vast majority of courses are not designed as problem-based learning, so I am not arguing that all LMSs should change accordingly or that Smart Sparrow is a superior product. I do, however, think that this episode gives a concrete example of how the traditional LMS should not be the only platform available in a learning ecosystem and how we will likely see more development of platforms tied to specific pedagogical approaches.

The post An Example Why LMS Should Not Be Only Part of Learning Ecosystem appeared first on e-Literate.

Quick and Dirty - Refreshing a Select List of Values

Denes Kubicek - Tue, 2015-06-23 05:14
This is a quick and dirty solution but it works. It shows how to update a select list of values on demand without a lot of coding.

Categories: Development