Feed aggregator

Importance of Business Intelligence for Education

Nilesh Jethwa - Sun, 2017-06-25 12:54

Providing quality education and running schools as a business are becoming more demanding these days. To maintain the efficiency and effectiveness of K-12 education, schools should really invest in reliable business intelligence tools.

Armed with the right BI tools that focus on several key performance indicators, they will be able to run the school well, as a business, and provide the necessary educational resources to those who depend on them for knowledge.

There is a lingering problem across the world wherein school administrators are expected to make magic with as little funding as possible. Usually, more funding is provided to facilities that deliver exemplary results in terms of student performance.

This is why the school board should monitor KPI metrics that will help them manage their operations and ensure that they provide the best educational resources to the students attending their facility. In doing so, they can deliver the kind of performance that will attract the much-needed financial support to continuously improve their offerings.

School fees, student population, daily attendance, class sizes, program offerings, academic results, and the like are all essential key performance indicators. And with heavier competition to attract new students and supporters, a reliable performance dashboard becomes all the more necessary.

Read more at http://www.infocaptor.com/dashboard/how-important-is-business-intelligence-for-education

Asrock E3C224DI-14S IPMI not reachable

Frank van Bortel - Sun, 2017-06-25 05:48
Documentation missing: There's definitely some documentation missing on the IPMI settings. I managed to lock myself out of the IPMI (also known as 'Integrated Lights-Out', or ILO) management interface. Not sure how I managed to do that, but in the quest to find out how to restore divine powers, I noticed quite a lot of people have suffered from this. And the solution is quite simple, when you know...

Cross platform migration using GG

Tom Kyte - Sat, 2017-06-24 23:46
GoldenGate is primarily sold as replication software, I understand. Can you clarify whether GG can be used to migrate from one database platform to Oracle, say from MSSQL for example? Or is it required to use SQL Dev to convert the metadata from...
Categories: DBA Blogs

(bash): The most useless commands (6)

Dietrich Schroff - Sat, 2017-06-24 08:35
After talking about
  1. rev
  2. sl
  3. cowsay
I found another funny command: cmatrix. The manpage says:

NAME
       CMatrix - simulates the display from "The Matrix"

and it is true:

Package Calling Error

Tom Kyte - Sat, 2017-06-24 05:45
Hi, I created a package Order_headers with two procedures, price and quantity, in the TAN schema, and called them in another procedure like this: tan.order_headers.price; but I got an error like PLS-00302: component 'order_headers' must be declared. But...
Categories: DBA Blogs

based on multiple packages and procedure

Tom Kyte - Sat, 2017-06-24 05:45
Can we access a procedure in a package from outside that package using another package? I.e., I need to call a procedure of the first package from a second package.
Categories: DBA Blogs

Using ZFS compression for all database files

Tom Kyte - Sat, 2017-06-24 05:45
Hi Tom, I am hoping for some expert advice on the above. We have a T5 server using SAN storage which is allocated via ZFS. We have the Solaris 11 OS and Oracle Database 12c with single instances (no RAC), we do not use ASM, and our database is limi...
Categories: DBA Blogs

Passion, Drive and Hunger

Bradley Brown - Fri, 2017-06-23 14:21
Starting a business is nothing short of an ultra marathon times 365.  If you ask people why they do it, you'll get a wide variety of answers.  Some people might say "the money" but behind most any answer you will receive is an iceberg underneath (90% of an iceberg is under the water).  Others will say they are simply crazy.  The true reason is very complex.  It's not usually one reason, but passion, drive and hunger are key components to one's success.

If you like challenges, you'll have plenty.  I view the startup world like a 6 dimensional puzzle.  Move one piece and 6 others (or 1,600,000) move with the piece you moved.  Time is one of the dimensions.  Sometimes you see results quickly and other times results take years.  Sometimes you have a "feeling" something is going to work and other times you'll have empirical data.

I always tell people that if you work your ass off for 20 years straight, you'll wake up an overnight success one day!

So what are you going to do?  Keep dreaming or make your dream a reality?  Do you have a mentor that you can turn to?  1 or 100 mentors?  Friends and family are leaders in self-deception...i.e. do you have someone who will give you the real answer?  The shredded wheat answer?  Have you asked a customer for money?  That's usually the best indication of whether you have something real or not.

I started my first business when I was in high school.  Sure, lawn mowing was a business, but I started developing software (i.e. games) and got my 10,000 hours of programming in before I started college.  I sold my games to the manufacturer of my computer when I was in college.  That was exciting!  My passion, drive and hunger continue.

I taught a class called New Venture Creation at the University of Denver.  It was a 4 credit course.  Since college credits cost about $1000 per credit hour (or more), that's $4000 per student.  I brought 20+ mentors (successful business people) into my classroom every quarter.

As a mentor and director in an organization like Founders Institute, along with 30-50 other mentors, we coached a number of amazing people into successful businesses.  I absolutely believe FI is one of the best investments into your future.  You can expect mentors, education and growth through the process.

SQR with 077 umask creates file with 611 permissions

Bobby Durrett's DBA Blog - Fri, 2017-06-23 11:18

I ran across this strange situation. An SQR opened a new data file for output and created the file with 611 permissions. We needed group read access so this caused a problem. I knew that our login script for the PeopleSoft Unix user set umask to 022 but that without running the login scripts the mask is 077. So, my first thought was that we had started the process scheduler without running the login scripts and the mask was 077. But, why would the file that the SQR created be 611 permissions and not 600? The 077 mask should get rid of all the group and others bits. I built a simple test case to show that the SQR creates the file with 611 permissions with a 077 mask.

Here is the test SQR:

begin-report
  Let $Unix_Cmd = 'umask'
  Call System Using $Unix_Cmd #Status
  Let $Unix_Cmd = 'rm /tmp/bobby/bobby.txt'
  Call System Using $Unix_Cmd #Status
  let $testfile = '/tmp/bobby/bobby.txt'
  open $testfile as 1 for-writing  record=1500:fixed
  close 1
  Let $Unix_Cmd = 'ls -l /tmp/bobby/bobby.txt'
  Call System Using $Unix_Cmd #Status
end-report

Here is its output:

SQR for PeopleSoft V8.53.05
077
-rw---x--x   1 psoft      psoft            0 Jun 23 10:54 /tmp/bobby/bobby.txt

Notice the execute bits for group and others. Why were they not masked out? Also, the default permissions for a newly created file are 644, so creating a new file should not set the execute bits at all, no matter what mask you are using.
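
This is how a umask is supposed to work: the final mode is the requested creation mode ANDed with the complement of the mask. A quick shell sanity check of that arithmetic:

```shell
# How a umask is applied: final mode = requested mode AND NOT umask.
# 0644 (rw-r--r--) masked with 077 should give 0600 (rw-------), never 0611.
requested=0644
mask=0077
final=$(( requested & ~mask ))
printf 'final mode: %o\n' "$final"    # → final mode: 600
```

There is no combination of requested mode and mask that can turn on a bit the requested mode doesn't have, which is why the 611 result from the SQR is so strange.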

I created a Korn shell script to do the same thing as the SQR:

umask
rm /tmp/bobby/bobby.txt
touch /tmp/bobby/bobby.txt
ls -l /tmp/bobby/bobby.txt

Here is its output:

077
-rw-------   1 psoft      psoft            0 Jun 23 10:58 /tmp/bobby/bobby.txt

Notice that there are no group and others bits, which is what I expected with a 077 mask. I tried searching the Internet for SQR and 611 permissions but could not find anything.

As it turns out, we did start the process scheduler with umask 077 so I just modified the script that started it to set umask 022 and that resolved the problem. Here is the output from my test SQR with umask 022:

SQR for PeopleSoft V8.53.05
022
-rw-r--r--   1 psoft      psoft            0 Jun 23 11:01 /tmp/bobby/bobby.txt

This is what we wanted and of course the Korn shell script does the same thing as it should.

022
-rw-r--r--   1 psoft      psoft            0 Jun 23 11:02 /tmp/bobby/bobby.txt

Seems very odd to me. Anyway, I hope that this post helps someone.

This was on HP-UX 11.31 and PeopleTools 8.53.05

Bobby

Categories: DBA Blogs

Oracle Named a Leader in the 2017 Gartner Magic Quadrant for Mobile App Development Platforms

Oracle Press Releases - Fri, 2017-06-23 10:00
Press Release
Oracle Named a Leader in the 2017 Gartner Magic Quadrant for Mobile App Development Platforms

Oracle positioned as a leader based on ability to execute and completeness of vision

Redwood Shores, Calif.—Jun 23, 2017

Oracle today announced that it has been named a leader in Gartner’s 2017 “Magic Quadrant for Mobile Application Development Platforms” report[1]. This recognition is another milestone in the tremendous momentum and growth of Oracle Mobile Cloud Service, part of Oracle Cloud Platform, this year.

“We believe this recognition is another acknowledgement of Oracle’s strong market momentum in the mobile and larger PaaS market, driven by the successful adoption of Oracle Cloud Platform offerings by thousands of customers,” said Amit Zavery, senior vice president, Oracle Cloud Platform. “By delivering a comprehensive offering with key differentiating capabilities, such as chatbot support, predictive analytics, and adaptive intelligence to make apps more contextual and smarter, Oracle has given customers a powerful option to meet their ever-evolving development needs.”

Gartner positions vendors within a particular quadrant based on their ability to execute and completeness of vision. According to Gartner, leaders must represent a strong combination of Ability to Execute and Completeness of Vision. In the MADP sector, this means that Leaders are not only good at cross-platform development, deployment and management across the full life cycle, but also have a good vision of the omnichannel and post-app requirements, support for multiple architectures and standards, a solid understanding of IT requirements, and scalable sales channels and partnerships. Leaders must provide platforms that are easy to purchase, program, deploy and upgrade, and which can connect to a range of systems of records and third-party cloud services.

Oracle was recognized as a market leader for more than doubling its customer count for Oracle Mobile Cloud Service. A cloud-based Mobile Application Development Platform (MADP), Oracle Mobile Cloud Service provides a no-code approach to app development. It has been deployed by many midsize and large enterprise customers who praised the product’s integration capabilities and cloud architecture. Oracle continues to expand its mobile cloud platform with chatbot support, expanded analytics, and high-productivity development tools.

Download Gartner’s 2017 “Magic Quadrant for Mobile Application Development Platforms” here.

Oracle Mobile Cloud Service is an open, standards-based platform built from the ground up for the cloud to help developers build better apps faster. Oracle Mobile Cloud Service was designed to help customers engage with their customers and employees across web, mobile, wearables, and the growing bot channel with conversational interfaces. Its persona-based approach provides the best tools for each persona involved in a mobile solution, delivering a collaborative mobile-first and API-first solution. It provides a rich set of mobile services that makes it very easy for mobile developers to add mobile context to their apps through services like push notifications, stateless offline sync, and location-based services that support beacons and GPS. Additionally, Oracle Mobile Application Accelerator capabilities provide business-friendly, easy drag-and-drop tools to build engaging mobile applications without writing any code. The depth and breadth of mobile analytics provide both developers and business owners with deep insights into mobile application performance, usage and adoption so they can personalize the engagement with their mobile users and increase adoption of their mobile applications. Oracle Mobile Cloud Service provides complete software development life cycle (SDLC) support for mobile app developers in the cloud as one integrated solution.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Additional Information

[1] Gartner, “2017 Magic Quadrant for Mobile Application Development Platforms,” Jason Wong | Van L. Baker | Adrian Leow | Marty Resnick, 12 June 2017

Contact Info
Nicole Maloney
Oracle
+1.415.235.4033
nicole.maloney@oracle.com
Sarah Fraser
Oracle
+1.650.743.0660
sarah.fraser@oracle.com
About Oracle

The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

OBIEE 12c Catalog Validation: Command Line

Rittman Mead Consulting - Fri, 2017-06-23 08:49

I wrote a blog post a while ago describing catalog validation: an automated process that performs a consistency check of the catalog and reports or deletes the inconsistent artifacts.
In that post I stated that catalog validation should be run regularly as part of the cleanup routines, and that it provides valuable additional information during the pre- and post-upgrade phases.

However, some time later I noticed Oracle support Doc ID 2199938.1, which states that the startup procedure I detailed in the previous blog post is not supported in any OBI release since 12.2.1.1.0. You can imagine my reaction...

Command Line

The question then became: how do we run the catalog validation, since the known procedure is unsupported? The answer lies in Catalog Manager and the related command-line call runcat.sh which, in server installations (like the SampleApp v607p), can be found under $DOMAIN_HOME/bitools/bin.

How Does it Work?

As with most command-line tools, when you don't have a clue how it works, the best approach is to run it with the -help option, which provides the list of parameters to pass.

Catalog Manager understands commands in the following areas:

Development To Production  
createFolder        Creates folder in the catalog  
delete            Deletes the given path from the catalog  
maintenanceMode        Puts the catalog into or out of Maintenance Mode (aka ReadOnly)  
...

Multi-Tenancy  
provisionTenant        Provisions tenants into a web catalog  
...

Patch Management  
tag            Tags all XML documents in a catalog with a unique id and common version string  
diff            Compares two catalogs  
inject            Injects a single item to a diff file  
...

Subject Area Management  
clearQueryCache        Clears the query cache  

Unfortunately none of the options in the list seems relevant for catalog validation, but with a close look at the recently updated Doc ID 2199938.1 I could find the parameter to pass: validate.
The full command then looks like:

./runcat.sh -cmd validate

In my previous blog I mentioned different types of validation. What type of validation is the default command going to perform? How can I change the behaviour? Again, the -help option provides the instructions.

# Command : -cmd validate -help 

validate        Validates the catalog

Description  
Validates the catalog

For more information, please see the Oracle Business Intelligence Suite  
Enterprise Edition's Presentation Services Administration Guide.

Syntax  
runcat.cmd/runcat.sh -cmd validate  
    [ -items (None | Report | Clean) [ -links (None | Report | Clean) ] [-folder <path{:path}>] [-folderFromFile <path of inclusion list file>] ] 
    [ -accounts (None | Report | Clean) [ -homes (None | Report | Clean) ] ] 
    -offline <path of catalog> 

Basic Arguments  
None

Optional Arguments  
-items (None | Report | Clean)        Default is 'Report' 
-links (None | Report | Clean)        Default is 'Clean'. Also, '-items' cannot be 'None'. 
-accounts (None | Report | Clean)        Default is 'Clean' 
-homes (None | Report | Clean)        Default is 'Report'. Also, '-accounts' cannot be 'None'. 
-folder <path{:path}>            Which folders in the catalog to validate
-folderFromFile <path of inclusion list file>            File containing folders in the catalog to validate

Common Arguments  
-offline <path of catalog>

-folderFromFile <folder from file>        ----- Sample Folder From File ------
                        /shared/groups/misc
                        /shared/groups/_filters
                        ------------------------------------

Example  
runcat.cmd/runcat.sh -cmd validate -offline c:\oraclebi\data\web\catalog\paint  

A few bits to notice:

  • -offline: the catalog validation needs to happen offline, either with services down or on a copy of the live catalog. Running catalog validation on an online catalog is dangerous, especially with the "Clean" options, since it could delete content in use.
  • -folder: the catalog validation can be run only for a subset of the catalog
  • None | Report | Clean: each validation can be skipped (None), logged (Report) or solved via removal of the inconsistent object (Clean)
  • Also, '-accounts' cannot be 'None'.: some validations are a prerequisite for others to happen
  • Default is 'Clean': some validations have "Clean" as their default value, meaning they will solve the issue by removing the inconsistent object; this may be inappropriate in some cases.
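
The -folderFromFile option takes a plain-text inclusion list with one catalog path per line. A small sketch, using the sample folders from the help text above and a placeholder catalog path (the command is echoed rather than executed here):

```shell
# Build an inclusion list for -folderFromFile: one catalog folder per line.
# Folder paths are the samples from runcat's own help output.
cat > folder_list.txt <<'EOF'
/shared/groups/misc
/shared/groups/_filters
EOF

# The restricted validation would then be run as (catalog path is a placeholder):
echo "./runcat.sh -cmd validate -items Report -folderFromFile folder_list.txt -offline /path/to/catalog"
```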

As written before, the initial catalog validation should be done with all options set to Report, since this will give a log file of all inconsistencies without deleting pieces of the catalog that could still be valuable. In order to do so, the command to execute is:

./runcat.sh -cmd validate -items Report -links Report -accounts Report -homes Report -offline <path_to_catalog> > cat_validation.log

runcat.sh's output is displayed directly in the console; I'm redirecting it to a file called cat_validation.log for further analysis.
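
On a large catalog that log can get long. One quick way to pull out just the flagged inconsistencies is to filter on the validation codes, which start each finding line. A sketch, with two sample lines mimicking the E10/N1 entries shown later in this post:

```shell
# Sketch: extract only the flagged inconsistencies from the validation log.
# The sample log below imitates the banner plus E10/N1 lines from this post.
cat > cat_validation.log <<'EOF'
Catalog Manager banner and other console chatter...
E10    saw.security.validate.homes Abandoned home /users/weblogic
N1    saw.catalog.impl.scour.validateDeadLink Referenced path /shared/RM Demo/TargetAnalysis in file /shared/RM Demo/SourceAnalysis is inaccessible
EOF
# Finding lines start with a code: an uppercase letter, digits, then whitespace.
grep -E '^[A-Z][0-9]+[[:space:]]' cat_validation.log
```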

If, after the initial run with all options set to Report, you want the catalog validation utility to "fix" the inconsistent objects, just change the desired options to Clean. Please make sure to take a backup of the catalog first, since the automatic fix is done by removing the related objects. Moreover, ensure that catalog validation works on an offline catalog. The command itself can work on top of an online catalog, but it is never a good idea to check a catalog that could potentially be changed while the tool is running.
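
To make the backup-first habit harder to forget, both steps can be wrapped in a small script. This is only a sketch: CATALOG_PATH, BACKUP_DIR and the runcat.sh location are placeholders to adapt to your environment, and the validation step is skipped if runcat.sh is not found:

```shell
#!/bin/sh
# Sketch of a safe validation run: back up the catalog first, then validate in
# Report-only mode. All paths below are placeholders for your environment.
CATALOG_PATH=${CATALOG_PATH:-/tmp/demo_catalog}   # your offline catalog copy
BACKUP_DIR=${BACKUP_DIR:-/tmp/catalog_backups}
RUNCAT=${RUNCAT:-./runcat.sh}                     # e.g. $DOMAIN_HOME/bitools/bin/runcat.sh
STAMP=$(date +%Y%m%d_%H%M%S)

mkdir -p "$CATALOG_PATH" "$BACKUP_DIR"

# Back up before touching anything; abort the run if the backup fails.
tar -czf "$BACKUP_DIR/catalog_$STAMP.tar.gz" \
    -C "$(dirname "$CATALOG_PATH")" "$(basename "$CATALOG_PATH")" || exit 1

# Report-only pass: nothing is deleted, inconsistencies are just logged.
if [ -x "$RUNCAT" ]; then
  "$RUNCAT" -cmd validate -items Report -links Report -accounts Report \
      -homes Report -offline "$CATALOG_PATH" > "cat_validation_$STAMP.log"
fi
```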

The output

Let's see a few examples of how Catalog Validation spots inconsistent objects. For the purpose of this test I'll work with Oracle's Sampleapp.

Abandoned and inaccessible homes

Running the validation against the Sampleapp catalog provides some "interesting" results: some homes are declared "abandoned". This could be due to the related user no longer existing in the WebLogic console, but that's not the case here.

E10    saw.security.validate.homes Abandoned home /users/weblogic  

Looking deeper in the logs, we can see that the same user folders are flagged as:

User facing object '/users/weblogic' has no user permissions and is inaccessible  

Logging in with the user weblogic doesn't allow me to check the "My Folders" content in the catalog. When switching to "Admin View" and trying to open "My Folders" I get an error.

As written in the logs, it looks like the user folder has permission problems. How can we solve this? One option is to use the runcat.sh command again with the forgetAccounts option to remove the inconsistent homes. However, this solution deletes all the content that the user stored under "My Folders".

In order to keep the content we need to overwrite the folder's permission with an administrator account. Unfortunately, when right-clicking on the folder, the "Permission" option is not available.

As a workaround I found that clicking on Properties and then on Set Ownership of this item and all subitems allows you to grant full access to the administrator, who is then able to give the proper user the relevant access privileges back.

Once the workaround is implemented the user is able to check his "My Folders" content; however, the errors are still present in catalog validation. The solution is to store the relevant artifacts in another part of the catalog, run runcat.sh with the forgetAccounts option, and then reimport the objects if needed.

Inconsistent Objects

The two main causes of inconsistent objects are:

  • Invalid XML: The object (analysis or dashboard) XML code is not valid. This can be caused by errors during the write to disk or problems during migrations.
  • Broken Links: analyses contained in a dashboard or linked from other analyses have been renamed or deleted.

Let's see how catalog validation shows the errors.

Invalid XML

To test this case I created a simple analysis with two columns, then went to the Advanced tab and deliberately removed a > to make the XML invalid.

When trying to apply the change I got an error which denied me the possibility to save.

Since I really wanted to ruin my analysis, I went directly to the file system under $BI_HOME/bidata/service_instances/ssi/metadata/content/catalog/root/shared/$REQUEST_PATH and changed the XML there.

After that I ran the catalog validation with only the items flag set to Report and the rest set to None, since I'm looking only for invalid XML.
The result, as expected, is:

Message: Unterminated start tag, 'saw:column', Entity publicId: /app/oracle/biee/user_projects/domains/bi/bidata/service_instances/ssi/metadata/content/catalog/root/shared/rm+demo/notworkinanalysis, Entity systemId: , Line number: 9, Column number: 13  

This tells me that my analysis notworkinganalysis is invalid, with an unterminated start tag: exactly the error I was expecting. Now I have two choices: either fix the analysis XML manually, or rerun the catalog validation with the Clean option, which will delete the analysis since it's invalid. As said before, there is no automated fix.

I wanted to do a further example on this: instead of removing the >, I removed a quotation mark (") to make the analysis invalid.

After clicking Apply, OBIEE already tells me that there is something wrong in the analysis. But since it allows me to save, and since I was feeling masochistic, I saved the analysis.

But... when running the catalog validation as before I end up seeing 0 errors related to my notworkinganalysis.

I got 0 errors because in this second case the XML is still valid: removing a " doesn't make the XML syntax invalid! In order to find and solve that error we would need to use Oracle's Baseline Validation Tool.

Broken Links

To test the broken links case I created the following scenario:

  • Analysis SourceAnalysis, which has a navigation action to TargetAnalysis

  • Dashboard TestDashboard which contains the TargetAnalysis object.

In order to break things I then deleted the TargetAnalysis.

Running catalog validation with the links option set to Report, as expected I get the line:

N1    saw.catalog.impl.scour.validateDeadLink Referenced path /shared/RM Demo/TargetAnalysis in file /shared/RM Demo/_portal/TestDashboard/page 1 is inaccessible.  

But I don't get anything on the SourceAnalysis object, for which the navigation is now failing.

But if instead of an action link I use TargetAnalysis to filter the results of SourceAnalysis

And then delete TargetAnalysis, I get the expected error:

N1    saw.catalog.impl.scour.validateDeadLink Referenced path /shared/RM Demo/TargetAnalysis in file /shared/RM Demo/SourceAnalysis is inaccessible

Summarizing: the broken-link validation reports missing objects that are included in the main definition of other objects (as filters or as parts of dashboards), but doesn't seem to report a missing object that is only linked via an action.

Conclusion

My experiments show that catalog validation finds errors such as invalid homes, invalid XML files and broken links, which users would otherwise hit at run time, and that won't make them happy. But there are still some errors it doesn't log, like analyses with wrong column syntax. Luckily, in most cases other tools like the Baseline Validation Tool can spot them easily, so use all the tools you have, use them as frequently as possible, and if you want more details about how catalog validation works and how it can be included in automated checks for code promotions, don't hesitate to contact us!

Categories: BI & Warehousing

Video: 2 Minute Integration with Oracle Integration Cloud Service

OTN TechBlog - Fri, 2017-06-23 07:00

More than just a tip, this video from Oracle ACE Robert van Molken and ACE Associate Phil Wilkins actually demonstrates that you can set up an integration in Oracle Integration Cloud Service in under two minutes.  The video walks you step-by-step through a simple integration that uses REST and SOAP connections integrated using basic map data.

And these guys should know. Robert and Phil are the co-authors of Implementing Oracle Integration Cloud Service (2017, Packt Publishing). Robert also has a new book coming out next year, Blockchain Across Oracle, also from Packt.

Additional Resources

Partner Webcast – Delivering Contextual Collaboration and Engagement with Oracle Content and ...

The key to digital business transformation is the productivity tools and services that employees rely on to deliver business value. However, the digital workplace of today requires a new set of...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Using Patch Wizard to Identify Files Affected by Specific Patches

Steven Chan - Fri, 2017-06-23 02:00

Patch Wizard compares your EBS environment against a list of all recommended EBS patches. It creates a list of high-priority patches that your environment is missing.

You can use Patch Wizard to assess the impact of a specific patch on your environment.  Patch Wizard can report on a variety of areas, including:

  • The applications patched
  • File types installed
  • New files introduced
  • Existing files that are changed
  • Flagged files that are changed
  • Menu navigation trees affected -- responsibilities that lead to patched functionality

Patch Wizard doesn't have the ability to analyze business logic in the affected files, so it cannot report on functionality that is affected by new files delivered by a given patch.  You can review a patch's README to see if there are functional changes introduced by a patch.

References

Related Articles

 

Categories: APPS Blogs

where are sql tuning sets stored and how can you tell how much space a specific one or all of them are using?

Tom Kyte - Thu, 2017-06-22 16:46
It takes a certain amount of database access to be able to create a sql tuning set. It is more than a little fuzzy to me what kind of object they are in the database. My question though is a little more focused than that ( maybe ha ha ). How can...
Categories: DBA Blogs

Roles and Privileges

Tom Kyte - Thu, 2017-06-22 16:46
Hi, Good Day ! Today we have a structured approach to access control, based on use of schemas (to group together database objects), and assigning specific permissions to roles. 2 roles : - USER_ROLE = CONNECT + SELECT/INSERT/UPDATE/DELETE...
Categories: DBA Blogs

Unrolling loop speeds up program

Bobby Durrett's DBA Blog - Thu, 2017-06-22 15:55

This is a follow-up to my earlier post about the assembly language book that I am working through. I have struggled to speed up a program using something that the book recommends, unrolling a loop. I think I have finally found an example where unrolling a loop speeds up a program so I wanted to share it.

I am working on Chapter 17 Exercise 2 of the book, which asks you to write a program to find the longest common substring of two strings. I chose an inefficient and simple way to find the common substring and tried to speed the program up without changing the algorithm.

Here is the C version on GitHub: url

The core of the program is three loops. The outer loop tries each character in string 1 as the start of the substring. The middle loop tries each character in string 2 as the start of the substring. The inner loop advances through both strings until it finds the end of the common substring.

The C version ran in 27.2 seconds.

I built an assembly version that uses registers for most of the variables and it ran in about 11.7 seconds. It has the same three loops. Assembly register version: url

I tried to improve on the 11.7 seconds by unrolling each of the three loops. Unrolling the outer and inner loops resulted in no improvement in runtime. I was about to give up but finally decided to try unrolling the middle loop and it caused the program to run in 10.2 seconds. Very cool. Assembly unrolled middle loop version: url

I had to figure out how to use %rep, %assign, and how to have a label that I based on a nasm variable such as .skipif %+ i.

Kind of fun. I realize that this is off topic for my “DBA Blog” but it is something I’m playing with so I thought I would throw it out there. It doesn’t hurt a DBA to know some low-level computer science, even if you are not using it directly in your job. Anyway, it was a bit of a struggle to come up with an example that was faster with the loop unrolling and I think that I have found one.

Bobby

Categories: DBA Blogs

Getting Smarter in Renting with Tableau 10

Rittman Mead Consulting - Thu, 2017-06-22 03:26
Preface

Not long ago a friend of mine spent a significant amount of time trying to find a flat to rent. And according to what he said, it wasn't an easy task. It took him considerable time and effort to find something that was big enough (but not too big), not too far from his workplace, had the required features, and was affordable at the same time. And as a specialist in data analysis, I prefer to think about this task as a data discovery one (yes, when you have a hammer everything looks like a nail). And I decided to see if a data analysis tool could help me understand the rental market better. I'm sure you've already read the name of this post, so I can't pretend I'm keeping up the intrigue. This tool is Tableau 10.3.

The Data

The friend I mentioned was looking for a flat in Moscow, but that market is probably unfamiliar to most readers, and I'd also have to spend half my time translating everything into English. So for this exercise I took Brighton and Hove data from http://rightmove.co.uk and got a nice JSON Lines file. A JSON Lines file is basically the JSON we all know, except that each file holds multiple JSON objects delimited by newlines:

{json line #1}
{json line #2}
...
{json line #n}

That could have been a real problem, but luckily Tableau introduced JSON support in version 10.1, which means I didn't have to transform my data into a set of flat tables. Thanks to the Tableau developers, we can simply open JSON Lines files without any transformation.
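For reference, parsing a JSON Lines file outside Tableau is equally simple: one json.loads per line (a sketch; the filename is made up):

```python
import json

def read_json_lines(path):
    """Yield one dict per non-blank line of a JSON Lines file."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# e.g. properties = list(read_json_lines('brighton_rentals.jsonl'))
```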

Typical property description looks like this:

[image]

It has a few major blocks:

  • Property name - 2 bedroom apartment to rent
  • Monthly price - £1,250
  • Description tab:
    • Letting information - this part is more or less standard and has only a small number of possible values. This part has Property name: Property value structure ('Date available':'Now').
    • Key features - this part is an unformalized set of features. Every property may have its own unique features. And it is not a key-value list like Letting information, but a simple list of features.
    • Full description - simply a block of unstructured text.
  • Nearest stations - shows three nearest train stations (there could be underground stations too if they had it in Brighton).
  • School checker - this shows 10 closest primary and 10 secondary schools. For this, I found a kind of API which brought me a detailed description of every school.

And finally, here is the JSON for one property. In reality it is a single line, but I reformatted it into a human-readable form to make it easier to follow. I also deleted most of the schools' info, since it is bulky and not that important.


Property JSON

{  
   "furnish":"Unfurnished",
   "key_features":[  
      "LARGE BRIGHT SPACIOUS LOUNGE WITH PATIO DOORS",
      "FULLY FITTED KITCHEN",
      "TWO DOUBLE BEDROOMS WITH WARDROBES",
      "A FURTHER SINGLE BEDROOM/OFFICE/STUDY",
      "A GOOD SIZED SHOWER ROOM ",
      "SINGLE GARAGE AND ON STREET PARKING",
      "EASY ACCESS TO THE CITY CENTRE OF CHICHESTER AND COMMUTER ROUTES. ",
      "TO ARRANGE A VIEWING PLEASE CONTACT US ON 01243 839149"
   ],
   "property_price_week":"£254 pw",
   "nearest_stations":[  
      {  
         "station_name":"Fishbourne",
         "station_dist":"(0.4 mi)"
      },
      {  
         "station_name":"Chichester",
         "station_dist":"(1.2 mi)"
      },
      {  
         "station_name":"Bosham",
         "station_dist":"(1.7 mi)"
      }
   ],
   "letting_type":"Long term",
   "secondary_schools":{  
      "schools":[  
         {  
            "distance":"0.6 miles",
            "ukCountryCode":"ENG",
            "name":"Bishop Luffa School, Chichester",
           ...
         }]
    },
   "url":"http://www.rightmove.co.uk/property-to-rent/property-66941567.html",
   "date_available":"Now",
   "date_reduced":"",
   "agent":"On The Move, South",
   "full_description":"<p itemprop=\"description\">We are delighted to bring to market, this fabulous semi detached bungalow ... </p>",
   "primary_schools":{  
      "schools":[  
         {  
            "distance":"0.3 miles",
            "ukCountryCode":"ENG",
            "name":"Fishbourne CofE Primary School"
         }]
    },
   "property_address":[ "Mill Close, Chichester, West Sussex, PO19"],
   "property_name":"3 bedroom bungalow to rent",
   "date_added":"08 June 2017 (18 hours ago)",
   "property_price_month":"£1,100 pcm",
   "let_agreed":null,
   "unknownown_values":"",
   "deposit":"£1384"
}

The full version is here: 6,391 lines, I warned you. My dataset is relatively small: 1,114 such records, 117 MB in total.

Just a few things I'd like to highlight. Letting information has only a small number of fixed options, and I managed to parse them into fields like furnish, letting_type, etc. The Key features list became just an array: there are thousands of distinct features, so I can't put them into separate fields. The Nearest stations list became an array of name-value pairs. My first version of the scraper put them into a key-value list, like this:

"nearest_stations":{  
      "Fishbourne": "(0.4 mi)",
      "Chichester": "(1.2 mi)",
      "Bosham": "(1.7 mi)"
      }

but this didn't work as intended: I got around a hundred measures named Fishbourne, Chichester, Bosham, and so on. Not what I need. That approach could work well if I had only a small number of important POIs (airports, for example) and wanted to know the distances to those points. So I changed it to the following, and it worked well:

"nearest_stations":[  
      {  
         "station_name":"Fishbourne",
         "station_dist":"(0.4 mi)"
      },
      {  
         "station_name":"Chichester",
         "station_dist":"(1.2 mi)"
      },
      {  
         "station_name":"Bosham",
         "station_dist":"(1.7 mi)"
      }
   ]
Connect to the Data

When I started this study my knowledge of the UK property rent market was close to this:

[image]

And it's possible, or even likely, that some of my conclusions are obvious to anyone deep in the topic. In this blog, I show how a complete newbie (me) can use Tableau and become less ignorant.

So my very first task was to understand what kind of objects are available for rent, what are their prices and so on. That is the typical task for any new subject area.

As I said before, Tableau 10 can work with JSON files natively, but the question was whether it could handle a JSON as complex as mine. I started a new project and opened my JSON file.

[image]

I expected that I would have to simplify it somehow. But in reality, after a few seconds of waiting, Tableau displayed the full structure of my JSON, and all I had to do was select the branches I needed.

[image]

After a few more seconds I got a normal Tableau data source.

[image]

And this is how it looked in analysis mode:

[image]

First Look at the Data

OK, let's get started. The first question is obvious: "What types of property are available for rent?". Well, it seems that name ('2 bedroom apartment to rent') is what I need. I created a table report for this field.

[image]

Well, it gives me a first impression of what is offered and what my next step should be. First of all, the names end with "to rent", which just makes the strings longer without adding any value. The word "bedroom" also doesn't look important. Ideally, I'd like to parse these strings into two fields: # of bedrooms and Property type. The most obvious thing to try is the Split function.

[image]

Well, it partially worked. The function is smart enough to remove the 'to rent' part, but beyond that it gave me nothing. On other datasets (other cities) it gave much better results, but it still couldn't read my mind and do what I wanted:

[image]

But it cost me 15 seconds and I lost nothing, and if it had worked I'd have saved a lot of time. Anyway, I'm too old to believe in magic, so this barely hurt my feelings.

Some people, when confronted with a problem, think “I know, I'll use regular expressions.” Now they have two problems.

Yes, this string literally asks for some regular expression wizardry.

[image]

I can easily use REGEXP_EXTRACT_NTH and get what I want. Group 1 is the number of bedrooms and Group 3 is the property type. Groups 2 and 4 are just constant words.

[image]

Explanation for my regular expression: I can describe most of the names as "digit bedroom property type to rent", and the rest as "property type to rent". So the digit and "bedroom" are optional, while the property type and "to rent" are mandatory. The expression is easy and obvious: ([0-9]*)( bedroom )*(.*)( to rent)
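The same expression behaves identically outside Tableau; here is a quick check with Python's re module (the helper function and its names are mine):

```python
import re

# Group 1 = number of bedrooms, group 3 = property type;
# groups 2 and 4 are the constant words ' bedroom ' and ' to rent'.
NAME_RE = re.compile(r'([0-9]*)( bedroom )*(.*)( to rent)')

def parse_name(name):
    """Split a listing name into (# of bedrooms, property type)."""
    m = NAME_RE.match(name)
    if not m:
        return None, name
    bedrooms = int(m.group(1)) if m.group(1) else None
    return bedrooms, m.group(3)

print(parse_name('2 bedroom apartment to rent'))  # (2, 'apartment')
print(parse_name('Studio flat to rent'))          # (None, 'Studio flat')
```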

Regular expressions are one of my favourite hammers and helped me a lot for this analysis. And after all manipulations, I got a much better view of the data (I skipped some obvious steps like create a crosstab or a count distinct measure to save space for anything more interesting).

[image]

And while this result looks pretty simple, it gives me a first insight I couldn't get simply by browsing the site. The most commonly offered are 1 and 2 bedroom properties, especially flats and apartments. And if a family needs something bigger, with 4 or 5 bedrooms, well, I wish them good luck: there are not many offers to choose from. Also, if we talk about living property only, we should filter out things like GARAGE, PARKING or LAND.

[images]

I think both charts work pretty well. The first one presents a nice view of how flats and apartments outnumber all other types, and the second gives a much better understanding of how many 2 bedroom properties are offered compared to all the others.

And while I'm not a big fan of fancy visualisations, if you need something less formal and more eye-catching, try a Bubbles chart. It's not something I'd recommend for analysis, but it may work well for a presentation. Every bubble represents a particular property type, colour shows the number of bedrooms, and size shows the number of properties.

[image]

Going Deeper

The next obvious question is the price. How much do different properties cost? Is any particular one more expensive than average or less? What influences the price?

As a baseline, I'd like to know what is the average property price. And I obviously don't want just one figure for the city-wide price. It's meaningless. Let's start with a bar chart and see what is the range of prices.

[image]

Well, we have a lot of options. A flat share costs less than £700, or we may choose a barn for more than £3600. Again, a very simple result, but one I can't get directly from the site.

The next obvious question is how the number of bedrooms affects the price. Does the price skyrocket with every additional bedroom, or do more bedrooms mean smaller rooms and a price that grows more slowly?

[image]

Well, this chart gives me the answer, but it looks bad, mostly because many property types don't have enough variance in bedroom count. Studio flats have only one bedroom by definition, and the only converted barn has 7 bedrooms. I'd like to remove the types that don't have at least 3 options and see how the price changes. For this, I created a new calculated field using the fixed keyword. It counts the number of bedroom options by property type.

[image]

And then I used it in a filter: 'Bedroom # variance' at least 3. Now I have a much cleaner view, and I can see that more bedrooms typically mean a significantly higher price, with a few exceptions. But in fact these are not real exceptions, just an artefact of a small dataset. I can say that an increase in the number of bedrooms certainly means a significant increase in price. And one more insight: going above 7 bedrooms may actually double the price.

[image]
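Outside Tableau, the same LOD-style logic can be sketched in plain Python: count the distinct bedroom options per property type, then keep the types with at least three (toy data, names mine):

```python
from collections import defaultdict

# (property type, bedrooms) pairs: a toy stand-in for the real dataset
listings = [
    ('flat', 1), ('flat', 2), ('flat', 3),
    ('studio flat', 1), ('studio flat', 1),
    ('house', 2), ('house', 3), ('house', 4),
]

# Equivalent of { FIXED [Property type] : COUNTD([Bedrooms]) }
options = defaultdict(set)
for ptype, bedrooms in listings:
    options[ptype].add(bedrooms)

# Keep only types with at least 3 distinct bedroom counts
varied = sorted(t for t, beds in options.items() if len(beds) >= 3)
print(varied)  # ['flat', 'house']
```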

Averages are good, but they hide important information about how prices are distributed. For example, six properties priced at £1K and one at £200 give an average of £885, and looking at the average alone may make you think that with £900 you can choose one of 7 options. It's very easy to build a chart to check this. Just create a new calculation called Bins and use it in a chart.

[images]

With £100 bins I got the following chart. It shows how many properties have a price falling into a particular range. For example, the £1000 bin shows the number of properties priced £1000-£1100.

[image]
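The Bins calculation itself is just flooring to the bin width, something like FLOOR([Price] / 100) * 100 in Tableau terms, or in Python:

```python
from collections import Counter

def price_bin(price, width=100):
    """Map a price to the lower edge of its bin, e.g. 1037 -> 1000."""
    return (price // width) * width

prices = [650, 695, 1040, 1090, 1250, 1410, 1480]   # toy prices
histogram = Counter(price_bin(p) for p in prices)
print(sorted(histogram.items()))
# [(600, 2), (1000, 2), (1200, 1), (1400, 2)]
```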

The distribution looks more or less as expected, but the most interesting thing here is that the £1000-£1100 interval seems to be very unpopular. Why? Let's add the number of bedrooms to this chart.

[image]

£1000 is too expensive for 1 bedroom and studios but too cheap for two. Simple. What else can we do here before moving further? Converting this chart to a running total gives a cool view.

[image]

What can this chart tell us? For example, if we look at the orange line (2 bedrooms), we find that with £1200 we may choose among 277 of 624 properties, and with a £1400 budget we have 486 of 624. A further £200 increase won't expand the choice much: while going from £1200 to £1400 almost doubled the number of options, the next £200 adds only 63 new ones. I don't have a ready-to-use insight here, but I got a way to estimate a budget for a particular type of property: with budget £X I will be able to choose one of N properties.
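The running total is nothing more than a cumulative sum over the price bins; in Python it is one call to itertools.accumulate (toy counts, loosely chosen to mirror the 277/486/549 figures above):

```python
from itertools import accumulate

# 2-bedroom properties per £200 price bin (toy numbers, not the real data)
bin_counts = [30, 80, 167, 209, 63, 40]
running = list(accumulate(bin_counts))
print(running)  # [30, 110, 277, 486, 549, 589]
```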

Why It Costs What It Costs

OK, now I know plenty of statistics about prices, and my next question is about the factors affecting them. I'd like to understand whether a particular property is worth what it costs. Of course, I won't be able to determine the exact price, but even hints may be useful.

The first hypothesis I want to check is whether a nearby train station raises the price or doesn't matter at all. I made a chart very similar to the previous one, and it seems that the Pareto principle works perfectly here: 80% of properties are closer than 20% of the maximum distance to a station.

[image]

But this chart doesn't say anything about the price; it just shows how densely train stations are placed. I'd say that most properties have a station within 10-15 minutes' walk, and therefore this should not significantly affect the price. My next chart is a scatter plot of price and distance. Every point is a property, its coordinates determined by its price and its distance to the nearest station. Colour shows the number of bedrooms.

[image]

I'd say that this chart shows no clear correlation between price and distance. And a more classical line chart shows that.

[image]

The maximum price slightly decreases with distance, while the minimum price, on the contrary, increases; the average price is more or less constant. I think the hypothesis is busted: there is no clear correlation between the distance a tenant has to walk to a station and the price he has to pay. If you want to rent something and the landlord says the price is high because of a nearby train station, tell him that there are stations all around and he should find something more interesting.

What about furnishings? Is it cheaper to get an unfurnished property, or will a landlord be happy to meet someone who shares his taste?

[image]

An unfurnished property is definitely cheaper, and it's interesting that in some cases partly furnished is even cheaper than completely unfurnished. But at least for furnished/unfurnished we can see a clear correlation. When you see a furnished property for the price of an unfurnished one, it may be a good pennyworth.

Another thing I'd like to check: can we expect a lower price for a property that is not available immediately? Or, on the contrary, is the best price offered for already unoccupied properties?

As always start with a general picture. What is the average time of availability by property types?

[image]

For the most popular types it is about one month, and if you have a house you typically publish it two or three months in advance. And what about the price? Here is one more chart that I like in Tableau. In a nutshell, it is a normal line chart showing the average price by days until the property becomes available, but the thickness of the lines shows the number of properties at each point. So I can see not only the price but its reliability too: a thick line is formed by many properties, while a thin line may be formed by just a few and can move up or down significantly when something changes. It would be very interesting to get historical data and see how long properties stay free, or how long it takes before the price is reduced, but unfortunately I don't have this data.

[image]

And looking at this chart, I'd say there is no statistically significant dependency between price and availability date. Renting a property available in the distant future won't save you money* (*=statistically).

And the last thing I'd like to investigate is the Key features. What do landlords put as the key features of their properties? How do they affect the price?

The list of popular Key features surprised me.

[image]

'Unfurnished' looks good to me; it is a really significant part of the deal. But 'Brighton'? For properties in Brighton? '1 Bedroom'? How many bedrooms can a '1 bedroom flat to rent' have? Oh, there is a key feature saying '1 bedroom', now I know. But jokes aside, I had to do a lot of cleaning on this data before I could use it. There are six ways to write 'Modern kitchen'. Make everything upper case, then remove quotes, strip spaces and tabs, remove noisy features like 'stylish 1 bedroom apartment', and so on. After this, I got a slightly better list with approximately 3,500 features instead of 4,500. Note how all the variants of writing 'GAS CENTRAL HEATING' are now combined into a single most popular feature. But there are still too many features; I'm sure there should be no more than a hundred. Even in this screenshot you can see both 'Unfurnished' and 'Unfurnished property'.

[image]
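The cleaning steps just described (upper-case everything, drop quotes, collapse whitespace) can be sketched as a small normalisation function; the real cleaning involved more rules than this:

```python
import re

def normalise_feature(raw):
    """Collapse cosmetic variants of a key feature into one canonical form."""
    f = raw.upper()                          # unify case
    f = f.replace('"', '').replace("'", '')  # remove quotes
    f = re.sub(r'\s+', ' ', f).strip()       # collapse spaces and tabs
    return f

variants = ['GAS CENTRAL HEATING', 'Gas Central Heating ', '  gas\tcentral heating']
print({normalise_feature(v) for v in variants})  # {'GAS CENTRAL HEATING'}
```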

When I need a visualisation for this number of points, bar charts or tables won't work well. My weapon of choice is the scatter plot. Every point is a particular feature, the axes are its minimum and average prices, size is determined by the number of properties declaring the feature, and colour is the maximum price. So if a feature is located high on the plot, it is on average expensive to have; if it is also close to the left side, even cheap properties may have it. For example, if you want a swimming pool, be ready to pay at least £3000, and £7000 on average. The minimum price for a tumble dryer is £3250, but the average is £3965. So the cheapest property with a dryer is more expensive than the cheapest with a pool, but on average pools cost more. That is how this chart works.

[image]

The problems with this chart are obvious: it is littered with unique features. Only one property has 4 acres (the point in the top right corner), and actually not many swimming pools are available for rent in Brighton. I filtered it by "# of properties > 25", and here is how prices for the most popular features are distributed.

[image]

A central location will cost you at least £100 and £1195 on average, while for a great location be ready to pay at least £445 and £1013 on average. A great location seems to be less valuable than a central one.

And now I can see how a particular feature impacts prices. Take 'GAS HEATING', for example. I made a set with all the variants of heating I could find ('GAS CENTRAL HEATING', 'GAS HEAT' and so on), so now I can analyse how this feature impacts properties. Here is how it impacts the price of flats: blue circles are properties with gas heating, orange are without.

[image]

Very interesting, in my opinion. The minimum price of properties with gas heating (blue circles) is higher than without, which is expected. But the average price of properties without gas heating is higher.

And here are kitchen appliances. For 1 bedroom flats, they increase both the minimum and the average price significantly. But for bigger flats, the minimum price with appliances is higher while the average price is lower. Possibly this option matters for relatively cheap properties, but its weight is not that big for larger ones.

[image]

Summary

[image]

Categories: BI & Warehousing

Webcast: Testing Oracle E-Business Suite Best Practices

Steven Chan - Thu, 2017-06-22 02:00

Oracle University has a large collection of free recorded webcasts that EBS system administrators may find useful.  Here's a great webcast on testing E-Business Suite:

Prashanti Madireddi, Senior Director of Product Development, provides an overview of how the Quality Assurance team tests Oracle E-Business Suite. This session covers the main areas to consider during functional testing, approaches for new-feature and regression testing, how to reduce test script generation and execution time, experiences with capturing and presenting metrics to showcase the ROI of the testing investment, leveraging automation for testing Oracle E-Business Suite applications, and more. This material was presented at Oracle OpenWorld 2015.

Related Articles

Categories: APPS Blogs

DENSE_RANK - AGGREGATE function

Tom Kyte - Wed, 2017-06-21 22:26
Below is the employee table:

EID  NAME    SALARY  COMM_PCT
100  Steven  24000
101  Neena   17000   0.6
102  Lex     17000   0.8
145  John    14000   0.4

If I run the select below against the employee table, I get the output 3. SELECT DENSE_RANK(17000,0.6) W...
Categories: DBA Blogs
