
Feed aggregator

OTN Virtual Technology Summit - Spotlight on Java Track

OTN TechBlog - Thu, 2015-06-25 14:19

The OTN Virtual Technology Summit is a quarterly series of interactive online events featuring hands-on sessions by expert presenters drawn from the community. The events are free, but registration is required. Each event has four tracks: Java, Database, Systems, and Middleware. Registration gets you access to all four tracks, along with on-demand access to all sessions after the event so you can binge on all that technical expertise. 

Here's the skinny on the Java track for the next event.

Java Sessions:

Docker for Java Developers
By Roland Huss
Docker, the OS-level virtualisation platform, is taking the IT world by storm. In this session, we will see what features Docker has for us Java developers. It is now possible to create truly isolated, self-contained and robust integration tests in which external dependencies are realised as Docker containers. Docker also changes the way we ship applications in that we are not only deploying application artifacts like WARs or EARs but also their execution contexts. Besides elaborating on these concepts and more, this presentation will focus on how Docker can best be integrated into the Java build process by introducing a dedicated Docker Maven plugin, which is shown in a live demo.

Pi on Wheels - Make Your Own Robot
By Roland Huss
The Pi on Wheels is an affordable open source DIY robot that is ideal for learning Java-related technologies in the context of the Internet of Things. In this session we will talk about how 3D printing works and how it can be utilized to build robots. The most fascinating aspect of 3D printing is that it is astonishingly easy to customize the robot. It allows you to build something completely new and different. We provide a Java based IDE that allows you to control and program the robot. In addition to that it can be used to programmatically design 3D geometries.

Shakespeare Plays Scrabble
By Jose Paumard
This session will show how lambdas and Streams can be used to solve a toy problem based on Scrabble. We are going to solve this problem with the Scrabble dictionary, the list of the words used by Shakespeare, and the Stream API. The three main steps shown will be the mapping, filtering and reduction. The mapping step converts a stream of a given type into a stream of another type. Then the filtering step is used to sort out the words not allowed by the Scrabble dictionary. Finally, the reduction can be as simple as computing a max over a given stream, but can also be used to compute more complex structures. We will use these tools to extract the three best words Shakespeare could have played.

OTN Wants You!

Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!

NEW REWARDS! If you attend this virtual technology summit and are a member of the Oracle Technology Network Community you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it: Oracle Community - Rewards & Recognition FAQ.

RAC buffer states: XCUR, SCUR, PI, CI

Yann Neuhaus - Thu, 2015-06-25 13:43

In RAC, blocks are copied across instances by the Global Cache Service. In single instance, we have only two statuses: CR for consistent read clones where undo is applied, and CUR for the current version that can be modified (then being a dirty block). It's a bit more complex in RAC. Here is a brief example to show the buffer statuses in the Global Cache.

SCUR: shared current

I connect to one instance. (I have a few singleton services: service ONE is on instance 3 and service TWO is on instance 1.)

SQL> connect demo/demo@//192.168.78.252/ONE.racattack
Connected.
and I query a row by ROWID in order to read only one block
SQL> select rowid,DEMO.* from DEMO where rowid='&rowid1';
old   1: select rowid,DEMO.* from DEMO where rowid='&rowid1'
new   1: select rowid,DEMO.* from DEMO where rowid='AAAXqxAALAAACUkAAD'

ROWID                      ID          N
------------------ ---------- ----------
AAAXqxAALAAACUkAAD         10         10
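For completeness: the ROWID substituted into &rowid1 was captured beforehand. As a quick sketch (assuming the same DEMO table used throughout this example), you can pick one up with:
SQL> select rowid, id, n from DEMO where rownum = 1;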
Here is the status of the buffer in the buffer cache:
SQL> select inst_id,class#,status,lock_element_addr,dirty,temp,ping,stale,direct,new from gv$bh where objd=(select data_object_id from dba_objects where owner='DEMO' and object_name='DEMO') and status!='free' order by inst_id;

   INST_ID     CLASS# STATUS     LOCK_ELEMENT_ADD D T P S D N
---------- ---------- ---------- ---------------- - - - - - -
         3          1 scur       00000000B9FEA060 N N N N N N
The block has been read from disk by my instance. Without modification it is in SCUR status: it's the current version of the block and can be shared.

SCUR copies

Now connecting to another instance

SQL> connect demo/demo@//192.168.78.252/TWO.racattack
Connected.
and reading the same block
SQL> select rowid,DEMO.* from DEMO where rowid='&rowid1';
old   1: select rowid,DEMO.* from DEMO where rowid='&rowid1'
new   1: select rowid,DEMO.* from DEMO where rowid='AAAXqxAALAAACUkAAD'

ROWID                      ID          N
------------------ ---------- ----------
AAAXqxAALAAACUkAAD         10         10
let's see what I have in my Global Cache:
SQL> select inst_id,class#,status,lock_element_addr,dirty,temp,ping,stale,direct,new from gv$bh where objd=(select data_object_id from dba_objects where owner='DEMO' and object_name='DEMO') and status!='free' order by inst_id,lock_element_addr;

   INST_ID     CLASS# STATUS     LOCK_ELEMENT_ADD D T P S D N
---------- ---------- ---------- ---------------- - - - - - -
         1          1 scur       00000000B0FAADC0 N N N N N N
         3          1 scur       00000000B9FEA060 N N N N N N
non modified blocks can be shared: I have a copy on each instance.

XCUR: exclusive current

I'll start a new case, so I flush the buffer cache

connecting to the first instance

SQL> connect demo/demo@//192.168.78.252/ONE.racattack
Connected.
I'm now doing a modification with a select for update (which writes the lock in the block, so it's a modification)
SQL> select rowid,DEMO.* from DEMO where rowid='&rowid1' for update;
old   1: select rowid,DEMO.* from DEMO where rowid='&rowid1' for update
new   1: select rowid,DEMO.* from DEMO where rowid='AAAXqxAALAAACUkAAD' for update

ROWID                      ID          N
------------------ ---------- ----------
AAAXqxAALAAACUkAAD         10         10
now the status in buffer cache is different:
SQL> select inst_id,class#,status,lock_element_addr,dirty,temp,ping,stale,direct,new from gv$bh where objd=(select data_object_id from dba_objects where owner='DEMO' and object_name='DEMO') and status!='free' order by inst_id,lock_element_addr;

   INST_ID     CLASS# STATUS     LOCK_ELEMENT_ADD D T P S D N
---------- ---------- ---------- ---------------- - - - - - -
         3          1 cr         00               N N N N N N
         3          1 xcur       00000000B9FEA060 Y N N N N N
So I have two buffers for the same block. The buffer that was read earlier will not be current anymore because it has the rows as they were before the modification; it stays in consistent read (CR) status. The modified one is then the current one, but it cannot be shared: it's the XCUR buffer where modifications will be done.

CR consistent read

Now I'll read it from the second instance

SQL> connect demo/demo@//192.168.78.252/TWO.racattack
Connected.
SQL> select rowid,DEMO.* from DEMO where rowid='&rowid1';
old   1: select rowid,DEMO.* from DEMO where rowid='&rowid1'
new   1: select rowid,DEMO.* from DEMO where rowid='AAAXqxAALAAACUkAAD'

ROWID                      ID          N
------------------ ---------- ----------
AAAXqxAALAAACUkAAD         10         10
the block is read and I've another CR buffer:
SQL> select inst_id,class#,status,lock_element_addr,dirty,temp,ping,stale,direct,new from gv$bh where objd=(select data_object_id from dba_objects where owner='DEMO' and object_name='DEMO') and status!='free' order by inst_id,lock_element_addr;

   INST_ID     CLASS# STATUS     LOCK_ELEMENT_ADD D T P S D N
---------- ---------- ---------- ---------------- - - - - - -
         1          1 cr         00               N N N N N N
         3          1 cr         00               N N N N N N
         3          1 xcur       00000000B9FEA060 Y N N N N N
the CR buffer is at another SCN. A block can have several CR copies (by default up to 6 per instance).

PI: past image

Let's do a modification from the other instance

SQL> connect demo/demo@//192.168.78.252/TWO.racattack
Connected.
SQL> select rowid,DEMO.* from DEMO where rowid='&rowid1' for update;
old   1: select rowid,DEMO.* from DEMO where rowid='&rowid1' for update
new   1: select rowid,DEMO.* from DEMO where rowid='AAAXqxAALAAACUkAAD' for update

ROWID                      ID          N
------------------ ---------- ----------
AAAXqxAALAAACUkAAD         10         10
My modification must be done on the current version, which must be shipped to my instance
SQL> select inst_id,class#,status,lock_element_addr,dirty,temp,ping,stale,direct,new from gv$bh where objd=(select data_object_id from dba_objects where owner='DEMO' and object_name='DEMO') and status!='free' order by inst_id,lock_element_addr;

   INST_ID     CLASS# STATUS     LOCK_ELEMENT_ADD D T P S D N
---------- ---------- ---------- ---------------- - - - - - -
         1          1 cr         00               N N N N N N
         1          1 cr         00               N N N N N N
         1          1 xcur       00000000B0FAADC0 Y N N N N N
         3          1 cr         00               N N N N N N
         3          1 pi         00000000B9FEA060 Y N N N N N
and the previous current version remains as a PI - past image. It cannot be used for consistent reads but it is kept for recovery: if the current block is lost, redo can be applied to the past image to recover it. See Jonathan Lewis' explanation.

Checkpoint

As the past images are there in case of recovery, they are not needed once an instance has checkpointed the current block.

SQL> connect sys/oracle@//192.168.78.252/ONE.racattack as sysdba
Connected.
SQL> alter system checkpoint;
System altered.
after the checkpoint on the instance that has the XCUR, there is no dirty buffer in any instance:
SQL> select inst_id,class#,status,lock_element_addr,dirty,temp,ping,stale,direct,new from gv$bh where objd=(select data_object_id from dba_objects where owner='DEMO' and object_name='DEMO') and status!='free' order by inst_id,lock_element_addr;

   INST_ID     CLASS# STATUS     LOCK_ELEMENT_ADD D T P S D N
---------- ---------- ---------- ---------------- - - - - - -
         1          1 cr         00               N N N N N N
         1          1 cr         00               N N N N N N
         1          1 xcur       00000000B0FAADC0 N N N N N N
         3          1 cr         00               N N N N N N
         3          1 cr         00               N N N N N N
the PI became a consistent read.

Summary

Here are the states we have seen here:

XCUR: current version of the block - holding an exclusive lock for it

SCUR: current version of the block that can be shared because no modifications were done

CR: only valid for consistent read, after applying the necessary undo to get it back to the required SCN

PI: past image of a modified current block, kept until the latest version is checkpointed

and the other possible states:

FREE: The buffer is not currently in use.

READ: when the block is being read from disk

MREC: when the block is being recovered for media recovery

IREC: when the block is being recovered for crash recovery

Oracle Priority Support Infogram for 25-JUN-2015

Oracle Infogram - Thu, 2015-06-25 13:36

RDBMS
12c Parallel Execution New Features: 1 SLAVE distribution, from Oracle related stuff.
Package Differences between Oracle 11.2.0.4 and 12.1.0.2?, from Upgrade your Database - NOW!
Solaris
The Solaris 10 Recommended patchset really does contain ALL available OS security fixes!, from Patch Corner.
Ops Center
From the Ops Center blog: Ops Center 12.3 is Released
Java
Building Simple Java EE REST Service Using Oracle JDeveloper 12c, from Oracle Partner Hub: ISV Migration Center Team.
Asynchronous Processing, from The Java Source.
JDeveloper and ADF
The 10 Most Recently Created Notes for JDeveloper/ADF as of 22 June 2015, from Proactive Support - Java Development using Oracle Tools.
ADF 12c – Allow user to personalize the form items at run time using MDS based Change Persistence, from WebLogic Partner Community EMEA.
And from the same source:
ADF 12c – Allow user to personalize the form items at run time using MDS based Change Persistence
Change Default JSESSION ID Name for ADF Application
MAF and WebSockets Integration – Live Twitter Stream
NetBeans
Take Early JDK 9 For A Spin In Early NetBeans 9, from Geertjan's Blog.
JavaScript
Is WebAssembly the (Eventual) Death of JavaScript?, from Motherboard.
Web Computing
HTTP/2 and Server Push, from The Aquarium.
Security
IT security: Attacks with unknown malware are increasing significantly, from EMEA Midsize Blog.
EBS
From the Oracle E-Business Suite Support blog:
Just Released! Version 200.2 of the iProcurement Item Analyzer
Webcast: Work In Process Scrap - Costing Overview
What's New - Content in Receivables (AR) and Related Products
From the Oracle E-Business Suite Technology blog:
Using a Reverse Proxy as an SSL/TLS Termination Point for EBS 12.1.3
…And Finally

I’m (finally!) back on iPhone after an unpleasant jaunt into Android telephony and found a weather app I love. It makes use of the built-in barometer on the iPhone combined with user reporting, radar, etc. to give really accurate predictions on precipitation. I tested it the other day with an approaching thunderstorm and it was accurate on the arrival and intensity of the rain to within a few minutes: Dark Sky update looks to 'revolutionize weather forecasting' by tapping iPhone sensors, from The Verge.

Hive (HiveQL) SQL for Hadoop Big Data

Kubilay Çilkara - Thu, 2015-06-25 13:30


In this post I will share my experience with an Apache Hadoop component called Hive, which enables you to run SQL on an Apache Hadoop Big Data cluster.

Being a great fan of SQL and relational databases, this was my opportunity to set up a mechanism to transfer some (a lot of) data from a relational database into Hadoop and query it with SQL. Not a very difficult thing to do these days; actually it is very easy with Apache Hive!

Having access to a Hadoop cluster with the Hive module installed is all you need. You can provision a Hadoop cluster yourself by downloading it and installing it in pseudo-distributed mode on your own PC, or you can run one in the cloud with Amazon AWS EMR in a pay-as-you-go fashion.

There are many ways of doing this; just Google it and you will be surprised how easy it is. It is easier than it sounds. Below are the pieces you need to install on your own PC (Linux). Just download and install Apache Hadoop and Hive from the Apache Hadoop Downloads page.

You will need to download and install 3 things from the above link.

  • Hadoop (HDFS and Big Data Framework, the cluster)
  • Hive (data warehouse module)
  • Sqoop (data importer)

You will also need to put the JDBC connector for the database you want to extract data from (Oracle, MySQL...) into the */lib folder of your Sqoop installation; the MySQL JDBC connector, for example, can be downloaded from here. Don't expect loads of tinkering when installing Apache Hadoop, Hive or Sqoop: it is just a matter of unzipping the binary extracts and making a few line changes in some config files, that's all. It is not a big deal, and it is free. There are tons of tutorials on the internet for this; here is one I used from another blogger, bogotobogo.


What is Hive?

Hive is Big Data SQL, the Data Warehouse in Hadoop. You can create tables, indexes, partitioned tables, external tables and views, just like in a relational database Data Warehouse. You can run SQL to do joins and to query the Hive tables in parallel using the MapReduce framework. It is actually quite fun to see your SQL queries translated into MapReduce jobs and run in parallel, like the parallel SQL queries we run on Oracle EE Data Warehouses and other databases. :0) The syntax looks very much like MySQL's SQL syntax.
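
To give a flavour of the syntax, here is a small sketch of Hive DDL for an external, partitioned table (the table name, columns and location are made up for illustration):

CREATE EXTERNAL TABLE page_views (
  user_id BIGINT,
  url     STRING,
  ts      STRING
)
PARTITIONED BY (view_date STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/user/hive/warehouse/page_views';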

Hive is NOT an OLTP transactional database: it does not have the INSERT, UPDATE, DELETE transactions of an OLTP database, and it doesn't conform to ANSI SQL or the ACID properties of transactions.


Direct insert into Hive with Apache Sqoop:
After you have installed Hadoop, have Hive set up and are able to log in to it, you can use Sqoop - the data importer of Hadoop - as in the following command to directly import a table from MySQL via JDBC into Hive using MapReduce.
$ sqoop import --connect jdbc:mysql://mydatabasename --username kubilay -P --table mytablename --hive-import --hive-drop-import-delims --hive-database dbadb --num-mappers 16 --split-by id
Sqoop import options explained:
  •  -P will prompt for the password
  • --hive-import makes Sqoop import the data straight into a Hive table, which it creates for you
  • --hive-drop-import-delims drops \n, \r and \01 from string fields when importing to Hive
  • --hive-database tells it which Hive database to import into; otherwise it goes to the default database
  • --num-mappers is the number of parallel map tasks to run, like parallel processes / threads in SQL
  • --split-by is the column of the table used to split the work units, like a partitioning key in database partitioning

The above command will import any MySQL table you give in place of mytablename, from the MySQL database you specify, into Hive using MapReduce.

Once you have imported the table you can log in to Hive and run SQL against it like in any relational database. On a properly configured system you can log in to Hive just by calling hive from the command line like this:

$ hive
hive> 
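
From that prompt you can run ordinary HiveQL against the imported table; each query is translated into MapReduce jobs behind the scenes. Here is a sketch, assuming a table imported as mytablename with made-up department and salary columns:

SELECT department,
       COUNT(*)    AS employees,
       AVG(salary) AS avg_salary
FROM   mytablename
GROUP  BY department
ORDER  BY avg_salary DESC;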


More Commands to list jobs:

A couple of other commands I found useful when I was experimenting with this:

List running Hadoop jobs

hadoop job -list

Kill running Hadoop jobs

hadoop job -kill job_1234567891011_1234

List particular table directories in HDFS

hadoop fs -ls mytablename


More resources & Links



Categories: DBA Blogs

How Engaged Are Your OBIEE Users?

Rittman Mead Consulting - Thu, 2015-06-25 10:58

Following on from Jon’s blog post “User Engagement: Why does it matter?”, I would like to take this one step further by talking about measurement. At Rittman Mead we believe that if you can’t measure it, you can’t improve it. So how do you measure user engagement?

Metrics

User engagement for OBIEE is like most web based products or services:

  • both have users who access the product or service and then take actions.
  • users of both use it repeatedly if they get value from those actions.

A lot of thought has gone into measuring the customer experience and engagement for web based products and services. Borrowing some of these concepts will help us understand how to measure user engagement for BI solutions.

We look at three metrics:

  • Frequency of use
  • Recency of use
  • Reach of the system

Usage Tracking Data

OBIEE offers visibility of what its users are doing through its Usage Tracking feature; we can use this to drive our metrics.

Figure 1


As we can see from Figure 1, the usage tracking data can support our three metrics.

Frequency of use
  • Number of times a user or group of users visit in a specific period (Day / Month / Year)
  • Number of times a dashboard / report is accessed in a specific period.
  • How are these measures changing over time?

Recency of use
  • How recently was a report / dashboard used by relevant user groups?
  • What are the average days between use of each report / dashboard by relevant use group?
  • Number of dashboards / reports used or not used in a specific period (Day / Month / Year)
  • Number of users that have used or not used OBIEE in a specific period (Day / Month / Year)
  • How are these changing over time?

Reach of the system
  • Overall number of users that have used or not used OBIEE. This can be further broken down by user groups.
  • How is it changing over time?

User engagement KPI perspective

We have compared BI solutions to web-based products and services earlier in this post. Let’s look at some popular KPIs that many web-based products use to measure engagement and how they can be used to measure OBIEE engagement.

  • Stickiness: Generally defined as the amount of time spent at a site over a given period.
  • Daily Active Users (DAU): Number of unique users active in a day
  • Monthly Active Users (MAU): Number of unique users active in a month.

DAU and MAU are also used as a ratio (DAU / MAU) to give an approximation of utility.
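
As an illustration, DAU, MAU and their ratio can be computed directly from the usage tracking data. The sketch below assumes the standard S_NQ_ACCT usage tracking table with its USER_NAME and START_TS columns; adjust the names to your own configuration:

-- daily and monthly active users over the last month (assumed S_NQ_ACCT layout)
SELECT COUNT(DISTINCT CASE WHEN start_ts >= TRUNC(SYSDATE) THEN user_name END) AS dau,
       COUNT(DISTINCT user_name)                                               AS mau,
       ROUND(COUNT(DISTINCT CASE WHEN start_ts >= TRUNC(SYSDATE) THEN user_name END)
             / NULLIF(COUNT(DISTINCT user_name), 0), 2)                        AS dau_mau_ratio
FROM   s_nq_acct
WHERE  start_ts >= ADD_MONTHS(TRUNC(SYSDATE), -1);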

The R&D division of Rittman Mead has developed the Rittman Mead User Engagement Toolkit, a set of tools and reports to capture and visualise user engagement metrics. The example charts given below have been developed using the R programming language.

Figure 2 – DAU over time with a trailing 30-day average (Red line)


Figure 3 – Forecast DAU/MAU for 30 days after the data was generated


What Can You Do With These Insights?

Recall that Jon’s blog post points out the following drivers of user engagement:

  • User interface and user experience
  • Quality, relevance, and confidence in data
  • Performance
  • Ability to use the system
  • Accessibility – is the system available in the right way, at the right time?

There are several actions you can take to influence the drivers as a result of monitoring the aforementioned metrics.

  • Identify users or groups that are not using the system as much as they used to. Understand their concerns and address the user engagement drivers that are causing this.
  • Verify usage of any significant enhancement to the BI solution over time.
  • Analyse one of the key drivers, performance, from usage data.
  • Determine peak usage to project future hardware needs.

Conclusion

User engagement reflects how much value users get from their OBIEE systems. Measuring it on an ongoing basis is important and can be done with some standard metrics and KPIs.

Future blog posts in this series will address some of the key drivers behind user engagement in addition to providing an overview of the Rittman Mead User Engagement Toolkit.

If you are interested in hearing more about User Engagement please sign up to our mailing list below.


Other posts in this series
Categories: BI & Warehousing

Is Cloud File Sharing Enough for Your Business?

WebCenter Team - Thu, 2015-06-25 09:08

As consumers, we have gotten accustomed to the convenience of file sync and share (FSS) solutions like DropBox, WeTransfer, et al. Gone are the days when we would email files, videos and pictures to our friends and family. Using free consumer services we now send our files into the ether and they land in the hands of our loved ones. In fact, I have noticed that I am agnostic about which consumer cloud solution I use: so long as it is free, sign-up is not a hassle and the size limits are acceptable, I just go for it.

In the enterprise world though, it is a bit different. The key differences are:

- File sharing is a means to an end. We don't simply send a file in ether or wait to receive those. The real need is around work collaboration. The idea is to not "send" but "share" files with work colleagues - either within our office, in a different location or sometimes even with a vendor, supplier or external company partner.

- Clearly the cloud solution needs to be scalable to allow for file storage and sharing not just among employees but also be able to accommodate the company's ecosystem.

- Because the goal is collaboration, the work needs to be real time. We would need the ability to be able to chat, converse or discuss as we share these files with others working on the same project or working off the same files. And wouldn't it be nice if it tied to the work productivity tools we have installed or are using like Office 365 and others? Oh, and while we are talking about what would be nice, it would be great if the cloud solution could figure out patterns of collaboration and recommend people to be added to the folder for sharing the documents, or based on usage pattern, discover relevant content.

- Again, because it is a means to an end, file sharing is likely part of a business process. So, it NEEDS to be tied to the business process where sharing is triggered as part of a process, and the end result on the documents further triggers off the next steps.

- While this would be cool to have in the consumer world too, in the enterprise world it is more of a need to be able to see/review/work on documents from within the application it is related to. So, for example, if I am working with my colleagues on a Request for Proposal (RFP) response, I would rather that it be embedded in my CRM as part of the opportunity so that my colleagues supporting me on that opportunity have ready access and automatic access to it and the document is related to other files I have on that account.

- One of the biggest advantages of an FSS solution is the ability to access the documents we need anytime, anywhere, on any device. But when the ecosystem is as big as a company's, and the breach/hacking/security risk is as big as it is for a company, the security stakes are far higher than for a consumer solution or even a feeble enterprise version of one. You need to be able to enforce security for your data sitting in the cloud, during transit and at access points (like mobile devices). Plus, there are obvious compliance requirements to be able to track and audit document access trails. And if your company has a global presence then data residency policies come into play as well for compliance.

- And when working in an enterprise, we play different roles. We work at times with sensitive documents, and other times with less rigorous ones. The company itself may need to be able to segregate cloud instances by user or by content and still provide users the flexibility of toggling between multiple accounts keeping it just as easy as a consumer solution to be able to share, send and collaborate on documents.

These are just a few reasons why consumer FSS pedigree solutions would not fit the bill for an enterprise. And because point cloud solutions lead to governance and management challenges, even some enterprise solutions fail the litmus test for most enterprises. Net-net, simple file sharing or sending is not enough for an enterprise. Collaboration in a digital workplace goes way beyond that. Here is an infographic (or simply click on the picture above) that summarizes these challenges. Take a look and let me know what you think?

Oracle Midlands : Event #10

Tim Hall - Thu, 2015-06-25 02:47

Just a quick heads-up about the next Oracle Midlands event. It’s good to encourage new speakers, so Mike is giving this new, unknown kid a shot at the limelight. I hope you will all come along to show your support.


Cheers

Tim…


Quiz Time. Why Do Deletes Cause An Index To Grow ? (Up The Hill Backwards)

Richard Foote - Thu, 2015-06-25 01:02
OK, time for a little quiz. One of the things I’ve seen at a number of sites is the almost fanatical drive to make indexes as small as possible because indexes that are larger than necessary both waste storage and hurt performance. Or so the theory goes …   :) In many cases, this drives DBAs to […]
Categories: DBA Blogs

another way to list invalid objects

Yann Neuhaus - Thu, 2015-06-25 01:00

How often did I type a query like this to list the invalid objects in a database?

select count(*)
  from dba_objects
 where status <> 'VALID';
    -- and owner in/not in

Today I learned another way to do the same.

Why do people show Azure so much love?

Tim Hall - Thu, 2015-06-25 00:56

The title of this post is taken from a tweet I saw a few weeks ago and it keeps coming back to haunt me, so I thought I would comment on it.

Let me start by saying I don’t have any context as to why the tweeter thought people were showing Azure so much love. From my perspective, I kind-of like Azure and I think it is what my employer will end up using, but I’m not a crazed fan-boy about it. :)

Also, I fully understand a move to the cloud is not the right thing for everyone, so this post is focused on those people who do want/need to move to the cloud. Just because it is not right for you, it doesn’t mean it’s not right for everyone. So when I’m talking about running services on the cloud, it is not a recommendation. I’m not telling you you’ve got to. I’m speaking about cloud services to try to explain why someone might say something like the title of this post. I’m hoping this paragraph will stem the hate-comments that invariably come when you mention the cloud. :)

Interface

The Azure interface is pretty neat. It’s clean and reasonably intuitive. I’m a casual user, so I can’t say how I would feel about it if I were managing hundreds or thousands of resources, but from my brief time with it, I like it.

I don’t dislike the AWS interface, but it does feel a bit more cluttered and ugly than the Azure interface. I guess that could be enough to put off some people maybe.

Services

Coming from the Oracle world, we tend to think of UNIX/Linux as being the centre of the universe, but if I think back to the companies I’ve worked for over the years, the majority of their kit has been Windows-based, with the exception of the bits I work on. :) Since most corporate desktops are still Windows-based, Outlook, Office and Active Directory tend to rule the roost. If you are thinking of moving those services on to the cloud, Azure seems the “obvious choice”. Am I saying they are the best products and Azure is the best place to run them? No. What I’m saying is it will be seen as the “obvious choice” for many people wanting to move to the cloud.

The same goes with SQL Server. I happen to like the AWS RDS for SQL Server implementation, but I’m guessing a lot of SQL Server folks will get a warmer and fuzzier feeling about running SQL Server on Azure. Lots of decisions in IT are based on gut instinct or personal bias of the buyers, not necessarily fact. I can see how someone will “feel happier” there.

Once the Oracle Cloud becomes generally available, we may see a similar issue there. People may feel happier about running Oracle products on the Oracle Cloud than on AWS or Azure. Time will tell.

What’s under the hood?

This is where cloud really turns stuff on its head. If I want to run a Linux VM, I can do that on AWS, Azure, Oracle Cloud, VMware vCloud Air etc. From my perspective, if the VM stays up and gives me the performance I paid for, do I really care about what’s under the hood? You can be snobbish about hypervisors, but do I care if Oracle are using less hardware to service the same number of VMs as Azure? No. Where infrastructure as a service (IaaS) is concerned, it is all about the price:performance ratio. As I’ve heard many times, it’s a race for the bottom.

Call me naive, but I really don’t care what is happening under the hood of a cloud service, provided I get what I pay for. I think this is an important factor in how someone like Microsoft can go from zero to hero of the cloud world. If they provide the right services at the right price, people will come.

Conclusion

Q: Why do people show Azure so much love?

A: Because it does what it is meant to do. It provides the services certain companies want at a price they are willing to pay. What’s not to love?

Q: So it’s the best cloud provider right?

A: That depends on your judging criteria. No one cloud provider is “the best”. For some people Azure will be the best option. For others it might be the worst.

Cheers

Tim…


68% of Statistics Are Meaningless, D2L Edition

Michael Feldstein - Wed, 2015-06-24 17:27

By Michael Feldstein

Two years ago, I wrote about how D2L’s analytics package looked serious and potentially ground-breaking, but that there were serious architectural issues with the underlying platform that were preventing the product from working properly for customers. Since then, we’ve been looking for signs that the company has dealt with these issues and is ready to deliver something interesting and powerful. And what we’ve seen is…uh…

…uh…

Well, the silence has ended. I didn’t get to go to FUSION this year, but I did look at the highlights of the analytics announcements, and they were…

…they were…

OK, I’ll be honest. They were incredibly disappointing in almost every way possible, and good examples of a really bad pattern of hype and misdirection that we’ve been seeing from D2L lately.

You can see a presentation of the “NEW Brightspace Insights(TM) Analytics Suite” here. I would embed the video for you but, naturally, D2L uses a custom player from which they have apparently stripped embedding capabilities. Anyway, one of the first things we learn from the talk is that, with their new, space-age, cold-fusion-powered platform, they “deliver the data to you 20 times faster than before.” Wow! Twenty times faster?! That’s…like…they’re giving us the data even before the students click or something. THEY ARE READING THE STUDENTS’ MINDS!

Uh, no. Not really.

A little later on in the presentation, if you listen closely, you’ll learn that D2L was running a batch process to update the data once every 24 hours. Now, two years after announcing their supposed breakthrough data analytics platform, they are proud to tell us that they can run a batch process every hour. As I write this, I am looking at my real-time analytics feed on my blog, watching people come and go. Which I’ve had for a while. For free. Of course, saying it that way, a batch process every hour, doesn’t sound quite as awesome as

TWENTY TIMES FASTER!!!!!

So they go with that.

There was an honest way in which they could have made the announcement and still sounded great. They could have said something like this:

You know, when LMSs were first developed, nobody was really thinking about analytics, and the technology to do analytics well really wasn’t at a level where it was practical for education anyway. Times have changed, and so we have had to rebuild Brightspace from the inside out to accommodate this new world. This is an ongoing process, but we’re here to announce a milestone. By being able to deliver you regular, intra-day updates, we can now make a big difference in their value to you. You can respond more quickly to student needs. We are going to show you a few examples of it today, but the bigger deal is that we have this new structural capability that will enable us to provide you with more timely analytics as we go.

That’s not a whole lot different in substance than what they actually said. And they really needed to communicate in a hype-free way, because what was the example that they gave for this blazing fast analytics capability? Why, the ability to see if students had watched a video.

Really. That was it.

Now, here again, D2L could have scored real points for this incredibly underwhelming example if they had talked honestly about Caliper and its role in this demo. The big deal here is that they are getting analytics not from Brightspace but from a third-party tool (Kaltura) using IMS Caliper. Regular readers know that I am a big fan of the standard-in-development. I think it’s fantastic that an LMS company has made an early commitment to implement the standard and is pushing it hard as differentiator. That can make the difference between a standard getting traction or remaining an academic exercise. How does D2L position this move? From their announcement:

With our previous analytics products, D2L clients received information on student success even before they took their first test. This has helped them improve student success in many ways, but the data is limited to Brightspace tools. The new Brightspace Insights is able to aggregate student data, leveraging IMS Caliper data, across a wide variety of learning tools within an institution’s technology ecosystem.

We’ve seen explosive growth in the use of external learning tools hooked into Brightspace over the past eighteen months. In fact, we are trending toward 200% growth over 2014. [Emphasis added.] That’s a lot of missing data.

This helps create a more complete view of the student. All of their progress and experiences are captured and delivered through high performance reports, comprehensive data visualizations, and predictive analytics.

Let’s think about an example like a student’s experiences with publisher content and applications. Until now, Brightspace was able to capture final grades but wouldn’t track things like practice quizzes or other assessments a student has taken. It wouldn’t know if a student didn’t get past the table of contents in a digital textbook. Now, the new Brightspace Insights captures all of this data and creates a more complete, living, breathing view of a student’s performance.

This is a big milestone for edtech. No other LMS provider is able to capture data across the learning technology ecosystem like this. [Emphasis added.]

I have no problem with D2L crowing about being early to market with a Caliper implementation. But let’s look at how they positioned it. First, they talked about 200% growth in use of external learning tools in 2015. But what does that mean? Going from one tool to three tools? And what kind of tools are they? And what do we know about how they are being used? OK, on that last question, maybe analytics are needed to answer it. But the point is that D2L has a pattern of punctuating every announcement or talk with an impressive-sounding but meaningless statistic to emphasize how awesome they are. Phil recently caught John Baker using…questionable retention statistics in a speech he gave. In that case, the problem wasn’t that the statistic itself was meaningless but rather that there was no reason to believe that D2L had anything to do with the improvement in the case being cited. And then there’s the sleight-of-hand that Phil just called out regarding their LeaP marketing. It’s not as bad as some of the other examples, in my opinion, but still disturbingly consistent with the pattern we are seeing. I am starting to suspect that somebody in the company literally made a rule: Every talk or announcement must have a statistic in it. Doesn’t matter what the statistic is, or whether it means anything. Make one up if you have to, but get it in there.

But back to analytics. The more egregious claim in the quote above is that “no other LMS provider is able to capture data across the learning technology like this [example that we just gave],” because D2L can’t either yet. They have implemented a pre-final draft of a standard which requires both sides to implement in order for it to work. I don’t know of any publishers who have announced they are ready to provide data in the way described in D2L’s example. In fact, there are darned few app providers of any kind who are there yet. (Apparently, Kaltura is one of them.) Again, this could have been presented honestly in a way that made D2L look fantastic. Implementing first puts them in a leadership position, even if that leadership will take a while to pay practical dividends for the customer. But they went for hype instead.

I can’t remember the last time I read one of D2L’s announcements without rolling my eyes. I used to have respect for the company, but now I have to make a conscious effort not to dismiss any of their pronouncements out-of-hand. Not because I think it’s impossible that they might be doing good work, but because they force me to dive into a mountain of horseshit in the hopes of finding a nugget of gold at the bottom. Every. Single. Time. I’m not sure how much of the problem is that they have decided that they need to be disingenuous because they are under threat from Instructure or under pressure from investors and how much of it is that they are genuinely deluding themselves. Sadly, there have been some signs that at least part of the problem is the latter situation, which is a lot harder to fix. But there is also a fundamental dishonesty in the way that these statistics have been presented.

I don’t like writing this harshly about a company—particularly one that I have had reason to praise highly in the past. I don’t do it very often. But enough is enough already.

 

The post 68% of Statistics Are Meaningless, D2L Edition appeared first on e-Literate.

About The D2L Claim Of BrightSpace LeaP And Academic Improvements

Michael Feldstein - Wed, 2015-06-24 16:07

By Phil Hill

Recently I wrote a post checking up on a claim by D2L that seems to imply that their learning platform leads to measurable improvements in academic performance. The genesis of this thread is a panel discussion at the IMS Global conference where I argued that LMS usage in aggregate has not improved academic performance but is important, or even necessary, infrastructure with a critical role. Unfortunately, I found that D2L’s claim from Lone Star was misleading:

That’s right – D2L is taking a program where there is no evidence that LMS usage was a primary intervention and using the results to market and strongly suggest that using their LMS can “help schools go beyond simply managing learning to actually improving it”. There is no evidence presented[2] of D2L’s LMS being “foundational” – it happened to be the LMS during the pilot that centered on ECPS usage.

Subsequently I found a press release at D2L with a claim that appeared to be more rigorous and credible (written in an awful protected web page that prevents select – copy – paste).

D2L Launches the Next Generation of BrightSpace and Strives to Accelerate the Nation’s Path to 60% Attainment

D2L, the EdTech company that created Brightspace, today announces the next generation of its learning platform, designed to develop smarter learners and increase graduation rates. By featuring a new faculty user interface (UI) and bringing adaptive learning to the masses, Brightspace is more flexible, smarter, and easier to use. [snip]

D2L is changing the EdTech landscape by enabling students to learn more with Brightspace LeaP adaptive learning technology that brings personalized learning to the masses, and will help both increase graduation rates and produce smarter learners. The National Scientific Research Council of Canada (NSERC) produced a recent unpublished study that states: “After collating and processing the results, the results were very favourable for LeaP; the study demonstrates, with statistical significance, a 24% absolute gain and a 34% relative gain in final test scores over a traditional LMS while shortening the time on task by 30% all while maintaining a high subjective score on perceived usefulness.”

I asked the company to provide more information on this “unpublished study”, and I got no response.

Hello, Internet search and phone calls – time to do some investigation to see if there is real data to back up claims.

Details on the Study

The Natural Sciences and Engineering Research Council of Canada (NSERC) is somewhat similar to the National Science Foundation in the US – they are a funding agency. When I called them they made it perfectly clear that they don’t produce any studies as claimed; they only fund them. I would have to find the appropriate study and contact the lead researcher. Luckily they shared the link to their awards database, and I did some searching on relevant terms. I eventually found some candidate studies and contacted the lead researchers. It turns out that the study in question was led by none other than Dragan Gasevic, founding program co-chair of the International Conference on Learning Analytics & Knowledge (LAK) in 2011 and 2012, and he is now at the University of Edinburgh.

The grant was one of NSERC’s Engage grants, which look for researchers to team with companies, and Knowillage was the partner – they have an adaptive learning platform. D2L acquired Knowillage in the middle of the study, and they currently offer the technology as LeaP. LeaP is integrated into the main D2L learning platform (LMS).

The reason the study was not published was simply that Dragan was too busy, including his move to Edinburgh, to complete and publish, but he was happy to share information by Skype.

The study was done on an Introduction to Chemistry course at an unnamed Canadian university. Following ~130 students, the study looked at test scores and time to complete, with two objectives reported – from the class midterm and class final. This was a controlled experiment looking at three groupings:

  • A control group with no LMS, using just search tools and loosely organized content;
  • A group using Moodle as an LMS with no adaptive learning; and
  • A group using Moodle as an LMS with Knowillage / LeaP integrated following LTI standards.

Of note, this study did not even use D2L’s core learning platform, now branded as BrightSpace. It used Moodle as the LMS, but the study was not about the LMS – it was about the pedagogical usage of the adaptive engine used on top of Moodle. It is important to call out that to date, LeaP has been an add-on application that works with multiple LMSs. I have noticed that D2L now redirects their web pages that called out such integrations (e.g. this one showing integration with Canvas and this one with Blackboard) to new marketing just talking about BrightSpace. I do not know if this means D2L no longer allows LeaP integration with other LMSs or not. Update 6/25: Confirmed that LeaP is still being actively marketed to customers of other LMS vendors.

The study found evidence that Knowillage / LeaP allows students to have better test scores than students using just Moodle or no learning platform. This finding was significant even when controlling for students’ prior knowledge and for students’ dispositions (using a questionnaire commonly used in Psychology for motivational strategies and skills). The majority of the variability (a moderate effect size) was still explained by the test condition – use of adaptive learning software.

Dragan regrets the research team’s terminology of “absolute gain” and “relative gain”, but the research did clearly show increased test score gains by use of the adaptive software.

The results were quite different between the mid-term (no significant difference between Moodle+LeaP group and Moodle only group or control group) and the final (significant improvements for Moodle+LeaP well over other groups). Furthermore, the Moodle only group and control group with no LMS reversed gains between midterms and finals. To Dragan, these are study limitations and should be investigated in future research. He still would like to publish these results soon.

Overall, this is an interesting study, and I hope we get a published version soon – it could tell us a bit about adaptive learning, at least in the context of Intro to Chemistry usage.

Back to D2L Claim

Like the Lone Star example, I find a real problem with misleading marketing. D2L could have been more precise and said something like the following:

We acquired a tool, LeaP, that when integrated with another LMS was shown to improve academic performance in a controlled experiment funded by NSERC. We are now offering this tool with deep integration into our learning platform, BrightSpace, as we hope to see similar gains with our clients in the future.

Instead, D2L chose to use imprecise marketing language that implies, or allows the reader to conclude that their next-generation LMS has been proven to work better than a traditional LMS. They never come out and say “it was our LMS”, but they also don’t say enough for the reader to understand the context.

What is clear is that D2L’s LMS (the core of the BrightSpace learning platform) had nothing to do with the study, the actual gains were recorded by LeaP integrated with Moodle, and that the study was encouraging for adaptive learning and LeaP but limited in scope. We also have no evidence that the BrightSpace integration gives any different results than Moodle or Canvas or Blackboard Learn integrations with LeaP. For all we know given the scope of the study, it is entirely possible that there was something unique about the Moodle / LeaP integration that enabled the positive results. We don’t know that, but we can’t rule it out, either.

Kudos to D2L for acquiring Knowillage and for working to make it more available to customers, but once again the company needs to be more accurate in their marketing claims.

The post About The D2L Claim Of BrightSpace LeaP And Academic Improvements appeared first on e-Literate.

Intercepting Table Filter Query and Manipulating VO SQL Statement

Andrejus Baranovski - Wed, 2015-06-24 13:30
I’m going to describe one non-declarative use case. Imagine there is a table with filter functionality and you want to intercept the filter items and apply the same criteria to another VO. This other VO should be based on the same DB table, so it can apply the criteria items against that table.

The sample application - AdvancedViewCriteriaApp.zip - implements a fragment with a table component and a chart. The table component can be filtered; the criteria is intercepted and applied to the chart, which is rendered from a different VO with a GROUP BY query. The chart stays in sync and displays data according to the criteria filtered in the table:


In the log, I’m printing out intercepted criteria from the table filter:


Chart is rendered from the SQL query below:


The table filter criteria is intercepted by the overridden method buildViewCriteriaClauses. The criteria clause is constructed here; we just need to select the FilterViewCriteria, the one originating from the table filter. We could apply this criteria straight away to the VO responsible for bringing the chart data. However, this would not work - ADF BC would wrap the original chart SQL query with SELECT * FROM (…) QRSLT WHERE (table filter criteria), and that fails because the table filter criteria is not present in the original chart SQL statement. To make it work, I’m updating the original SQL query for the chart data by updating the WHERE clause part:


In the last step, we need to pass the bind variable values - the ones the user is searching for in the table filter. This can be done from another overridden method - bindParametersForCollection. We have access to the applied bind variables in this method. Again, you should check for the FilterViewCriteria and extract the applied bind variable values. The chart VO will be updated with bind variable definitions created on the fly and assigned the values to search for:


I hope this trick will save you some time if you are going to implement something similar - intercepting the table filter query and applying it to another VO based on the same DB table.

Groovy Time! How to use XML dateTime and duration in BPM 12c

Jan Kettenis - Wed, 2015-06-24 13:27
In this article I show some examples of handling XML dateTime and durations in Groovy, in the context of an Oracle BPM 12c application.

Working with dates and durations in Java has always been painful. Mainly because date and time is a complex thing, with different formats and time zones and all, but I sometimes wonder if it has not been made overly complex. Anyway. Working with XML dates is even more complex because of the limited support by XPath functions. Too bad, because in BPM applications that work with dates this has to be done very often, and as a result I very often see the need to create all kinds of custom XPath functions to mitigate that.

This issue of complexity is no different for Groovy scripting in Oracle BPM 12c. And handling of dates is a typical use case for Groovy scripting, precisely because of this limited support by XPath. Therefore, to get you started (and to help myself some next time) I would like to share a couple of Groovy code snippets for working with XML dates and durations that may be useful. These examples are based on working with the XML dateTime type, and do not deal with the complexity of time zones and different formats. In my practice this is 99% of the use cases that I see.

In my opinion you should still limit using Groovy to handle dates to the minimum, and rather use custom XPath functions, or create a Java library which you can import in Groovy. But when you have to, this just might come in handy.

Instantiate an XML Date

If you have an XML element of type dateTime, you use an XmlCalendar object. An XmlCalendar object with the current time can be instantiated as shown below:

Date now = new Date()
GregorianCalendar gregorianNow = new GregorianCalendar()
gregorianNow.setTime(now)
XmlCalendar xmlDate = XmlCalendarFactory.create(gregorianNow)

Instantiate a Duration and Add it to the Date

To instantiate a duration you use an XmlDuration object. In the code below a duration of one day is added to the date:

XmlDuration xmlDuration = new XmlDuration("P1D")
xmlDate.add(xmlDuration)

The string to provide is of type ISO duration.

The imports to use can also be a pain to find. That actually took me the most time of all, but that can just be me. The ones needed for the above are shown in the following picture (you can get to it by clicking on Select Imports in the top-right corner of the Groovy script).

PaaS Launch and Cloud File Sharing and Collaboration

WebCenter Team - Wed, 2015-06-24 07:28

Thanks for joining us for the big PaaS launch on Monday, June 22nd with Larry Ellison, Thomas Kurian and other senior Oracle and customer executives. Ellison announced more than 24 new services for the Oracle Cloud Platform, a comprehensive, integrated suite of services that makes it easier for developers, IT professionals, business users, and analysts to build, extend, and integrate cloud applications and drive co-existence with the existing on-premise infrastructure. For more details on the announcement, cloud services and customer videos, you can catch the on-demand online replay of the event.

One of the key areas for PaaS is cloud content and collaboration that drives frictionless user collaboration, content sharing and business process automation anywhere, anytime and on any device. As an enterprise, you will well recognize the need to drive workforce productivity and operational efficiency by enabling secure availability and access to content on any device, within and beyond the firewall and connecting it to the business processes and the applications (SaaS and on-premise) to well, get work done. So, beyond just cloud file sharing which is what current Enterprise File Sync and Share (EFSS) solutions do, you need to be able to share and access content in context and then be able to drive business processes using that content. That is true cloud content and collaboration and that is what will drive output and improve productivity in a digital workplace.

Check out this video that we recently released. While it discusses a specific use case tracking the lifecycle from marketing to sales to service, imagine the power of enabling real-time collaboration among employees, a remote workforce, external vendors and partners, and being able to drive output from anywhere, anytime and any device. And let us know which use case you come across most often in your workplace; we look forward to hearing from you.




Base64 encoding/decoding in OSB

Darwin IT - Wed, 2015-06-24 04:28
Of course there are several Java examples for base64 encoding on the internet, and there are nearly as many encoding implementations in different environments. But which one works in WebLogic/OSB 11g? And to implement those examples, compile them and jar them, I found myself on a quest for the necessary jars. Of course you can refer to the weblogic.jar in your project or ant file, but that is a little too much, I think. I'd like to find and deliver the bare minimum of jars needed for my project.

For my latest customer/project I came up with this class:

package nl.alliander.osb.base64;

import java.io.IOException;
import java.io.InputStream;
import java.io.ByteArrayInputStream;
import weblogic.utils.encoders.BASE64Decoder;
import weblogic.utils.encoders.BASE64Encoder;
import java.nio.charset.Charset;

public class Base64EncoderDecoder
{
    private static final Charset UTF8_CHARSET;

    public static void main(final String[] args) {
        // Not used at runtime; the class is called from OSB via Java callouts.
    }

    // Encodes the given bytes as a base64 string.
    public static String encode(final byte[] bytes) {
        final BASE64Encoder encoder = new BASE64Encoder();
        final String encodedString = encoder.encodeBuffer(bytes);
        return encodedString;
    }

    // Returns the size of the (decoded) content in bytes.
    public static int getLength(final byte[] bytes) {
        int length = bytes.length;
        return length;
    }

    // Decodes a base64 string back into the original bytes.
    public static byte[] decode(final String b64Document) throws IOException {
        final BASE64Decoder decoder = new BASE64Decoder();
        final InputStream inputStream = new ByteArrayInputStream(b64Document.getBytes(Base64EncoderDecoder.UTF8_CHARSET));
        final byte[] decodedBytes = decoder.decodeBuffer(inputStream);
        return decodedBytes;
    }

    static {
        UTF8_CHARSET = Charset.forName("UTF-8");
    }
}

And if you use JDeveloper 11g as an IDE, the only lib you need to compile this is com.bea.core.utils.full_1.9.0.1.jar (or com.bea.core.utils.full_1.10.0.0.jar if you use OEPE version 11.1.1.8). The jars can be found in ${oracle.home}/modules, where ${oracle.home} refers to your JDeveloper 11g or OEPE Middleware installation.
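To sanity-check the class outside OSB, a small test class like the one below can be used. This is my own scaffold (the class name and sample string are made up), compiled against the same utils jar; it is not part of the original project:

package nl.alliander.osb.base64;

public class Base64EncoderDecoderTest {

    public static void main(final String[] args) throws Exception {
        // Sample payload; in OSB this would be the binary attachment content.
        final byte[] original = "Hello OSB".getBytes("UTF-8");

        // Encode the bytes and report the decoded length in bytes.
        final String encoded = Base64EncoderDecoder.encode(original);
        System.out.println("Encoded: " + encoded);
        System.out.println("Length : " + Base64EncoderDecoder.getLength(original));

        // Decode again and verify the round trip.
        final byte[] roundTrip = Base64EncoderDecoder.decode(encoded);
        System.out.println("Decoded: " + new String(roundTrip, "UTF-8"));
    }
}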

By the way, in my OSB project I need to process attachments in my message (SOAP with Attachments), where I need to upload the documents to a ContentServer. Unfortunately the ContentServer needs the file size (it apparently does not determine it by base64-decoding the content). So I added the getLength() method to determine it with a Java callout, similar to the base64-encode callout.


The input of the methods is a variable like 'attachmentBin', resulting from an Assign with an expression like:
$attachments/ctx:attachment[ctx:Content-ID/text()=$contentId]/ctx:body/ctx:binary-content


SharePoint 2013 - Office 365 & Power BI

Yann Neuhaus - Wed, 2015-06-24 03:00


Quick reminder of what is SharePoint 2013 and Office 365

SharePoint 2013

SharePoint 2013 is a collaborative platform that allows organizations to increase the efficiency of their business processes.

link: https://technet.microsoft.com/en-us/library/cc303422.aspx

Office 365 with SharePoint 2013

This is the Online version of SharePoint 2013.

When you sign in to Office 365, or your organization’s corporate intranet site, you’ll see links to Newsfeed, OneDrive, and Sites in the global navigation bar.
These are your entry points into SharePoint.

Organizations use SharePoint to create websites. You can use it as a secure place to store, organize, share, and access information from almost any device.
All you need is a web browser, such as Internet Explorer, Google Chrome, or Mozilla Firefox.

link: https://support.office.com/en-za/article/Get-started-with-SharePoint-2013-909ec2f0-05c8-4e92-8ad3-3f8b0b6cf261?ui=en-US&rs=en-ZA&ad=ZA

 

What is Power BI?

Power BI is a Microsoft tool that gives you "visual power": it lets you build rich visuals to collect and organize the data you care about most, so you can focus on it and keep track of your business activity.

 

WHAT FOR?

Depending on the area of concern, Power BI can help as follows:

  • MARKETING: 
    • Market Smarter: easily monitor and analyze your marketing campaigns and efficiently allocate your resources to the right channels, all in one place.
    • Monitor your campaigns: get a view of your campaigns' efficacy and the performance of your tactics.
    • Talk to the right customers: demographic filters, customer lifetime value, and more help you get specific views of your customers' activity.


  • SALES:
    • Crush your quotas: Used with Microsoft Dynamics CRM or Salesforce.com, Power BI extends and enhances these services with instant insight into your pipeline.
    • Sales management: create dashboards that give more visibility into results, so you can learn from past deals and set better targets.
    • Sales representative: Understand how your previous deals performed so you can execute on future deals more efficiently.



  • CUSTOMER SUPPORT: With Power BI, you will be able to track customer support activities, get a better view and understanding of them, and drive the team to success.


  • DECISION MAKER: By bringing all data together in one dashboard shared with your team, Power BI helps you make the right decisions on time.
  • HUMAN RESOURCES: All information related to employees sits on the same dashboard, which makes your HR meetings and employee reviews much easier.


 

CONNECTING DATA

Dashboards, reports, and datasets are at the heart of Power BI Preview. You can connect to or import datasets from a variety of sources:

  • Excel
  • GitHub
  • Google Analytics
  • Marketo
  • Microsoft Dynamics CRM
  • Microsoft Dynamics Marketing
  • Power BI Designer file
  • Salesforce
  • SendGrid
  • SQL Server Analysis Services
  • Zendesk


 

POWER BI DESIGNER

Power BI Designer is a tool with which you can create robust data models and compelling reports, giving you a solid foundation for your business intelligence activities.


 

POWER BI MOBILE 


Stay connected to your data from anywhere, anytime with the Power BI app for Windows and iOS.

VERSIONS

There are 2 versions:

  • Power BI: FREE
  • Power BI Pro: licensed ($9.99 per user/month)

 

Microsoft Power BI is a user-friendly, intuitive, cloud-based self-service BI solution for all your data needs, right in your own Excel, including different tools for data extraction, analysis and visualization.

 

 

PFCLScan Updated and Powerful features

Pete Finnigan - Wed, 2015-06-24 02:20

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, some new content and more. We are working to release another service update in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Wed, 2015-06-24 02:20

It has been a few weeks since my last blog post, but don't worry, I am still interested in blogging about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

Empty Leaf Blocks After Rollback Part II (Editions of You)

Richard Foote - Wed, 2015-06-24 01:35
In my last post, I discussed how both 1/2 empty and totally empty leaf blocks can be generated by rolling back a bulk update operation. An important point I made within the comments of the previous post is that almost the exact scenario would have taken place had the transaction committed rather than rolled back. A […]
Categories: DBA Blogs