BI & Warehousing

Modern Customer Experience 2018 was Legendary

Tim Dexter - Fri, 2018-05-18 17:53

During his keynote at Modern Customer Experience 2018, Des Cahill, Head CX Evangelist, stated that CX should stand for Continuous Experimentation. He encouraged the 4,500 enthusiastic marketing, customer service, sales, and commerce professionals in attendance to try new strategies, take risks, strive to be remarkable, and triumph through sheer determination.

Casey Neistat echoed Des, challenging us to “do what you can’t,” while best-selling author Cheryl Strayed inspired us to look past our fears and be brave. “Courage isn’t success,” she reminded us, “it’s doing what’s hard regardless of the outcome.”

CX professionals today face numerous challenges: the relentless rise of customer expectations, the accelerating pace of innovation, evolving regulations like GDPR, pressure to increase ROI, plus the constant push to raise the bar. Modern Customer Experience not only inspired attendees to become the heroes of their organizations, it armed each of them with the tools to do so.

If you missed Carolyne Matseshe-Crawford, VP of Fan Experience at Fanatics, talking about how her company’s culture pervades the entire customer experience, or Magen Hanrahan, VP of Product Marketing at Kraft Heinz, on her obsession with data-driven marketing tactics, give their sessions a watch. And don’t miss Comcast’s Executive VP and Chief Customer Experience Officer, Charlie Herrin, who wants to use artificial intelligence to build proactive customer experience and dialogue into Comcast’s products themselves.

The Modern Customer Experience X Room showcased CX innovation, like augmented and virtual reality, artificial intelligence, and the Internet of Things. But it wasn’t all just mock-ups and demos: a Mack Truck, a Yamaha motorcycle, and an Elgin Street Sweeper were on display, showcasing how Oracle customers put innovation to use to create legendary customer experiences.

Attendees were able to let off some steam during morning yoga and group runs. They relived the 90s with Weezer during CX Fest, and our Canine Heroes from xxxxx were a highlight of everyone’s day.

But don’t just take it from us. Here’s what a few of our attendees had to say about the event.

“Modern Customer Experience gives me the ability to learn about new products on the horizon, discuss challenges, connect with other MCX participants, learn best practices and understand we’re not alone in our journey.” – Matt Adams, Sales Cloud Manager, ArcBest

 “Modern Customer Experience really allows me to do my job more effectively. Without it, I don’t know where I would be! It’s the best conference of the year.” – Joshua Parker, Digital Marketing and Automation Manager, Rosetta Stone

We’re still soaking it all in. You can watch all the highlights from the Modern Customer Experience keynotes on YouTube, and peruse the event’s photo slideshow. Don’t forget to share your images on social media with #ModernCX, and sign up for alerts when registration for Modern Customer Experience 2019 opens!

Categories: BI & Warehousing

The Most Important Stop on Your Java Journey

Tim Dexter - Fri, 2018-05-18 14:52

Howdy, Pardner. Have you moseyed over to JavaRanch lately? Pull up a stool at the OCJA or OCJP Wall of Fame and tell your tale or peruse the tales of others. 

Ok - I'm not so great at the cowboy talk, but if you're serious about a Java career and haven't visited JavaRanch, you are missing out! 

JavaRanch, a self-proclaimed "friendly place for Java greenhorns [beginners]," was created in 1997 by Kathy Sierra, co-author of at least 5 Java guides for Oracle Press. The ranch was taken over in subsequent years by Paul Wheaton, who continues to run the site today.

In addition to a robust collection of discussion forums about all things Java, JavaRanch provides resources to learn and practice Java, book recommendations, and resources to create your first Java program and test your Java skills.

One of our favorite features of JavaRanch remains the Walls of Fame! This is where you can read the personal experiences of other candidates certified on Java. Learn from their processes and their mistakes. Be inspired by their accomplishments. Share your own experience. 

Visit the Oracle Certified Java Associate Wall of Fame

Visit the Oracle Certified Java Professional Wall of Fame

Get the latest Java Certification from Oracle

Oracle Certified Associate, Java SE 8 Programmer

Oracle Certified Professional, Java SE 8 Programmer

Oracle Certified Professional, Java SE 8 Programmer (upgrade from Java SE 7)

Oracle Certified Professional, Java SE 8 Programmer (upgrade from Java SE 6 and all prior versions)

Related Content

Test Your Java Knowledge With FREE Sample Questions

Program Your Future With Java

Categories: BI & Warehousing

What's New with Oracle Certification - May

Tim Dexter - Fri, 2018-05-18 14:49
Stay up to date with the Oracle Certification Program. Keep informed about new exams released into production, get information on current promotions, and learn about new program announcements.

New Exams and Certifications

Oracle Mobile Cloud Enterprise 2018 Associate Developer | 1Z0-927: This certification covers implementation topics for related Oracle PaaS services such as: Visual Builder Cloud Service, Java Cloud Service, Developer Cloud Service, Application Container Cloud Service, and Container Native Apps. This certification validates understanding of the Application Development portfolio and the capacity to configure the services.

Oracle Management Cloud 2018 Associate | 1Z0-930: Passing this exam demonstrates the skills and knowledge to architect and implement Oracle Management Cloud. This individual can configure Application Performance Monitoring, Oracle Infrastructure Monitoring, Oracle Log Analytics, Oracle IT Analytics, Oracle Orchestration, Oracle Security Monitoring and Analytics, and Oracle Configuration and Compliance.

Oracle Cloud Security 2018 Associate | 1Z0-933: Passing this exam validates understanding of Oracle Cloud Security portfolio and capacity to configure the services. This certification covers topics such as: Identity Security Operations Center Framework, Identity Cloud Service, CASB Cloud Service, Security Monitoring and Analytics Cloud Service, Configuration and Compliance Service, and services Architecture and Deployment.

Oracle Data Integration Platform Cloud 2018 Associate | 1Z0-935: Passing this exam validates understanding of Oracle Application Integration to implement the service. This certification covers topics such as: Oracle Cloud Application Integration basics, Application Integration: Oracle Integration Cloud (OIC), Service-Oriented Architecture Cloud Service (SOACS), Integration API Platform Cloud Service, Internet of Things - Cloud Service (IOTCS), and Oracle's Process Cloud Service.

Oracle Analytics Cloud 2018 Associate | 1Z0-936: Passing this exam validates the knowledge required to perform provisioning, build dimensional models and create data visualizations. The certified professional can use Advanced Analytics capabilities, create a machine learning model and configure Oracle Analytics Cloud Essbase.

Explore All Certifications

 

How Does the DBA Keep Their Role Relevant? 

By having the skills to meet the new demands for business optimization, along with a reputation for continuous learning and improvement. Check out how training + certification keeps a DBA relevant. Read the full article.

 

Benefits of Upgrading Your OCA certification to Database 12c Release 2

Building upon the competencies in the Oracle Database 12c OCA certification, the Oracle Certified Professional (OCP) for Oracle Database 12c includes the advanced knowledge and skills required of top-performing database administrators, including the development and deployment of backup, recovery and Cloud computing strategies. Find out how to upgrade with this exam!

Categories: BI & Warehousing

Oracle BI Publisher 12.2.1.4 Now Available !!

Tim Dexter - Fri, 2018-04-27 12:35

Last week Oracle BI Suite 12.2.1.4.0 was released, and it includes Oracle BI Publisher 12.2.1.4.0.

The links to download the installation files, documentation and release notes are available from the BI Publisher OTN home page. You can also download the software from the Oracle Software Delivery Cloud.

The new features introduced in the Analytics Cloud Suite for both Data Visualization and BI Publisher are now available on-premises. You can find the list of new features for Data Visualization here and for BI Publisher here.

Upgrading Oracle Business Intelligence from 12.2.1.x to 12.2.1.4 is an in-place upgrade performed by the Upgrade Assistant. Refer to the upgrade guide for details on upgrading from a previous 12c version.

 

Stay tuned for more updates.

Have a great weekend!

Categories: BI & Warehousing

Adding Native Pivot Charts and Tables to your Excel Reports!!

Tim Dexter - Sun, 2018-04-01 23:59

A report in Excel format is a very common requirement, and BI Publisher can generate Excel output using an RTF, XSL or Excel template. An Excel template is recommended when the requirement is to create pixel-perfect column widths, use built-in Excel functions, create multi-sheet output, handle preceding zeroes in data, maintain data formatting, manage a high number of data columns, and so on.

How about adding native charts and pivot tables to the Excel report? Well, Excel templates can handle that too.

There is no wizard in the Excel Template Builder to create charts or pivot tables, but you can certainly include Excel pivot charts and pivot tables in your report using native MS Excel features. Here is a step-by-step guide:

 

Step 1: Create Excel Template to build data for Pivot Chart & Pivot Table

Use the Excel Template Builder to create an Excel template.

Load sample XML data and add data column header names.

Use "Insert Field" option from BI Publisher Ribbon Menu and create data place holders as shown below.

You will see an interim dialog box from the Template Builder stating that a metadata sheet will be created. Click OK.

 

Add looping of data using Insert Repeating Group. Select the For Each entry at the repeating node level.

 

Preview the output. This will bring all records into the Excel sheet in a separate .xls output file.

 

Step 2: Create Pivot Chart & Pivot Table

You can close the output .xls file and stay in the Excel template. Now select all the data columns to be used in the pivot chart and table. You can click on the column headers and select the entire columns to be included, or just select the table with the column headers and the single row of data placeholders. From the Excel Insert menu, select the PivotChart & PivotTable option.

 

In the "Create PivotTable" dialog box, keep the "Select a table or range" option selected and leave the Table/Range that appears by default based on your selection.

You can choose to create the pivot chart and pivot table in a new worksheet (recommended). Click OK.

This will add a new sheet to the Excel file and insert a pivot table and chart placeholder, with the PivotTable fields in the right panel.

Here you can select the fields for the Pivot table and chart, to be depicted as Axis, Legend and Values. In this example we have included Product Type, Product, LOB and Brand as Axis and Revenue as Values.

Please note that by default the function selected under Values is Count. Select the drop-down next to the Count function and choose Value Field Settings, where you can change this to the Sum function.

 

 

One more thing to note is the presence of field buttons in the chart. You can hide them: with the pivot chart selected, go to the Analyze menu in the ribbon and, under the Show/Hide section, choose "Hide All Field Buttons".

Finally, the template will look like this:

 

Step 3: Include dynamic data generated by BI Publisher for Pivot Chart & Pivot Table

Right-click on the pivot chart, select PivotChart Options, and go to the Data tab. Here, select the option "Refresh data when opening the file". This will bring the data dynamically into the pivot chart and pivot table.

 

 

You can preview the Excel output, and you will see the pivot table and chart displaying dynamic data.

You will notice blank data appearing in the pivot table and chart. This is due to the way the looping works against the dynamic data. You can hide it by filtering out the blank data from the parent field in the pivot table of the output Excel file. In this example, we will remove the blank data from the Product field, and the complete blank section will be removed without affecting the rest of the data. To do this, just hover over Product in the right-side pane under PivotChart Fields and click the down arrow. This opens the filter options for the Product field; uncheck the Blank value in the filter list.

 

So, this completes the template design, and the final output will look as shown below:

You can further include Excel functions and formulas within these pivot tables and charts as necessary for your requirements. You can even change the chart type, style, etc. to create the most appropriate visual representation of the data. You can upload the Excel template to the BI Publisher server and run it against live data, and you can include as many sheets with different pivot charts and tables as required for your report.

Also note that an Excel template can be run against any data source type in the BI Publisher data model. Therefore, you can use a BI analysis, or even run a BI JDBC SQL query against the RPD layer, and bring complex calculations and aggregations in as part of your data.

 

Hope this was helpful. If you want to check the sample template and data, download it from here.

Have a great day!!

Categories: BI & Warehousing

Happy 20th Anniversary to Applied OLAP!

Tim Tow - Sun, 2018-03-18 13:08

Today, March 18, 2018, is the 20-year anniversary of our incorporation! It has been a long journey since that time; here are some of the highlights:
  • 1998 - We were a one-man shop and wrote a reporting and budgeting application for a customer in New York. I spent about 150 nights that year on the road.
  • 1999 - ActiveOLAP for Essbase was released and we earned the trust of two of our long-term customers. It was during the next couple of years that I traveled to the West Coast about 35 times in one year.
  • 2003 - Portions of our web-service technology were acquired by Hyperion Solutions and we wrote the Hyperion Objects product based on that technology.
  • 2007 - The Dodeca Spreadsheet Management System was released.
  • 2014 - We hired our first resource to focus solely on sales. Prior to that, we marketed our software via 'word of mouth'.
  • 2016 - The Dodeca Excel Add-In for Essbase was released and we acquired the DrillBridge product.
During this time, we have grown the company organically without outside investment. While this strategy meant we had slower growth, the benefit is that it has allowed us to focus solely on the needs of our customers and not the needs of 'investors'. It also meant that we 'ate a lot of beans' in the early days. Those were great lessons in the value of a dollar that we carry with us today in the value of the software we provide to our customers.

Thank you to all of our customers. We feel lucky to work with each and every one of you and we continue to learn from each of you. We pledge to continue working hard to make your companies successful.

Tim Tow
Founder and President
Applied OLAP, Inc
Categories: BI & Warehousing

Rittman Mead at OUG Norway 2018

Rittman Mead Consulting - Mon, 2018-03-05 04:45
Rittman Mead at OUG Norway 2018

This week I am very pleased to represent Rittman Mead at the Oracle User Group Norway Spring Seminar 2018, delivering two sessions about Oracle Analytics, Kafka, Apache Drill and Data Visualization, both on-premises and in the cloud. The OUGN conference is unique due to both the really high level of presentations (see the related agenda) and the fascinating location: the Color Fantasy cruiseferry sailing from Oslo to Kiel and back.

Rittman Mead at OUG Norway 2018

I'll be speaking on Friday the 9th at 9:30AM in Auditorium 2 about Visualizing Streams: how the world of Business Analytics has changed in recent years and how to successfully build a modern analytical platform including Apache Kafka, Confluent's recently announced KSQL and Oracle's Data Visualization.

Rittman Mead at OUG Norway 2018

On the same day at 5PM, again in Auditorium 2, I'll be delivering the session OBIEE: Going Down the Rabbit Hole, providing details, built on experience, of how diagnostic tools, non-standard configuration and well-defined processes can enhance, secure and accelerate any analytical project.

If you’re at the event and you see me in sessions, around the conference or during my talks, I’d be pleased to speak with you about your projects and answer any questions you might have.

Categories: BI & Warehousing

Spring into action with our new OBIEE 12c Systems Management & Security On Demand Training course

Rittman Mead Consulting - Mon, 2018-02-19 05:49

Rittman Mead are happy to release a new course to the On Demand Training platform.

The OBIEE 12c Systems Management & Security course is the essential learning tool for any developers or administrators who will be working on the maintenance & optimisation of their OBIEE platform.

Baseline Validation Tool

View lessons and live demos from our experts on the following subjects:

  • What's new in OBIEE 12c
  • Starting & Stopping Services
  • Managing Metadata
  • System Preferences
  • Troubleshooting Issues
  • Caching
  • Usage Tracking
  • Baseline Validation Tool
  • Direct Database Request
  • Write Back
  • LDAP Users & Groups
  • Application Roles
  • Permissions

Get hands-on with the practical version of the course, which comes with an OBIEE 12c training environment and 9 lab exercises.
System Preferences

Rittman Mead will also be releasing a theory version of the course. This will not include the lab exercises, but it does include each of the lessons and demos that you'd get as part of the practical course.

Course prices are as follows:

OBIEE 12c Systems Management & Security - PRACTICAL - $499

  • 30 days access to lessons & demos
  • 30 days access to OBIEE 12c training environment for lab exercises
  • 30 days access to Rittman Mead knowledge base for Q&A and lab support

OBIEE 12c Systems Management & Security - THEORY - $299

  • 30 days access to lessons & demos
  • 30 days access to Rittman Mead knowledge base for Q&A

To celebrate the changing of seasons we suggest you Spring into action with OBIEE 12c by receiving a 25% discount on both courses until 31st March 2018 using voucher code:

ODTSPRING18

Access both courses and the rest of our catalog at learn.rittmanmead.com

Categories: BI & Warehousing

Confluent Partnership

Rittman Mead Consulting - Mon, 2018-02-12 09:14

Confluent

Here at Rittman Mead, we are continually broadening the scope and expertise of our services to help our customers keep pace with today's ever-changing technology landscape. One significant change we have seen over the last few years is the increased adoption of data streaming. These solutions can help solve a variety of problems, from real-time data analytics to forming the entire backbone of an organisation's data architecture. We have worked with a number of different technologies that can enable this, however, we often see that Kafka ticks the most boxes.

This is reflected in some of the recent blog posts you will have seen, like Tom Underhill hooking up his gaming console to Kafka and Paul Shilling’s piece on collating sailing data. Both of these posts use day-to-day, real-world examples to demonstrate some of the concepts behind Kafka.

In conjunction with these, we have been involved in more serious proofs of concept and projects with clients involving Kafka, which no doubt we will write about in time. To help us further our knowledge and also immerse ourselves in the developer community, we have decided to become Confluent partners. Confluent was founded by the people who initially developed Kafka at LinkedIn, and it provides a managed and supported version of Kafka through its platform.

We chose Confluent as we saw them as the driving force behind Kafka, plus the additions they are making to the platform such as the streaming API and KSQL are opening a lot of doors for how streamed data can be used.

We look forward to growing our knowledge and experience in this area and the possibilities that working with both Kafka and Confluent will bring us.

Categories: BI & Warehousing

Real-time Sailing Yacht Performance - Getting Started (Part 1)

Rittman Mead Consulting - Fri, 2018-01-19 03:54

In this series of articles, I intend to look at collecting and analysing our yacht’s data. I aim to show how a number of technologies can be used to achieve this and the thought processes around the build and exploration of the data. Ultimately, I want to improve our sailing performance with data. This is not a new concept for professional teams, but I have a limited amount of hardware and funds (unlike Oracle, it seems), so it's time for a bit of DIY!

In this article, I introduce some concepts and terms, then start cleaning and exploring the data.

Background

I have owned a Sigma 400 sailing yacht for over twelve years. She is used primarily for cruising, but every now and then we do a bit of offshore racing.

In the last few years we have moved from paper charts and a very much manual way of life to electronic charts and iOS apps for navigation.

In 2017 we started to use weather modelling software to predict the most optimal route of a passage taking wind, tide and estimated boat performance (polars) into consideration.

The predicted routes are driven in part by a boat's polars. The original "polars" are a set of theoretical calculations created by the boat’s designer, defining what the boat should do at each wind speed and angle of sailing. Polars give us a plot of the boat's speed for a given true wind speed and angle. This in turn tells us the optimal speed the boat could achieve at any particular wind angle and wind speed (not taking into consideration helming accuracy, sea state, condition of sails and sail trim - it may be possible for me to derive different polars for different weather conditions). Fundamentally, polars will also give us an indication of the most optimal angle to the wind to get to our destination (velocity made good).

The polars we use at the moment are based on a similar boat to the Sigma 400 but are really a best guess. I want our polars to be more accurate. I would also like to start tracking the boat's performance in real time and post-passage for further analysis.

The purpose of this blog is to use our boat's instrument data to create accurate polars for a number of conditions and get a better understanding of our boat's performance at each point of sail. I would also like to see what can be achieved with the AIS data. I intend to use Python to check and decode the data, and I will look at a number of tools to store, buffer, visualise and analyse the outputs.

So let’s look at the technology on-board.

Instrumentation Architecture

The instruments are by Raymarine. We have a wind vane, GPS, speed sensor, depth sounder and sea temperature gauge, electronic compass, gyroscope, and rudder angle reader. These are all fed into a central course computer. Some of the instrument displays share and enrich the data calculating such things as apparent wind angles as an example. All the data travels through a proprietary Raymarine messaging system called SeaTalk. To allow Raymarine instruments to interact with other instrumentation there is an NMEA-0183 port. NMEA-0183 is a well-known communication protocol and is fairly well documented so this is the data I need to extract from the system. I currently have an NMEA-0183 cable connecting the Raymarine instruments to an AIS transponder. The AIS transponder includes a Wireless router. The wireless router enables me to connect portable devices to the instrumentation.

The first task is to start looking at the data and get an understanding of what needs to be done before I can start analysing.

Analysing the data

There is a USB connection from the AIS hub; however, the instructions do warn that this should only be used during installation. I did spool some data from the USB port and it seemed to work OK. I could connect directly to the NMEA-0183 output, however that would require me to do some wiring, so I will look at that if the reliability of the wireless causes issues. The second option was to use the wireless connection. I start by spooling the data to a log file using nc (nc is basically OSX's version of netcat, a TCP and UDP tool).

Spooling the data to a log file

nc  -p 1234 192.168.1.1 2000 > instrument.log

The spooled data gave me a clear indication that there would need to be some sanity checking of the data before it would be useful. The data is split into a number of different message types, each with a different structure. I will convert these messages into a JSON format so that the messages are more readable downstream. In the example below, the timestamps displayed are attached using awk, but my Python script will handle any enrichment as I build out.

The data is comma separated, which makes things easy, and there are a number of good websites that describe the contents of the messages. Looking at the data using a series of awk commands, I can clearly identify three main types of messages: GPS, AIS and integrated instrument messages. Each message ends in a two-digit hex checksum that can be validated by XOR'ing the characters of the message.
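
Here is a minimal sketch of that XOR check (the checksum is calculated over everything between the leading '$' and the '*'); the full test function I use later in the post follows the same idea, with extra handling for corrupted lines:

# Minimal sketch: validate the checksum of a single NMEA-0183 sentence.
# The checksum is the XOR of every character between '$' and '*',
# compared against the two hex digits that follow the '*'.
def nmea_checksum_ok(sentence):
    body, _, received = sentence.strip().partition("*")
    calculated = 0
    for ch in body[1:]:          # skip the leading '$'
        calculated ^= ord(ch)
    return format(calculated, "02X") == received.upper()

# For example, nmea_checksum_ok("$IIMWV,180.0,R,3.7,N,A*30") returns True
# for the first wind sentence shown below.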

Looking at example wind messages

We get two messages related to the wind, true and apparent; the data is the same because the boat was stationary.

$IIMWV,180.0,R,3.7,N,A*30
$IIMWV,180.0,T,3.8,N,A*30

These are Integrated Instrument Mast Wind Vane (IIMWV) messages (I have made an assumption about the meaning of M, so if you are an expert in these messages feel free to correct me ;-) ).

These messages break down to:

  1. $IIMWV II Talker, MWV Sentence
  2. 180.0 Wind Angle 0 - 359
  3. R Relative (T = True)
  4. 3.7 Wind Speed
  5. N Wind Speed Units - Knots (K = KPH, M = MPH)
  6. A Status (A= Valid)
  7. *30 Checksums

And in English (ish)

180.0 Degrees Relative, wind speed 3.7 Knots.

Example corrupted message

$GPRMC,100851.00,A,5048.73249,N,00005.86148,W,0.01**$GPGGA**,100851.00,5048.73249,N,00005.8614$GPGLL,5048.73249,N,00005.86148,W,100851.0

It looks like the message failed to get a new line. I noticed a number of other types of incomplete or corrupted messages, so checking them will be an essential part of the build.

Creating a message reader

I don't really want to sit on the boat building code. I need to be doing this while traveling and at home when I get time. So, spooling half an hour of data to a log file gets me started. I can use Python to read from the file and, once up and running, spool the log file to a local TCP/IP port and read it using the Python socket library.

Firstly, I read the log file and loop through the messages, checking each message to see if it's valid using the checksum and line length. I used this to log the number of messages in error, etc. I have posted the test function below; I'm sure there are better ways to write the code, but it works.

#DEF Function to test message
def is_message_valid (orig_line):

  #check if the checksum is valid
  #set variables
  x = 1
  check = 0
  received_checksum = 0
  line_length = len(orig_line.strip())

  while (x < line_length):
    current_char = orig_line[x]

    #checksum is always the two chars after the *
    if current_char == "*":
      #guard against a line truncated right after the *
      if (x + 2) < len(orig_line):
        received_checksum = orig_line[x+1] + orig_line[x+2]

      #check where we are - if there is more to decode after the
      #checksum then the line is suspect, so force a mismatch
      if line_length > (x + 3):
        check = 0

      #no need to continue to the end of the
      #line - either error or we have the checksum
      break

    check = check ^ ord(current_char)
    x = x + 1

  #"02X" pads single-digit checksums with a leading zero
  if format(check, "02X") == received_checksum:
    #substring the new line for printing
    #print "Processed nmea line >> " + orig_line[:-1] + " Valid message"
    _Valid = 1
  else:
    #substring the new line for printing
    _Valid = 0

  return _Valid

Now for the translation of messages. There are a number of example Python packages on GitHub that translate NMEA messages, but I am only currently interested in specific messages and I also want to build appropriate JSON, so I feel I am better off writing this from scratch. Python has JSON libraries, so it is fairly straightforward once the message is defined. I start by looking at the wind and depth messages. I'm not currently seeing any speed messages, hopefully because the boat wasn't moving.

import json

def convert_iimwv_json (orig_line):
  #iimwv wind instrumentation

  column_list = orig_line.split(",")

  #the star separates the checksum from the status
  status_check_sum = column_list[5].split("*")
  checksum_value = status_check_sum[1]

  json_str = {'message_type' : column_list[0],
    'wind_angle' : column_list[1],
    'relative' : column_list[2],
    'wind_speed' : column_list[3],
    'wind_speed_units' : column_list[4],
    'status' : status_check_sum[0],
    'checksum' : checksum_value[0:2]}

  #round-trip through the json library to check the structure serialises
  json_dmp = json.dumps(json_str)
  json_obj = json.loads(json_dmp)

  return json_str

I now have a way of checking, reading and converting the messages to JSON from a log file. Switching from reading a file to using the Python socket library, I can read the stream directly from a TCP/IP port (a small sketch of this follows the nc example below). Using nc, it's possible to simulate the messages being sent from the instruments by piping the log file to a port.

Opening port 1234 and listening for terminal input

nc -l 1234
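
To pull these pieces together, here is a minimal sketch of the socket-based reader. It reuses the two functions above; the host and port values are assumptions, either the AIS hub address from the earlier nc example or localhost and port 1234 when replaying the log file:

# Minimal sketch: read NMEA sentences from a TCP/IP stream and convert
# valid wind messages to JSON using the functions defined earlier.
# Host/port are assumptions - point them at the AIS hub (192.168.1.1:2000)
# or at localhost:1234 when replaying the log file through nc.
import socket

def read_stream(host="192.168.1.1", port=2000):
    sock = socket.create_connection((host, port))
    buffer = ""
    try:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            buffer += data.decode("ascii", errors="ignore")
            # only process complete lines; keep any partial sentence in the buffer
            while "\n" in buffer:
                line, buffer = buffer.split("\n", 1)
                if not is_message_valid(line):
                    continue
                if line.startswith("$IIMWV"):
                    print(convert_iimwv_json(line))
    finally:
        sock.close()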

Having spoken to some experts from Digital Yachts, it may be that the missing messages are because Raymarine SeaTalk is not transmitting an NMEA message for speed and a number of other readings. The way I have wired up the NMEA inputs and outputs to the AIS hub may also be causing the doubling up of messages and the apparent corruptions. I need more kit: a bi-directional SeaTalk to NMEA converter.

In the next article, I discuss the use of Kafka in the architecture. I want to buffer all my incoming raw messages; if I store everything incoming, I can build out the analytics over time, i.e. as I decode each message type. I will also set about creating a near real-time dashboard to display the incoming metrics. The use of Kafka will give me scalability in the model. I'm particularly thinking of the Round the Island Race: 1,800 boats, a good number of which will be transmitting AIS data.


Real-time Sailing Yacht Performance - stepping back a bit (Part 1.1)

Real-time Sailing Yacht Performance - Kafka (Part 2)

Categories: BI & Warehousing

Using MDX for Generated Members in Essbase Reports

Tim Tow - Thu, 2018-01-11 18:30

There are times when Essbase users may need to see an ad-hoc collection of members aggregated together in Essbase, and that isn’t always an easy task.  If it were an aggregation that is needed on a recurring basis, the Essbase administrator may add an alternate hierarchy to assist.  Other times, users might just create a spreadsheet with the desired members in different rows or columns and use Excel formulas to add them together.  In this blog post, I will cover a third option, the use of MDX to create dynamically-generated members, how to run them in Smart View, and how to make them much easier to use in Dodeca.

In order to illustrate how dynamically-generated members can be used, let’s consider an example using the Sample Basic database.  Here is a simple quarterly income statement query that I will use as the basis for this blog post:

SELECT
    {[Year].Children, Year} on COLUMNS,
    Hierarchize(Descendants([Profit]), POST) ON ROWS
FROM 
    Sample.Basic
WHERE 
    ([Market].[New York], [Product].[Colas], Actual)

The results from this simple query look like this:



This MDX is pretty straightforward, but what if you wanted to see how New York and Connecticut would look if they were combined?  This is exactly the kind of question that a generated member can answer for you.

Generated members in MDX are created using the WITH MEMBER clause.  Moreover, the generated member can then be used anywhere a normal member can be used, even in a slicer dimension (or what we would call a ‘page field’ in the classic Essbase add-in or a point-of-view in Smart View).  Here is the query modified to use the new generated member:

WITH MEMBER
    [Market].[SelectedMarkets] AS 'SUM({[New York], [Connecticut]})'
SELECT
    {[Year].Children, Year} on COLUMNS,
    Hierarchize(Descendants([Profit]), POST) ON ROWS
FROM
    Sample.Basic
WHERE
    ([Market].[SelectedMarkets], Colas, Actual)

The results from this query look like this:


So far, so good, but there are a couple of things to note.  First, the member displayed in the POV is not a real member; that is to be expected.  Second, you cannot refresh the query as an ad-hoc analysis; the dynamically generated member name will be replaced with the dimension member name.

To go even further, what if you want to have multiple generated members?  In that case, the syntax is easy as you just continue with another MEMBER clause:

WITH MEMBER
    [Market].[SelectedMarkets] AS 'SUM({[New York], [Connecticut]})'
MEMBER
  [Product].[SelectedProducts] AS 'SUM({[Colas], [Grape]})'
SELECT
    {[Year].Children, Year} on COLUMNS,
    Hierarchize(Descendants([Profit]), POST) ON ROWS
FROM
    Sample.Basic
WHERE
    ([Market].[SelectedMarkets], [Product].[SelectedProducts], Actual)

The results of this query look like this:



The syntax for creating and using generated members is not that difficult, but there are a couple of things that make it a bit more difficult than it should be for end users to use this approach.

First, any time end users start having to deal with scripts of any kind, the level of complexity goes up exponentially.  As one of my mentors used to say, “The difference between zero lines of code and one line of code is much greater than the difference between one line of code and a hundred lines of code”.  In other words, it is hard to get users to deal with code of any kind.

Second, once an end user has to ‘write a line of code’, or script in this case, then they assume the responsibility for it being correct.  As there are differing levels of comfort and skill among users, the risk of error goes up.

Finally, when users use a script like the one used in this example, they have to type in the correct member names or, again, risk error. Here is the new MDX dialog in Smart View 11.1.2.5.720 showing where users type in the MDX including the member names.



To make it much easier for end users, Dodeca does a couple of things.  First, Dodeca developers can configure reports to use MDX without the end user ever having to know that MDX is powering the report ‘under-the-covers’.  Further, Dodeca has flexible Point-of-View selectors that allow the end user to simply pick which members they want to use in the query.

Dodeca report developers use tokens as a sort of substitution variable in the script.  The tokens are replaced in the script at run time with the members selected by end users.  Here is the same script with tokens in place of the hard-coded values:

WITH MEMBER
  [Market].[SelectedMarkets] AS 'SUM({[T.Market]})'
MEMBER
    [Product].[SelectedProducts] AS 'SUM({[T.Product]})' 
SELECT
    {[Year].Children, Year} on COLUMNS,
    Hierarchize(Descendants([Profit]), POST) ON ROWS
FROM
    Sample.Basic
WHERE
    ([Market].[SelectedMarkets], [Product].[SelectedProducts], Actual)
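
Conceptually, the token substitution can be pictured as a simple find-and-replace on the script before it is sent to Essbase. The sketch below is only an illustration of that idea in Python, not Dodeca's actual implementation, and the selected member lists are hypothetical values a Point-of-View selector might supply:

# Illustration only - not Dodeca's implementation. Tokens in the MDX script
# are swapped for the members the end user picked in the Point-of-View
# selectors before the query is executed.
mdx_template = """
WITH MEMBER
  [Market].[SelectedMarkets] AS 'SUM({[T.Market]})'
MEMBER
  [Product].[SelectedProducts] AS 'SUM({[T.Product]})'
SELECT
  {[Year].Children, Year} ON COLUMNS,
  Hierarchize(Descendants([Profit]), POST) ON ROWS
FROM Sample.Basic
WHERE ([Market].[SelectedMarkets], [Product].[SelectedProducts], Actual)
"""

# Hypothetical selections made through the Point-of-View selectors
selections = {
    "[T.Market]": "[New York], [Connecticut]",
    "[T.Product]": "[Colas], [Grape]",
}

resolved_mdx = mdx_template
for token, members in selections.items():
    resolved_mdx = resolved_mdx.replace(token, members)

print(resolved_mdx)  # effectively the same query that was hard-coded earlier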

The Dodeca Essbase Scripts editor has tools to help the report developer create and test MDX scripts.  Here are the Test Tokens available in the editor that allow developers to simulate the values plugged in by the Point-of-View selectors:


And the script itself in the scripts editor which has built-in testing facilities:



Finally, here is a Dodeca view that utilizes the tokenized MDX query and allows users to easily select the members they want dynamically aggregated; the report is then produced without the risk of error.



Let me know if you would like to learn more about Dodeca and how it could help your company.


Categories: BI & Warehousing

Why Cloud? The reason changed in 2017…twice

Look Smarter Than You Are - Fri, 2018-01-05 10:13
In 2017, the predominant reason companies considered moving to the Cloud changed multiple times. While the “how” tends to shift frequently, seeing the “why” fundamentally shift twice in one year was fascinating (though not quite as fascinating as yesterday when LinkedIn suggested I might know both Jessica Alba and Ashton Kutcher).

The Cloud will save us money
2017 started off with companies moving to the Cloud to save money. This makes sense in theory: you pay as you go for your software instead of paying all up-front, you don’t have to buy your own servers, there’s no need to do installations, and there’s no IT staff needed to handle the frequent maintenance that an on-premises solution requires.

But while that’s 100% correct in the abstract (any new company would buy Cloud first before ever considering an on-prem product), there’s a sunk cost issue with existing solutions: companies already paid for all their software (minus the annual “support maintenance”), they already bought their servers, someone already installed the software, and there’s an existing staff dedicated to maintaining servers that has plenty of other things they can be doing once they stop dealing with the drudgery of daily maintenance activities. While there’s money to be saved with new solutions, and there’s definitely money to be saved in the long-run on converting existing implementations to the Cloud, the short-term savings are trumped by the sunk cost fallacy.

As companies started moving en masse to the Cloud, a compelling new motivation began appearing in Spring of 2017.

Let’s make our server someone else’s problem
Companies began realizing that servers and data centers are a huge headache: a distraction from their core competencies. Trying to make sure servers stay up and running whenever we need to access them shouldn’t be any more of a focus than starting our car: the engine should always work and if it doesn’t, someone far more qualified than we are should fix it.

All of a sudden, people were going to the Cloud so they never had to deal with their servers again: uptime was assumed, patches were someone else’s problem, and backups just happened. And as this happened, the Cloud became more like Google: when was the last time you pondered where Google’s servers are located, or when Google last did a backup? The reason you don’t invest brain power into Google maintenance thought experiments is that it’s Google’s problem. While the Cloud may be causing someone else sleepless nights keeping those servers up and running, that someone is not making their problem your problem.

So, we spent the next several months of “The Year of the Cloud” (trademark pending) going to the Cloud so we never had to deal with our servers again.

Power to the People!
In late 2017, organizations going to the Cloud began to notice something weird: business people were starting to own their own systems and access their data directly. A noble aim long desired by users everywhere, this has heretofore been impossible because on-premises systems take a lot of effort to administrate. It took consultants or IT personnel to build the systems, modify them, and in the end, those same people controlled access to the systems.

The Cloud changed all that: with a new focus on end users and self-service, the power to change things (add an account, build a new report, modify a form, create new analysis) moved to the people who are the first to know when a change needs to be met. At first, I thought this self-service paradigm would increase the workload on the business, but it turns out that they were having to do all the requesting of the changes anyway and quickly making those changes themselves was far faster. Why should I have to make a request to see my own data rather than just go wander through it on my own (preferably on a mobile device)?

And so we ended 2017 with a new drive – a new “why” – of the Cloud. Give the power to the people. The other reasons aren’t lost: they just took a backseat to the new user-first world of the Cloud. Now when someone asks me “why should our company move to the Cloud?”, I tell them “because it gives your business people the power to make better business decisions faster.”

At least, that’s my answer at the start of 2018.

What’s the next shift?
Each year, I conduct a global survey of Business Analytics. Last year, I asked over 250 companies how they were doing in the world of reporting, analysis, planning, and consolidation.  If you want to see where the next shift is coming from before it happens, I’m unveiling the results of this year’s survey on a webcast January 31, 2018, at 2PM Eastern, where you’ll learn how your BI & EPM (Business Intelligence & Enterprise Performance Management) stacks up against the rest of the world. To register, go to:


If you have any questions, ask them in the comments or tweet them to me @ERoske.
Categories: BI & Warehousing

Windows 10 Update Killed Essbase On My Laptop!

Tim Tow - Thu, 2018-01-04 00:32

Like many Essbase consultants and developers, I run Essbase server on my Windows 10 laptop. It was a lengthy ‘Creator’s Update’ Windows update and, once it was complete, Essbase was dead on my machine. So, what do I do? First, I didn’t panic; us pilots have a way of not panicking when things don’t go as planned. We have several people internally who had this happen to them over the past several months and we fixed it each time, so there was nothing to worry about.

The root cause was that my OPMN service, which runs Essbase, was gone. The same thing had happened on the other machines that hit this issue in the past, so I went to talk with one of our resident infrastructure gurus, Jay Zuercher. I remembered there was a command I could run to recreate the service; Jay had it filed away somewhere and, within a couple of minutes, he sent it to me:

SC CREATE "OracleProcessManager_epmsystem1" binPath="C:\oracle\middleware\epmsystem11r1\opmn\bin\opmn.exe -S -I c:\oracle\middleware\user_projects\epmsystem1”
I ran this command – as an administrator – and then went into Services to set the service to start automatically and to start it running. That did not, however, result in Essbase coming back to life. Next, I looked at the Essbase logs and noted several issues having to do with security. Initially, I thought it might have been due to an issue with Shared Services, but then I remembered the fairly common Essbase issue of a corrupted essbase.sec file. I don’t know if the corruption was related to the Windows update, but the timing sure was suspect. I replaced the essbase.sec file with a backup copy and I was back in business.
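
For anyone who prefers to script those last two steps rather than clicking through the Services console, here is a minimal sketch to run from an elevated command prompt. The service name matches the one created above; the essbase.sec location and the essbase.bak automatic backup are typical EPM 11.1.2.x defaults rather than anything confirmed in this post, so adjust the paths to your own install and keep a copy of the original file before overwriting it.

:: Set the recreated OPMN service to start automatically, then start it
sc config OracleProcessManager_epmsystem1 start= auto
sc start OracleProcessManager_epmsystem1

:: If the logs still point at security, stop Essbase and restore essbase.sec
:: from its automatic backup (the paths below are assumed defaults)
set ARBORPATH=C:\oracle\middleware\user_projects\epmsystem1\EssbaseServer\essbaseserver1
copy /Y "%ARBORPATH%\bin\essbase.bak" "%ARBORPATH%\bin\essbase.sec"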

Hopefully this doesn’t happen to you when you update Windows but, if it does, perhaps this blog post will make your recovery quick and painless.
Categories: BI & Warehousing

Introducing Pixel Perfect Reporting in Oracle Analytics Cloud

Tim Dexter - Wed, 2017-12-20 08:30

 

For all you BI Publisher fans, here is the good news: BI Publisher is now available with Oracle Analytics Cloud!

Oracle Analytics Cloud (OAC) is a scalable and secure public cloud service that provides a full set of capabilities to explore and perform collaborative analytics for your enterprise. You can take data from any source, explore it with Data Visualization, and collaborate with real-time data. It is available in three flavors – Standard Edition, Data Lake Edition, and Enterprise Edition – with Standard Edition giving the base ability to explore data, Data Lake Edition adding insights into big data, and Enterprise Edition offering the full platter: data exploration, big data analytics, dashboards, enterprise reporting, Essbase, and more. Refer to this documentation for additional details on the different editions.

With OAC 17.4.5 Enterprise Edition, you can now create pixel-perfect reports and deliver them to a variety of destinations such as email, printer, fax, file servers (via FTP or WebDAV), WebCenter Content, and Content & Experience Cloud. The version of BI Publisher here is 12.2.4.0.

If you have used BI Publisher on-premises, the experience will be very similar in both features and look-and-feel, so you will find it easy to get on board. If you are new to BI Publisher, you will now be able to create pixel-perfect, highly formatted business documents in OAC such as Invoices, Purchase Orders, Dunning Letters, Marketing Collateral, EFT & EDI documents, Financial Statements, Government Forms, Operational Reports, Management Reports, Retail Reports, Shipping Labels with barcodes, Airline boarding passes with PDF417 barcodes, Market to Mobile content using QR codes, Contracts with fine print on alternate pages, Cross-tab reports, and more.

You can connect to a variety of data sources, including BI Subject Areas, BI Analyses, and the RPD; schedule your report to run once or as a recurring job; and even burst documents so they render in multiple formats and are delivered to multiple destinations.

 

Can we move from BI Publisher on-prem to BI Publisher on OAC?

Well, yes, you can. You will have to understand your on-premises deployment and plan accordingly. If your data can be migrated to OAC, that is the best option; otherwise, you can extend your network to Oracle Cloud so that OAC can access your on-premises data. The repository can be migrated using the archive and unarchive mechanism. User management is another task: application roles from on-premises will need to be added to OAC application roles. Details on this will be coming soon.

 

Benefits of BI Publisher on OAC

First of all, OAC comes with many great features around data exploration and visualization, along with advanced analytics capabilities. BI Publisher complements this environment with pixel-perfect reporting, so now you have an environment packed with industry-leading BI products, providing an end-to-end solution for the enterprise.

Managing server instances is now a cakewalk: with just a few clicks you can scale up or down to a different compute shape, or scale out and in to manage the nodes in the cluster, saving you both time and money.

There are also many self-service features for managing reports and server-related resources.

 

What's new in BI Publisher 12.2.4.0?

BI Publisher in OAC includes all features of 12.2.1.3 and has the following new features in this release:

  • Accessible PDF support (Tagged PDF & PDF/UA-1)
  • New barcodes – QR Code and PDF417
  • Ability to purge job history
  • Ability to view the diagnostic log for an online report
  • Widow-orphan support for RTF templates

 

So why wait? You can quickly check this out by creating a free trial account here. Once you log in, you land on the OAC home page. To get to BI Publisher, click the Page Menu at the top right of the page and select "Open Classic Home". The BI Publisher options are available under Published Reporting on the classic home page.

For further details on pixel-perfect reporting, check the latest Oracle Analytics Cloud documentation.

 

Stay tuned for more updates on upgrades and new features!

Categories: BI & Warehousing

Rittman Mead at UKOUG 2017

Rittman Mead Consulting - Mon, 2017-12-04 02:58

For those of you attending the UKOUG this year, we are giving three presentations on OBIEE and Data Visualisation.

Francesco Tisiot has two on Monday:

  • 14:25 // Enabling Self-Service Analytics With Analytic Views & Data Visualization From Cloud to Desktop - Hall 7a
  • 17:55 // OBIEE: Going Down the Rabbit Hole - Hall 7a

Federico Venturin is giving his culinary advice on Wednesday:

  • 11:25 // Visualising Data Like a Top Chef - Hall 6a

And Mike Vickers is diving into BI Publisher, also on Wednesday:

  • 15:15 // BI Publisher: Teaching Old Dogs Some New Tricks - Hall 6a

In addition, Sam Jeremiah and I are also around, so if anyone wants to catch up, grab us for a coffee or a beer.

Categories: BI & Warehousing
