Feed aggregator

Data pump import unable to import table that data pump export exported

Tom Kyte - Tue, 2018-11-06 16:46
I'm having trouble importing a table that export exported. The error I'm getting is "ORA-14308: partition bound element must be one of: string, datetime or interval literal, number, or NULL", but according to the documentation "The partit...
Categories: DBA Blogs

New White Paper: Oracle EBS 12.2 with Database 12.1.0.2 Advanced Row Compression

Steven Chan - Tue, 2018-11-06 16:09

Oracle Database 12c Advanced Compression provides features designed to compress structured data (Advanced Row Compression), unstructured data (Advanced LOB Deduplication and Compression), database backups (RMAN and Data Pump Compression), and network traffic (Data Guard Redo Log Transport and Advanced Network Compression). 

Our EBS architects have released a new whitepaper that focuses on Advanced Row Compression:

This whitepaper quantifies the storage benefits for Oracle E-Business Suite and documents its effect on performance. Advanced Row Compression can reduce your current storage requirements as well as defer the future purchase and integration of additional storage. It includes optimization for Flashback Data Archive (FDA) history tables, which reduces the storage requirements for using FDA to track changes to tables.

Importantly, Advanced Row Compression is completely transparent to Oracle E-Business Suite and can even be used with Transparent Data Encryption (TDE) tablespace encryption because compression is applied before the data blocks are encrypted. The compression ratio depends on the data being compressed and more specifically the cardinality of the data. Customers have reported a reduction in storage by a factor of 2x to 4x across a range of systems. This new white paper examines whether this is the case for Oracle E-Business Suite. 

Related Articles

Categories: APPS Blogs

To or To Not Specify Thread# When Creating SRL?

Michael Dinh - Tue, 2018-11-06 09:24

Just quick and dirty notes on some references.

Notice that the Thread# is 0 for the SRLs. Resist the temptation to explicitly assign an SRL to a specific thread. If you are using Oracle RAC, an SRL with a Thread# of 0 can be used by any primary instance, i.e. by any thread from the primary.
https://community.oracle.com/docs/DOC-1007036

Warning: standby redo logs not configured for thread <n> on <db_unique_name> (Doc ID 1956103.1)
Bug 14674237 – VALIDATE DATABASE from BROKER incorrectly reports on standby redo log files (Doc ID 14674237.8)

Warning: standby redo logs not configured for thread 0 on DATABASE
https://jameshuangsj.wordpress.com/2018/01/17/warning-standby-redo-logs-not-configured-for-thread-0-on-database/
When running the “VALIDATE DATABASE” DGMGRL command, you get “Warning: standby redo logs not configured for thread 0” and “Insufficient SRLs” warnings on a 12.1.0.2 database with “Database Patch Set Update : 12.1.0.2.170814 (26609783)”.

Oracle 12c DataGuard – Insufficient SRLs reported by DGMGRL VALIDATE DATABASE VERBOSE
https://blog.dbi-services.com/oracle-12c-dataguard-insufficient-srls-reported-by-dgmgrl-validate-database-verbose/
Even on a single instance, use the thread number in your create Standby Redo Log statement.
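
The two references above pull in opposite directions, so here is a minimal sketch of both forms; the group numbers and size are assumptions, and OMF/ASM file placement is assumed (hence no file specification). The last query shows the resulting thread assignment:

-- Without THREAD, the SRL is registered with THREAD# 0 and can serve any primary thread:
ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 SIZE 1G;

-- With THREAD, the SRL is pinned to that thread:
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 SIZE 1G;

-- Check the outcome; THREAD# 0 means the SRL is not tied to a specific thread:
SELECT GROUP#, THREAD#, BYTES FROM V$STANDBY_LOG;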

Oracle Data Cloud Launches ‘Pre-Bid by Moat’ to Help Marketers Identify Brand Safe, Fraud Safe, and Viewable Ad Inventory

Oracle Press Releases - Tue, 2018-11-06 07:00
Press Release
Oracle Data Cloud Launches ‘Pre-Bid by Moat’ to Help Marketers Identify Brand Safe, Fraud Safe, and Viewable Ad Inventory
Pre-Bid by Moat Will Be Integrated with Top DSPs Including Adobe Advertising Cloud, Amobee, Basis by Centro, dataxu, MediaMath, The Trade Desk, and VideoAmp

Redwood City, Calif.—Nov 6, 2018

Oracle Data Cloud today announced Pre-Bid by Moat, a placement solution for marketers to identify and utilize ad inventory that meets their high standards for third-party viewability and invalid traffic (IVT). These new capabilities will join existing brand safety segments available through Oracle Data Cloud to create a powerful suite of media spend protection offerings. To make implementation seamless and simple, Pre-Bid by Moat will be integrated with top DSPs, including Adobe Advertising Cloud, Amobee, Basis by Centro, dataxu, MediaMath, The Trade Desk, and VideoAmp.

“Savvy marketers recognize the tremendous financial and reputational risks from misplaced advertising,” said Eric Roza, senior vice president and general manager, Oracle Data Cloud. “Fraud and viewability issues can drain your ad spend and damage your results, while association with offensive or inappropriate content can result in lasting brand damage. Oracle’s Pre-Bid by Moat offers a unified solution to protect media spend for advertisers by bringing together powerful tools to set real-time requirements for viewability, invalid traffic, and brand safety.”

Pre-Bid by Moat helps advertisers to optimize their media buys with:

  • Granular Viewability Standards. Pre-Bid by Moat will allow marketers to set viewability rates for specific ad slots and sizes, offering unprecedented precision before placement.
  • Invalid Traffic Avoidance. Pre-Bid by Moat is powered by Moat’s leading IVT measurement technology. Moat’s IVT solutions also benefit from learnings from Oracle’s other anti-fraud acquisitions, including web security and managed DNS from Oracle Dyn, which helps identify abnormal traffic patterns on the web and detect and block malicious non-human traffic.
  • Contextual Brand Safety Analysis. Using the contextual intelligence platform from its Grapeshot acquisition, Oracle Data Cloud analyzes and interprets the actual content of each web page, not just its URL-level keywords.
 

“The growth of digital advertising over the past decade has been staggering. In order to sustain that growth in the years to come, marketers must trust the effectiveness of the media they purchase,” said Lewis Rothkopf, general manager of media and growth channels, MediaMath. “We’re excited to be one of the first to make Moat’s first-in-class ad measurement actionable for media buyers. We’re confident this collaboration will help advertisers reduce waste and protect consumer trust.”

“Some brand safety solutions ask websites and apps to grade their own homework through self-reported URL keywords describing the content on their sites,” said Patrick Jones, group vice president and general manager, partnerships, Oracle Data Cloud. “Our contextual intelligence platform reviews and classifies billions of pages and apps with language-based algorithms, so we can categorize the content of each page as safe and brand-appropriate. By integrating that brand safety with our leading IVT and viewability tools, Pre-Bid by Moat gives marketers confidence that their ads are reaching real people, are viewable, and are appearing in brand-safe environments.”

For more information about Oracle Data Cloud’s Pre-Bid by Moat, please contact the Oracle Data Hotline at https://www.oracle.com/cloud/data-hotline.html.

Contact Info
Simon Jones
Oracle PR
+1-415-202-4574
s.jones@oracle.com
About Oracle Data Cloud

Oracle Data Cloud helps marketers use data to capture consumer attention and drive results. Used by 199 of the world's 200 largest advertisers, our Audience, Context and Measurement solutions extend across the top media platforms and a global footprint of more than 100 countries. We give marketers the data and tools needed for every stage of the marketing journey, from audience planning to pre-bid brand safety, contextual relevance, viewability confirmation, fraud protection, and ROI measurement. Oracle Data Cloud combines the leading technologies and talent from Oracle’s acquisitions of AddThis, BlueKai, Crosswise, Datalogix, Grapeshot, and Moat.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Simon Jones

  • +1-415-202-4574

Oracle Empowers Global Retailers to Streamline Omnichannel Complexity

Oracle Press Releases - Tue, 2018-11-06 07:00
Press Release
Oracle Empowers Global Retailers to Streamline Omnichannel Complexity
Better Inventory Visibility and Integration Across Transactions Enables Retailers to Improve Customer Experiences and Loyalty

Redwood Shores, Calif.—Nov 6, 2018

Recent enhancements to Oracle Retail Order Management System Cloud Service provide retailers with real-time visibility into inventory and customer activity throughout the transaction lifecycle, helping brands navigate omnichannel fulfillment complexity. By integrating all aspects of the transaction lifecycle across all channels and points of service, retailers open a wealth of opportunities to increase revenue, enhance customer loyalty, streamline operations, reduce markdowns, and cut costs.

Oracle Retail Order Management System Cloud Service integrates with Oracle Retail Order Broker Cloud Service to manage not only order creation, tracking, and payment but also the steps of the fulfillment process. The Oracle Retail Order Management Cloud Service unifies order maintenance, customer notifications, contact center, and payment settlements, while Oracle Retail Order Broker Cloud Service handles enterprise inventory, the routing engine, drop shipments, and the store connection for the associates.

“Oracle's OMS utilizes its Intelligent Order Routing to allow routing engines to make decisions as to the most profitable location to fulfill an order from while satisfying customers' requirement and still satisfying margin/profitability calculations via Order Broker,” said Victoria Brown, Research Manager, Global Supply Chain Strategy & Execution, IDC. “Oracle's users have found the system to have a tremendous speed to ROI and appreciate the speed to go live from the ease of configuration on the cloud.”

“Oracle Retail is honored to serve as a strategic partner that helps a global community prepare for the next wave of omnichannel innovation,” said Jeff Warren, Vice President Solutions and Strategy, Oracle Retail. “By providing our customers with an enterprise view of inventory and order across our omnichannel suite of solutions, brands are able to create new business standards that enhance customer loyalty, streamline operations and maximize profitability.” 

Oracle Retail Delivers a Prescriptive Approach to Order Management
  • Open Integration: In addition to pre-integration between Order Management and other solutions within the Oracle portfolio, including POS, CRM and ecommerce offerings, which work cohesively to solve end-to-end supply chain needs, open integration allows third-party systems to handle the necessary business flows.
  • User Experience: A modern contact center with a streamlined workflow allows a quick and easy way for customer service reps to leverage customer information more effectively throughout the transaction and as part of their marketing, merchandising, and customer service efforts.
  • Globalization: To support global retailers, the introduction of language translation and local fiscal requirements enables users in new markets. With these enhancements, we can empower contact center agents in France, Spain, Italy, Switzerland, Austria and Germany.
  • Modern Payments: The addition of a payment extension layer eases integration with providers. A secure integration with Oracle Retail Customer Engagement for gift cards allows retailers to take advantage of the additional features that are offered.

“The modern user experience present in Oracle Retail Order Management System Cloud Service empowers contact center agents to provide more thoughtful service by serving up all relevant information in a single, highly useable panel,” said Chris Sarne, Senior Director, Oracle Retail. “Visibility into order tracking and inventory location for service associates is critical to encourage customer loyalty as the omnichannel environment for retailers becomes more complex.”

Contact Info
Matt Torres
Oracle
415-595-1584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 415-595-1584

Why Your Developer Story is Important

OTN TechBlog - Tue, 2018-11-06 02:45

Stories are a window into life. They can, if they resonate, provide insights into our own lives or the lives of others. They can help us transmit knowledge, pass on traditions, solve present-day problems or allow us to imagine alternate realities. Open source software is an example of an alternate reality in software development, where proprietary code has been replaced in large part with shared code that is free and open. How is this relevant not only to developers but to anyone who works in technology? It is human nature that we continue to want to grow, learn and share.

With this in mind, I started 60 Second Developer Stories and tried it out at various Oracle Code events, at developer conferences and now at Oracle OpenWorld 2018/Code One. For the latter we had a Video Hangout in the Groundbreakers Hub at Code One where anyone with a story to share could do so. We livestream the story via Periscope/Twitter, record it, and edit/post it later on YouTube. In the Video Hangout, we use a green screen and, through the miracles of technology, chroma-key in a cool backdrop. Below are some photos of the Video Hangout as well as the ideas we give as suggestions.

[Photos: the 60 Second Developer Story Video Hangout, with and without the chroma-keyed backdrop]

  •     Share what you learned on your first job
  •     Share a best coding practice
  •     Explain how a tool or technology works
  •     What have you learned recently about building an app?
  •     Share a work-related accomplishment
  •     What's the best decision you ever made?
  •     What's the worst mistake you made, and the lesson learned?
  •     What is one thing you learned from a mentor or peer that has really helped you?
  •     Any story that you want to share that the community can benefit from

Here are some FAQs about the 60 Second Developer Story:

Q1. I am too shy, and as this is live, what if I get it wrong?

A1. It is your story; there is no right or wrong. If you mess up, it is not a problem: we can do a retake.

Q2. There are so many stories, how do I pick one?

A2. Share something specific: an event that has a beginning, a middle and an end. Ideally there was a challenge or obstacle and you overcame it. As long as it is meaningful to you, it is worth sharing.

Q3. What if it’s not exactly 60 seconds, if it’s shorter or longer?

A3. 60 seconds is a guideline. I will usually show you a cue card to let you know when you have 30 seconds and 15 seconds left. A little bit over or under is not a big deal.

Q4. When can I see the results?

A4. Usually immediately, on whatever Periscope/Twitter handle we are sharing on. If you have a personal Twitter handle, we tweet that before you go live, so the story will also show up on your feed.

Q5. What if I am not a Developer?

A5. We use Developer in a broad sense. It doesn't matter if you are a DBA, an Analyst, or anything else: if you are involved with technology and have a story to share, we want to hear it.

Here is an example of a 60 Second Developer Story.

We hope to have the Video Hangout at future Oracle Code and other events, and we look forward to you sharing your 60 Second story.

Exciting News for Unify

Rittman Mead Consulting - Tue, 2018-11-06 02:37
Announcement: Unify for Free

We are excited to announce we are going to make Unify available for free. To get started send an email to unify@rittmanmead.com, we will ask you to complete a short set of qualifying questions, then we can give you a demo, provide a product key and a link to download the latest version.

The free version of Unify will come with no support obligations or SLAs. On sign up, we will give you the option to join our Unify Slack channel, through which you can raise issues and ask for help.

If you’d like a supported version, we have built a special Expert Service Desk package for Unify which covers:

  • Unify support, how to, bugs and fixes
  • Assistance with configuration issues for OBIEE or Tableau
  • Assistance with user/role issues within OBIEE
  • Ad-hoc support queries relating to OBIEE, Tableau and Unify

Beyond supporting Unify, the Expert Service Desk package can also be used to provide technical support and expert services for your entire BI and analytics platform, including:

  • An agreed number of hours per month for technical support of Oracle and Tableau's BI and DI tools
  • Advisory, strategic and roadmap planning for your platform
  • Use of any other Rittman Mead accelerators including support for our other Open Source tools and DevOps Developer Toolkits
  • Access to Rittman Mead’s On Demand Training

New Release: Unify 10.0.17

10.0.17 is the new version of Unify. This release doesn’t change how Unify looks and feels, but there are some new features and improvements under the hood.

The most important feature is that you can now get more data from OBIEE using fewer resources. While we are not encouraging you to download all your data from OBIEE to Tableau all the time (please use filters, aggregation, etc.), we realise that downloading large datasets is sometimes required. With the new version, you can do it: hundreds of thousands of rows can be retrieved without causing your Unify host to grind to a halt.

The second feature we would like to highlight is that now you can use OBIEE instances configured with self-signed SSL certificates. Self-signed certificates are often used for internal systems, and now Unify supports such configurations.

The final notable change is that you can now run Unify Server as a Windows service. It wasn't impossible to run Unify Server at system startup before, but now it is even easier.

And, of course, we fixed some bugs and enhanced the logging. We would like to see our software function without bugs, but sometimes they just happen, and when they do, you will get a better explanation of what happened.

On most platforms, Unify Desktop should auto-update; if it doesn't, please download the new version manually.

Unify is 100% owned and maintained by Rittman Mead Consulting Ltd, and while this announcement makes it available for free, all copies must be used under an End User Licence Agreement (EULA) with Rittman Mead Consulting Ltd.

Categories: BI & Warehousing

Authorization with External User Access in Application Express

Tom Kyte - Mon, 2018-11-05 22:26
I need to be able to have external (business) users to be able to access only a few features of the site, for example, accounting can only see pages A and B while executives can see A, B and C. I need to be able to have multiple groups of people...
Categories: DBA Blogs

Execute package procedures in parallel

Tom Kyte - Mon, 2018-11-05 22:26
Hi all, For our customer, we start a process in the night to generate orders for 400 stores. Each store can regenerate its order during the day so the procedure is executed separately for each store. For each store we start a scheduled job with...
Categories: DBA Blogs

Insert Rows from one table to columns of other table.

Tom Kyte - Mon, 2018-11-05 22:26
Hi, Hope you are doing good and the Open World was exciting. I have got 2 tables, Table1 has the columns and data that has to be inserted as rows into Table2. I have shared an example of 5 columns, however the number is not constant. This can ch...
Categories: DBA Blogs

Upgrade DR to 12.2.0.1

Tom Kyte - Mon, 2018-11-05 22:26
Hi, The project is to upgrade 11gR2 RAC database to 12cR2(12.2.0.1) with minimal downtime. The approach is using standby database. Logical standby database cannot be done. Number of nodes in 11gr2 cluster 2. What i am doing >>> 1> P...
Categories: DBA Blogs

Pass summit – dbi visit day 1

Yann Neuhaus - Mon, 2018-11-05 18:16
Designing Modern Data and Analytic Solution in Azure

After having explained the pros and cons of Azure and the decision drivers for moving to an Azure architecture, some interesting messages were delivered, like the decoupling of storage and compute on Azure, even if some of the services still combine both. Another message that we all know, but that is essential to repeat on a regular basis, is that cost control is an important aspect: developers and architects can hugely influence the cost of a cloud-based platform.

Because Azure is evolving a lot, with new services being offered and existing features changing, you have to stay constantly informed about these new or deprecated capabilities. You are sometimes faced with migration work to adapt to these evolutions. Proofs of concept must become a habit, a routine, in this constantly moving environment.

Organizing and configuring your resources in Azure is also something to consider seriously, so that you do not end up in trouble, lost among all the resources deployed within your organization. Several notions become important for organizing your Azure platform:

  • Subscriptions
  • Resource groups
  • Tags
  • Policies
  • Locations of your services
  • Naming convention

Using tags in Azure can help a lot for reporting purposes.
A good naming convention (e.g. purpose + type of service + environment) can make it easier to deploy your different environments; one trick is to avoid special characters, because they cannot be used consistently across Azure services. Using policies will help you ensure that services are created the way your governance wants them.

A data analytics architecture was presented, including the following services for managing structured data:

  • Azure data factory
  • Azure Blob Storage
  • Azure data bricks
  • Azure SQL Data Warehouse
  • Azure Analysis Services
  • Power BI

The following services are then suggested in addition, to support real-time data analytics:

  • Azure HDInsight Kafka
  • Azure Databricks Spark
  • Azure Cosmos DB

They then presented the different layers of the architecture and the possible Azure services for data acquisition, data storage, data computing and data visualization.
Having done on-premise data warehouse projects for 15 years, I always advocated ELT over ETL, so I was glad to hear that ELT is now the most used terminology and that Data Factory was presented that way. The new version can now also run SSIS packages. The different data storage possibilities, like Blob Storage, Data Lake, SQL DWH, SQL DB and Cosmos DB, were reviewed. Choosing the right data storage is always about finding the right balance between schema-on-read and schema-on-write. But in a modern Azure BI platform you will find a so-called polyglot data storage solution, combining several types of storage services.

A component that often emerges in this constellation of tools is Databricks, running Apache Spark for data engineering and data science. The advantage is that it is closer to the open source world and supports several languages (SQL, Python, R, Scala); it can be used for use cases like batch data processing, interactive analytics, machine learning, stream/event processing, and more.

In such an environment, data virtualization is also an interesting capability of this architecture. It is possible using, for instance, PolyBase, which allows you to query the disparately stored data without having to replicate or duplicate it.
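
Below is a minimal sketch of the idea in T-SQL for Azure SQL Data Warehouse; all names (BlobStage, CsvFormat, ext.Sales, the storage account) are hypothetical, and a non-public container would also require a database-scoped credential:

-- External data source pointing at an Azure Blob Storage container:
CREATE EXTERNAL DATA SOURCE BlobStage
WITH (TYPE = HADOOP,
      LOCATION = 'wasbs://stage@myaccount.blob.core.windows.net');

-- Describe how the files are laid out:
CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

-- The external table is just a schema draped over the files; no data is copied:
CREATE EXTERNAL TABLE ext.Sales (
    sale_id INT,
    amount  DECIMAL(10,2)
)
WITH (LOCATION = '/sales/',
      DATA_SOURCE = BlobStage,
      FILE_FORMAT = CsvFormat);

-- Queried like any other table, while the data stays in Blob Storage:
SELECT COUNT(*) FROM ext.Sales;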

The pre-conference day finished with a presentation of Power BI and Azure Analysis Services, and some automation techniques with ARM templates and PowerShell.

Again an interesting data analytics day… stay tuned…

The post Pass summit – dbi visit day 1 appeared first on Blog dbi services.

Partner Webcast – Enterprise ready production microservices on Oracle Cloud Infrastructure ...

There is a huge shift in application development where containerisation has become very popular with the move away from monolithic applications to microservices. Containers provide a level of isolation...

We share our skills to maximize your revenue!
Categories: DBA Blogs

The wait is over! Google Cloud Search with third-party connectivity is now available. Here’s what you need to know.

This month, Google Cloud opened the general availability of Cloud Search with third-party connectivity. This is an evolution of the Cloud Search available across G Suite, the set of cloud-native intelligent productivity and collaboration apps including Gmail, Drive and Docs. We’ve been working with pre-release versions of Cloud Search since January and are excited to finally share the news and capabilities of this exciting new functionality more broadly. This version supports third-party connectors and integrations with non-Google data sources, both on-premise and in the cloud, opening up use to all enterprise customers.

What is Google Cloud Search?

Google Cloud Search combines Google’s search expertise with features customized for business. Cloud Search can index both G Suite content like Gmail and Drive as well as third-party data both on-premise and in the cloud. This provides a unified search experience and enforces document-level permissions already in place in your repositories. Cloud Search boasts Google’s industry-leading machine learning relevancy and personalization to bring the speed, performance and reliability of Google.com to enterprise customers.

Who can use Cloud Search?

Any enterprise can purchase Cloud Search as a standalone platform edition, regardless of whether you use other Google products such as G Suite or Google Cloud Platform. If users in your domain already have G Suite Enterprise licenses, they will now be able to access results from third-party data via the Cloud Search application provided as part of G Suite. You will be allotted a fixed quota of third-party data that you can index, based on the number of Enterprise licenses. G Suite customers can also purchase the standalone platform edition if additional quota or search applications are required.

What’s new in this release? Third-Party Data Source Connectors

To enable easy indexing of third-party data both on-premise and in the cloud, Google Cloud has released the following reference connectors.

While the above list covers many popular sources, numerous other data sources exist within individual organizations. For this reason, Google Cloud also released content and identity SDKs to enable custom connector development. Fishbowl has been working with these SDKs for nearly a year and we’ve released the following connectors for Cloud Search:

Embeddable Search Widget

As part of this release, Google Cloud has introduced two new options for consuming search results that include third-party data. The first is an embeddable search widget which provides a customizable search interface for use within an organization’s internal web applications. With only a small amount of HTML and JavaScript, the widget enables customers to integrate with Cloud Search. Using the search widget offers common search features such as facets and pagination while minimizing required development efforts. You can customize components of the interface with your own CSS and JavaScript if desired.

Query API

The second new search option is the Cloud Search Query API. The API provides search and suggest services for creating fully custom search interfaces powered by Cloud Search. It also offers more flexibility than the search widget when embedding search results in an existing application.

Note that in addition to the API and search widget, Google Cloud also offers a pre-built search interface available to customers at cloudsearch.google.com and via Google’s Cloud Search mobile apps available for iOS and Android. These interfaces now support the inclusion of third-party results.

Is this a replacement for Google Search Appliance?

Cloud Search is Google’s next-generation search platform. It is not a GSA in the Cloud but may be an excellent replacement option for many GSA customers. GSA customers who sign a 2-year Cloud Search contract before the end of the year can extend their appliances through the end of 2019 if needed. Google is also offering these customers a 30% discount on their first year of Cloud Search. If you have a GSA that is approaching expiration and are wondering whether Cloud Search would be a good fit, please contact us.

What’s next?

If you’d like to learn more about Google Cloud Search, schedule a demo, or discuss whether your search use case is a good fit, please get in touch.

Fishbowl Solutions is a Google Cloud Partner and authorized Cloud Search reseller.

The post The wait is over! Google Cloud Search with third-party connectivity is now available. Here’s what you need to know. appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

New in Developer Cloud - Fn Support and Wercker Integration

OTN TechBlog - Mon, 2018-11-05 12:19

Over the weekend we rolled out an update to your Oracle Developer Cloud Service instances which introduces several new features. In this blog we'll quickly review two of them - Support for the Fn project and integration with the Wercker CI/CD solution. These new features further enhance the scope of CI/CD functionality that you get in our team development platform.

Project Fn Build Support

Fn is a function-as-a-service open-source platform led by Oracle, available for developers looking to develop portable functions in a variety of languages. If you are not familiar with Project Fn, a good intro on why you should care is this blog, and you can learn more about it through the Fn project's home page on GitHub.

In the latest version of Developer Cloud you have a new option in the build steps menu that helps you define various Fn-related commands as part of your build process. So, for example, if your Fn project code is hosted in the Git repository provided by your DevCS project, you can use the build step to automate the process of building and deploying the function you created.

[Screenshot: the Fn build step in Developer Cloud Service]

Wercker/ Oracle Container Pipelines Integration

A while back Oracle purchased a Docker-native CI/CD solution called Wercker - which is now also offered as part of Oracle Cloud Infrastructure under the name Oracle Container Pipelines. Wercker is focused on offering CI/CD automation for Docker & Kubernetes based microservices. As you probably know, we also offer similar support for Docker and Kubernetes in Developer Cloud Service, which has support for declarative definition of Docker build steps and the ability to run Kubectl scripts in its build pipelines.

If you have an investment in Wercker-based CI/CD, and you want a more complete agile/DevOps set of features - such as the functionality offered by Developer Cloud Service (including free private Git repositories, issue tracking, agile boards and more) - now you can integrate the two solutions without losing your investment in Wercker pipelines.

For a while now, Oracle Container Pipelines has provided support for picking up the code directly from a Git repository hosted in Developer Cloud Service.

[Screenshot: selecting a DevCS Git repository in Wercker]

Now we have added support for Developer Cloud Service to invoke pipelines you defined in Wercker directly as part of a build job and pipelines in Developer Cloud Service. Once you provide DevCS with your personal token for logging into Wercker, you can pick the specific applications and pipelines that you would like to execute as part of your build jobs.

[Screenshot: the Wercker build step in Developer Cloud Service]

There are several other new features and enhancements in this month's release of Oracle Developer Cloud; you can read about those on our What's New page.

Updated: Minimizing EBS 12.2.8 Upgrade Downtimes

Steven Chan - Mon, 2018-11-05 11:39

We have updated our recommendations for minimizing downtimes when upgrading to EBS 12.2.8:

Written by Jim Machin, a senior member of our Applications Performance group, this white paper contains a wealth of detailed technical recommendations for reducing the amount of time needed for an EBS 12.2.8 upgrade.

[Diagram: events vs. sessions bottlenecks]

It covers:

  • Technical preparation and planning activities
  • Diagnostics for monitoring and troubleshooting performance issues
  • Common solutions to performance issues
  • Oracle Applications Tablespace Model (OATM) considerations
  • Methods for purging old data
  • Gathering statistics
  • Detailed recommendations for improving resource management, sizing, and workloads

Latest updates to this whitepaper

This white paper is updated regularly with new tips and recommendations based upon our Applications Performance group's work with actual EBS 12.2 customers and their Service Requests. All EBS 12.2 sysadmins should monitor this white paper regularly. This latest version includes the following updates:

  • High Level of "enq: KO - fast object checkpoint" wait event – Internal Only
  • Middle Tier Sizing / Applications Tier Bottlenecks
  • Additional Fixed Object and Dictionary statistics information and advice.
  • Additional Materialized View advice, including ADZDMVREFRESHNEEDED.sql
  • Explicit mention of Online Patching Readiness reports and compliance checker in pre upgrade activities.
  • Improved explanation of how AD_ZD_TABLE_APPLY.sql uses DBMS_PARALLEL_EXECUTE to split work into chunks and call DBMS scheduler.
  • Addition of solutions for long running AD_ZD_TABLE_APPLY.sql.
  • Specific instances where extent size and high row length could impact batch size and performance adversely.
  • Advice for initialization parameter db_writer_processes
  • Additional references to documents 554539.1, 1065393.1 and 2125596.1; removed references to document 1585857.1
  • Amended SGA and Huge Pages advice

Related Articles

Categories: APPS Blogs

Yamamay Turns to Oracle Retail to Uncover Inventory Performance Insights, Localize Inventory and Optimize Margins

Oracle Press Releases - Mon, 2018-11-05 10:00
Press Release
Yamamay Turns to Oracle Retail to Uncover Inventory Performance Insights, Localize Inventory and Optimize Margins
Top Italian Clothing Brand Deploys Oracle Retail Solutions Across Growing EMEA Footprint

Redwood Shores, Calif.—Nov 5, 2018

Leading Italian brand Yamamay, a part of Pianoforte Group, has invested in and deployed an Oracle Retail suite of solutions to help support its long-term global growth strategy. The retailer has implemented Oracle Retail Merchandise Financial Planning, Oracle Retail Assortment Planning and Oracle Retail Xstore Point-of-Service solutions to localize offerings and personalize the customer experience as it expands the brand beyond an existing footprint of 680 franchise and owned stores in 39 countries across Europe and the Middle East.

In a hyper-competitive market, Yamamay sought to deliver a faster sell-through of goods, drive more significant insight into customer preferences and inspire management to adapt to local market demands as the company continues to add new locations. By more effectively analyzing customer behavior with Oracle Retail technology, Yamamay empowers their planning team to create more localized assortments, and execute strategies to more closely match supply and demand to help optimize revenue and maximize margins.

“Yamamay is pleased to extend the relationship with Oracle.  Yamamay selected a suite of world-class retail solutions to enhance its retail business operational excellence and productivity,” said Roberto Villa, Chief Information Officer, Yamamay. “The Oracle Retail suite offers a vast choice of solutions that will boost Yamamay’s capability to interact with its loyalty-driven customers worldwide effectively. Oracle provides us with the strategic planning system that delivers immediate value and will scale to run our business.”

“In our 2018 consumer research study The New Topography of Retail, we discovered that 72% of European consumers think knowledgeable in-store staff are important to their brand experience. A satisfying brand experience begins with smarter decisions in planning the assortment to executing flawlessly in store,” said Chris James, Vice President of EMEA, Oracle Retail. “Yamamay understands the importance of delivering the right inventory to the right customer from planning through the transaction. We are pleased they chose Oracle Retail to empower their planners and associates.”

Due to the strategic importance of this investment, Yamamay branded its transformational initiative “Project Retail Evolution” around the imperatives of assortment planning, the product collection and the retail experience. The brand began with the implementation of Oracle Retail Merchandise Financial Planning to gain control of the financial plan and strengthen the Open to Buy (OTB) process and systems, and followed with the implementation of Oracle Retail Assortment Planning to reduce the number of SKUs per collection while opening new product streams in the product collection. Yamamay will also use these inventory insights to revise its shop clustering strategy to improve assortment and exposure.

Contact Info
Matt Torres
Oracle
415-595-1584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 415-595-1584

A Graphical Overview of a Repository

Yann Neuhaus - Mon, 2018-11-05 09:00

As the saying goes, “A Picture Is Worth A Thousand Words”. I’d add “And A Simple Graph Is Worth A Long, Abstruse List of Numbers”. And of words too, so let’s show off a little bit:
[Screenshot: sample charts plotted from repository statistics]
Interested? Then, please read on.
It happens not so infrequently that we wish we could quickly plot a few numbers to have a look at their overall trend. The usual procedure is to output the numbers into a csv file, transfer it to a desktop machine, import it into a spreadsheet program and interactively set up a chart: a tedious manual procedure at best, especially if it has to be repeated several times.
What if we could query a few relevant values from a system and plot them in one go, visualizing the produced graphs from within a browser? What if we could generalize that procedure to any extracted tabular data? I’m not talking of a sophisticated interface to some enterprise-class graphing tool, but just of a simple way to get a quick, overall visual feeling for some variables with as little installed software as possible.
I’ll show here how to do that for a Documentum repository, but the target system can be anything, a database, an LDAP directory, an O/S; just adapt the queries and the script as needed.
To simplify, I assume that we generally work on a server machine, likely a headless Linux VM, and that the browser runs remotely on a GUI-based desktop, quite a common configuration for Documentum.

The Data

As an administrator, I often have to connect to docbases I have never visited before, and I find it useful to run the following queries to help me make acquaintance with those new beasts. In the examples below, the target docbase is an out-of-the-box one with no activity in it, which explains the low numbers and the lack of custom doctypes:

— 1. what distinct document types are there?

select r_object_type, count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by r_object_type order by 1
r_object_type count(*) tot_size
-------------------------------- ---------------------- ----------------------
dm_document 1295 6384130
dm_esign_template 1 46255
dm_format_preferences 1 109
dm_menu_system 2 352034
dm_plugin 2 212586
dm_xml_config 1 534
dmc_jar 236 50451386
dmc_preset_package 2 11951
dmc_tcf_activity_template 10 12162
(9 rows affected)

A reminder of some definitions: the result table above is a dataset. The r_object_type is the category, typically shown on the X-axis. The count(*) and tot_size columns are the “variables” to be plotted, typically as bars, lines or pie slices. They are also named “traces” in some graphing tools. Some of the datasets here have 1, 2 or 3 variables to be plotted. Datasets with 1 variable can be plotted as bar, line or pie charts. Datasets with 2 variables can be plotted as grouped or stacked bars, or as 2 distinct graphs of one variable each. There are many possibilities and combinations, and the choice depends on which representation offers the best visual clarity. Sometimes, the plotting library even lets one edit the programmatically generated graph and interactively choose the best type of graph, with no coding needed!

— 2. how does their population vary over time?

select r_object_type, datefloor(month, r_creation_date) as "creation_month", count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by r_object_type, datefloor(month, r_creation_date) order by 1, 2;
r_object_type creation_month count(*) tot_size
-------------------------------- ------------------------- ------------ ------------
dm_document 11/1/2017 01:00:00 448 3055062
dm_document 12/1/2017 01:00:00 120 323552
dm_document 3/1/2018 01:00:00 38 66288
dm_document 4/1/2018 02:00:00 469 1427865
dm_document 5/1/2018 02:00:00 86 584453
dm_document 6/1/2018 02:00:00 20 150464
dm_document 7/1/2018 02:00:00 40 301341
dm_document 8/1/2018 02:00:00 32 151333
dm_document 9/1/2018 02:00:00 46 356386
dm_esign_template 11/1/2017 01:00:00 1 46255
dm_format_preferences 11/1/2017 01:00:00 1 109
dm_menu_system 11/1/2017 01:00:00 2 352034
dm_plugin 11/1/2017 01:00:00 2 212586
dm_xml_config 11/1/2017 01:00:00 1 534
dmc_jar 11/1/2017 01:00:00 236 50451386
dmc_preset_package 11/1/2017 01:00:00 2 11951
dmc_tcf_activity_template 11/1/2017 01:00:00 10 12162
(17 rows affected)

This query tells how heavily used the repository is. Also, a broad spectrum of custom document types tends to indicate that the repository is used in the context of applications.

— 3. how changing are those documents?

select r_object_type, datefloor(month, r_modify_date) as "modification_month", count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) where r_creation_date < r_modify_date group by r_object_type, datefloor(month, r_modify_date) order by 1, 2;
r_object_type modification_month count(*) tot_size
-------------------------------- ------------------------- ------------ ------------
dm_document 11/1/2017 01:00:00 127 485791
dm_document 12/1/2017 01:00:00 33 122863
dm_document 3/1/2018 01:00:00 16 34310
dm_document 4/1/2018 02:00:00 209 749370
dm_document 5/1/2018 02:00:00 42 311211
dm_document 6/1/2018 02:00:00 10 79803
dm_document 7/1/2018 02:00:00 20 160100
dm_document 8/1/2018 02:00:00 12 81982
dm_document 9/1/2018 02:00:00 23 172299
dm_esign_template 8/1/2018 02:00:00 1 46255
dmc_jar 11/1/2017 01:00:00 14 1616218
dmc_preset_package 11/1/2017 01:00:00 2 11951
dmc_tcf_activity_template 11/1/2017 01:00:00 10 12162
(13 rows affected)

This query shows if a repository is used interactively rather than for archiving. If there are lots of edits, the docbase is a lively one; on the contrary, if documents are rarely or never edited, the docbase is mostly used for archiving. The document ownership can tell too, technical accounts vs. real people; a quick sketch of such a query follows.
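
— documents per owner (an aside, not one of the numbered queries; owner_name is a standard dm_sysobject attribute);

select owner_name, count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by owner_name order by 2 desc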

— same thing as queries 2 and 3, but without distinction of document type;
— 4. new documents;

select datefloor(month, r_creation_date) as "creation_month", count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by datefloor(month, r_creation_date) order by 1;
creation_month count(*) tot_size
------------------------- ---------------------- ----------------------
11/1/2017 01:00:00 703 54142079
12/1/2017 01:00:00 120 323552
3/1/2018 01:00:00 38 66288
4/1/2018 02:00:00 469 1427865
5/1/2018 02:00:00 86 584453
6/1/2018 02:00:00 20 150464
7/1/2018 02:00:00 40 301341
8/1/2018 02:00:00 32 151333
9/1/2018 02:00:00 42 323772
(9 rows affected)

— 5. modified documents;

select datefloor(month, r_modify_date) as "modification_month", count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) where r_creation_date < r_modify_date group by datefloor(month, r_modify_date) order by 1;
modification_month count(*) tot_size
------------------------- ---------------------- ----------------------
11/1/2017 01:00:00 153 2126122
12/1/2017 01:00:00 33 122863
3/1/2018 01:00:00 16 34310
4/1/2018 02:00:00 209 749370
5/1/2018 02:00:00 42 311211
6/1/2018 02:00:00 10 79803
7/1/2018 02:00:00 20 160100
8/1/2018 02:00:00 13 128237
9/1/2018 02:00:00 20 140354
(9 rows affected)

— 6. what content types are used in the repository?

col a_content_type 20
select a_content_type, count(*) as "count_content_type", sum(r_full_content_size) as "tot_content_size" from dm_document(all) group by a_content_type order by 1
a_content_type count_content_type tot_content_size
-------------------- ------------------ ----------------
11 0
amipro 1 4558
crtext 43 389681
dtd 5 163392
excel12bbook 1 8546
excel12book 1 7848
excel12mebook 1 7867
excel12metemplate 1 7871
excel12template 1 7853
excel5book 1 15360
excel8book 1 13824
excel8template 1 13824
ibmshrlib 2 212586
jar 233 48856783
java 3 1594603
maker55 5 117760
mdoc55 9 780288
ms_access7 1 83968
ms_access8 1 59392
ms_access8_mde 1 61440
msw12 1 11280
msw12me 1 10009
msw12metemplate 1 10004
msw12template 1 9993
msw6 1 11776
msw8 1 19456
msw8template 1 27136
pdf 2 50214
powerpoint 1 14848
ppt12 1 29956
ppt12me 1 29944
ppt12meslideshow 1 29943
ppt12metemplate 1 29941
ppt12slideshow 1 29897
ppt12template 1 29914
ppt8 1 7680
ppt8_template 1 9728
text 1175 4236008
ustn 1 10240
vrf 1 158993
wp6 1 941
wp7 1 1362
wp8 1 1362
xml 28 37953
zip 1 255125
(45 rows affected)

The content type may indicate the kind of activity the repository is used for. While pdf files are generally final documents suitable for archiving, a predominance of Illustrator/PageMaker/QuarkXPress vs. Photoshop vs. multimedia vs. Office documents can give a hint at the docbase's general function.
Since most content types are unused (who uses Word Perfect or Ami Pro anymore?), a constraint on the count can be introduced, e.g. "having count(*) > 10", to filter out those obsolete formats, as shown below.
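
For example, here is query 6 again with the filter applied, keeping only the formats actually in use (a sketch; adjust the threshold to taste):

select a_content_type, count(*) as "count_content_type", sum(r_full_content_size) as "tot_content_size" from dm_document(all) group by a_content_type having count(*) > 10 order by 1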

— 7. and how do they evolve over time?

select datefloor(month, r_creation_date) as "creation_month", a_content_type, count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by datefloor(month, r_creation_date), a_content_type order by 1, 2
creation_month a_content_type count(*) tot_size
------------------------- -------------------- ------------ ------------
11/1/2017 01:00:00 1 0
11/1/2017 01:00:00 amipro 1 4558
11/1/2017 01:00:00 crtext 43 389681
11/1/2017 01:00:00 dtd 5 163392
11/1/2017 01:00:00 excel12bbook 1 8546
11/1/2017 01:00:00 excel12book 1 7848
11/1/2017 01:00:00 excel12mebook 1 7867
11/1/2017 01:00:00 excel12metemplate 1 7871
11/1/2017 01:00:00 excel12template 1 7853
11/1/2017 01:00:00 excel5book 1 15360
11/1/2017 01:00:00 excel8book 1 13824
11/1/2017 01:00:00 excel8template 1 13824
11/1/2017 01:00:00 ibmshrlib 2 212586
11/1/2017 01:00:00 jar 233 48856783
11/1/2017 01:00:00 java 3 1594603
11/1/2017 01:00:00 maker55 5 117760
11/1/2017 01:00:00 mdoc55 9 780288
11/1/2017 01:00:00 ms_access7 1 83968
11/1/2017 01:00:00 ms_access8 1 59392
11/1/2017 01:00:00 ms_access8_mde 1 61440
11/1/2017 01:00:00 msw12 1 11280
11/1/2017 01:00:00 msw12me 1 10009
11/1/2017 01:00:00 msw12metemplate 1 10004
11/1/2017 01:00:00 msw12template 1 9993
11/1/2017 01:00:00 msw6 1 11776
11/1/2017 01:00:00 msw8 1 19456
11/1/2017 01:00:00 msw8template 1 27136
11/1/2017 01:00:00 pdf 2 50214
11/1/2017 01:00:00 powerpoint 1 14848
11/1/2017 01:00:00 ppt12 1 29956
11/1/2017 01:00:00 ppt12me 1 29944
11/1/2017 01:00:00 ppt12meslideshow 1 29943
11/1/2017 01:00:00 ppt12metemplate 1 29941
11/1/2017 01:00:00 ppt12slideshow 1 29897
11/1/2017 01:00:00 ppt12template 1 29914
11/1/2017 01:00:00 ppt8 1 7680
11/1/2017 01:00:00 ppt8_template 1 9728
11/1/2017 01:00:00 text 338 906940
11/1/2017 01:00:00 ustn 1 10240
11/1/2017 01:00:00 vrf 1 158993
11/1/2017 01:00:00 wp6 1 941
11/1/2017 01:00:00 wp7 1 1362
11/1/2017 01:00:00 wp8 1 1362
11/1/2017 01:00:00 xml 28 37953
11/1/2017 01:00:00 zip 1 255125
12/1/2017 01:00:00 text 120 323552
3/1/2018 01:00:00 text 38 66288
4/1/2018 02:00:00 text 469 1427865
5/1/2018 02:00:00 text 86 584453
6/1/2018 02:00:00 text 20 150464
7/1/2018 02:00:00 text 40 301341
8/1/2018 02:00:00 10 0
8/1/2018 02:00:00 text 22 151333
9/1/2018 02:00:00 text 42 323772
(54 rows affected)

— 8. where are those contents stored?

col a_storage_type 20
select a_storage_type, count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by a_storage_type order by a_storage_type
a_storage_type count(*) tot_size
-------------------- ------------ ------------
11 0
filestore_01 1539 57471147
(2 rows affected)

Filestores are conceptually quite similar to Oracle RDBMS tablespaces. It is good practice to separate filestores by application/type of documents, and/or even by time if the volume of documents is important. This query tells if this is in place and, if so, what the criteria were.

— 9. dig by document type;

select r_object_type, a_storage_type, count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by r_object_type, a_storage_type order by r_object_type, a_storage_type
r_object_type a_storage_type count(*) tot_size
-------------------------------- -------------------- ------------ ------------
dm_document 11 0
dm_document filestore_01 1284 6384130
dm_esign_template filestore_01 1 46255
dm_format_preferences filestore_01 1 109
dm_menu_system filestore_01 2 352034
dm_plugin filestore_01 2 212586
dm_xml_config filestore_01 1 534
dmc_jar filestore_01 236 50451386
dmc_preset_package filestore_01 2 11951
dmc_tcf_activity_template filestore_01 10 12162
(10 rows affected)

— 10. same but by content format;

select a_content_type, a_storage_type, count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by a_content_type, a_storage_type having count(*) > 10 order by a_content_type, a_storage_type
a_content_type a_storage_type count(*) tot_size
-------------------------------- --------------- ---------------------- ----------------------
crtext filestore_01 43 389681
jar filestore_01 233 48856783
text filestore_01 1204 4432200
xml filestore_01 28 37953
(4 rows affected)

— 11. what ACLs do exist and who created them?

col owner_name 30
select owner_name, count(*) from dm_acl group by owner_name order by 2 desc
owner_name count(*)
------------------------------ ------------
dmadmin 179
dmtest 44
dm_bof_registry 6
dmc_wdk_preferences_owner 2
dmc_wdk_presets_owner 2
dm_mediaserver 1
dm_audit_user 1
dm_autorender_mac 1
dm_fulltext_index_user 1
dm_autorender_win31 1
dm_report_user 1
(11 rows affected)

Spoiler alert: attentive readers have probably noticed the command "col" preceding several queries, like in "col name 20"; this is not a standard idql command but part of an extended idql which will be presented in a future blog.
ACLs are part of the security model in place, if any. Usually, the owner is a technical account, sometimes a different one for each application, application functionality or business line, so this query can show if the repository is under the control of applications or rather simply used as a replacement for a shared drive. Globally, it can tell how the repository's security is managed, if at all.

— 12. ACLs most in use;

select acl_name, count(*) from dm_document(all) where acl_name not like 'dm_%' group by acl_name having count(*) >= 10 order by 1
acl_name count(*)
-------------------------------- ----------------------
BOF_acl 233
(1 row affected)

Here too, a filter by usage has been introduced so that rarely used ACLs are not reported, e.g. "having count(*) >= 10".

— 13. queue items;

select name, count(*) from dmi_queue_item group by name order by 2 desc
name count(*)
-------------------- ------------
dm_autorender_win31 115
dmadmin 101
dmtest 54
(3 rows affected)

This query is mainly useful to show whether external systems are used for generating renditions, thumbnails or any other document transformation; a variant restricted to still-pending items is sketched below.
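
A quick sketch of such a variant (delete_flag is a standard dmi_queue_item attribute, set to TRUE once an item has been processed):

select name, count(*) from dmi_queue_item where delete_flag = FALSE group by name order by 2 desc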

— non-quantifiable queries;

select object_name from dm_acl where object_name not like 'dm_%' order by 1
object_name
--------------------------------
BOF_acl
BOF_acl2
BPM Process Variable ACL
BatchPromoteAcl
Default Preset Permission Set
Global User Default ACL
WebPublishingAcl
Work Queue User Default ACL
dce_world_write
desktop_client_acl1
replica_acl_default
(11 rows affected)
 
col name 20
col root 30
col file_system_path 70
select f.name, f.root, l.file_system_path from dm_filestore f, dm_location l where f.root = l.object_name
name root file_system_path
-------------------- ------------------------------ ----------------------------------------------------------------------
filestore_01 storage_01 /home/dmadmin/documentum/data/dmtest/content_storage_01
thumbnail_store_01 thumbnail_storage_01 /home/dmadmin/documentum/data/dmtest/thumbnail_storage_01
streaming_store_01 streaming_storage_01 /home/dmadmin/documentum/data/dmtest/streaming_storage_01
replicate_temp_store replicate_location /home/dmadmin/documentum/data/dmtest/replicate_temp_store
replica_filestore_01 replica_storage_01 /home/dmadmin/documentum/data/dmtest/replica_content_storage_01
(5 rows affected)

No numbers here, just plain text information.

There is of course much more information to query (e.g. what about lifecycles and workflows, users and groups?) depending on what one needs to look at in the repositories, but those are enough examples for our demonstration's purpose. The readers can always add queries or refine the existing ones to suit their needs; two quick sketches follow.
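
For instance, two sketches against standard repository types (user_source and group_class are standard dm_user and dm_group attributes):

— users per authentication source;

select user_source, count(*) from dm_user group by user_source order by 2 desc

— groups per class;

select group_class, count(*) from dm_group group by group_class order by 2 desc
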
At this point, we have a lot of hard-to-ingest numbers. Plotting them gives a pleasant 2-dimensional view of the data and allows us to easily compare and analyze them, especially if the generated plot offers some interactivity.

The Graphing Library

To do this, we need a graphing library and since we want to be able to look at the graphs from within a browser for platform-independence and zero-installation on the desktop, we need one that lets us generate an HTML page containing the charts. This is possible in 2 ways:
• the library is a javascript one; we will programmatically build up JSON literals, pass them to the library's plotting function and wrap everything into an HTML page which is saved to disk for later viewing;
• the library is usable from several scripting languages and includes a function to generate an HTML page containing the graph; we would choose one for python, since this language is ubiquitous and we now have a binding for Documentum (see my blog here).
Now, there are tons of such javascript libraries available on-line (see for example here for a sample) but our choice criteria will be simple:
• the library must be rich enough but at the same time easy to use;
• it must do its work entirely locally, as servers are generally not allowed to access the Internet;
• it must be free, in order to avoid licensing complexity, which would often be overkill in such a simple and confined usage.
As we must pick one, let's choose Plotly since it is very capable and largely covers our needs. Besides, it works either as a javascript or as a python library, but for more generality let's just use it as a javascript library and generate the HTML pages ourselves. As an additional bonus, Plotly also allows one to interactively edit a plotted graph, so it can be tweaked at will. This is very convenient because it lessens the effort of optimizing the graphs' readability, as this can be done later by the users themselves while viewing the graphs. For example, users can zoom, unselect variables, move legends around and much more.
To install it, simply right-click here and save the link to the default graph location in the project’s directory, e.g. /home/dmadmin/graph-stats/graphs.

pwd
/home/dmadmin/graph-stats/graphs
ls -l
...
-rw-rw-r-- 1 dmadmin dmadmin 2814564 Oct 11 11:40 plotly-latest.min.js
...

That's less than 2.7 MiB of powerful, compact js source code. Since it is a text file, there shouldn't be any security concern in copying it onto a server; besides, as it is executed inside the sandbox of a browser, security is normally not an issue.
The library needs to be in the same directory as the generated html files. If those files are copied to a desktop for direct viewing from a browser with File/Open, don't forget to copy over the Plotly.js library too; then, open the html file in an editor, go to line 6 below and set src to the path of the local plotly-latest.min.js library:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2//EN">
<html>
   <head>
      <title>Graphical Overview of Repository SubWay</title>
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      <script src="...../plotly-latest.min.js"></script>
   </head>
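
If many reports are involved, that edit can be scripted. Below is a minimal, hypothetical python helper, assuming the library was copied next to the html files; the regular expression simply rewrites whatever src the generator produced into a relative path:

# hypothetical helper: make every report load the js library from a local, relative path;
from pathlib import Path
import re

for f in Path(".").glob("*.html"):
   text = f.read_text()
   text = re.sub(r'src="[^"]*plotly-latest\.min\.js"', 'src="plotly-latest.min.js"', text)
   f.write_text(text)
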
Viewing the html pages

Since those pages must be accessed from any machine on the network (typically, a desktop with a browser), even though they may be created directly on the server that hosts the repositories of interest, we need a web server. A disk volume may occasionally be shared between desktops and servers, but this is far from being the general case. Fortunately, python is very helpful here and saves us from the tedious task of setting up a full-fledged web server such as apache. The one-liner below starts a server that makes the files in the current directory and its sub-directories browsable:

python3 -m http.server [port] # for python >= v3
python -m SimpleHTTPServer [port] # for python 2.x

It can also be moved to the background to free the command-line; to stop it, bring it back to the foreground and type ctrl-C.
Its default port is 8000.
This mini-web server is now waiting for requests on the given port and its IP address. In order to determine that IP address, use the ifconfig command and try one address that is on the same network as the desktop with the browser. Then, load the URL, e.g. http://192.168.56.10:8000/, and you’ll be greeted by a familiar point-and-click interface.
The server presents a directory listing from which to select the html files containing the generated graphs.
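
If ifconfig's output is too cluttered, the address can also be obtained from python itself. This is a classic trick, not specific to our setup: connecting a UDP socket towards any routable address (8.8.8.8 here; no packet is actually sent by the connect) makes the kernel pick the outgoing interface, whose address can then be read back:

# print the local address the kernel would use to reach the outside;
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
print(s.getsockname()[0])
s.close()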

An Example

Here is a complete example, from the query to the graph.
The 4th DQL query and its output:

select datefloor(month, r_creation_date) as "creation_month", count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by datefloor(month, r_creation_date) order by 1;
creation_month count(*) tot_size
------------------------- ---------------------- ----------------------
11/1/2017 01:00:00 703 54142079
12/1/2017 01:00:00 120 323552
3/1/2018 01:00:00 38 66288
4/1/2018 02:00:00 469 1427865
5/1/2018 02:00:00 86 584453
6/1/2018 02:00:00 20 150464
7/1/2018 02:00:00 40 301341
8/1/2018 02:00:00 32 151333
9/1/2018 02:00:00 42 323772
(9 rows affected)

The command to generate the graph:

pwd
/home/dmadmin/graph-stats
./graph-stats.py --docbase dmtest

A simplified generated html page showing only the count(*) variable:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2//EN">
<html>
      <head>
         <title>Statistical Graphs for Repository dmtest</title>
         <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
         <script src="http://192.168.56.10:8000/plotly-latest.min.js"></script>
      </head>
      <body>
      <div id="TheChart" style="width:400px;height:400px;"></div>
      <script>
         var ctx = document.getElementById("TheChart");
         Plotly.newPlot(ctx, [{
                             type: "bar",
                             name: "new docs",
                             x: ['2017/11', '2017/12', '2018/03', '2018/04', '2018/05', '2018/06', '2018/07', '2018/08', '2018/09'],
                             y: ['703', '120', '38', '469', '86', '20', '40', '32', '42'],
                             marker: {
                                        color: '#4e6bed',
                                        line: {
                                                 width: 2.5
                                              }
                                     }
                            }],
                            {
                               title: "New Documents",
                               font: {size: 18}
                            },
                        {responsive: true});
      </script>
      </body>
</html>

Its resulting file:

ll graphs
total 2756
-rw-rw-r-- 1 dmadmin dmadmin 2814564 Oct 12 14:18 plotly-latest.min.js
-rw-rw-r-- 1 dmadmin dmadmin 1713 Oct 12 14:18 dmtest-20181012-141811.html

Same but exposed by the mini-web server and viewed from a browser:
[screenshot: the mini-web server's directory listing of the generated html files]
When clicking on one of the html file, a graph such as the one below is displayed:
[screenshot: the chart rendered by Plotly in the browser]

The python script

The following python script runs the 13 queries above and produces one large HTML page containing the graphs of all the datasets. It can also query a list of local docbases and generate either one single, huge html report or one report per docbase. It can even output the produced html code directly to stdout for further in-line processing, e.g. to somewhat compact the javascript code:

./graph-stats.py --docbase dmtest -o stdout | gawk '{printf $0}' > graphs/dmtest-gibberish.html
 
wc -l graphs/dmtest-gibberish.html
0 graphs/dmtest-gibberish.html
 
less graphs/dmtest-gibberish.html
[screenshot: the compacted, single-line html code]

So, here is the script:

#!/usr/bin/env python

# 10/2018, C. Cervini, dbi-services;
 
import sys
import getopt
from datetime import datetime
import json
import DctmAPI

def Usage():
   print("""
Usage:
Connects as dmadmin/xxxx to a local repository and generates an HTML page containing the plots of several DQL queries result;
Usage:
       ./graph-stats.py -h|--help | -d|--docbase <docbase>{,<docbase>} [-o|--output_file <output_file>]
<docbase> can be one repository or a comma-separated list of repositories;
if <output_file> is omitted, the html page is output to ./graphs/<docbase>-$(date +"%Y%m%d-%H%M%S").html;
<output_file> can be "stdout", which is useful for CGI programming;
Example:
       ./graph-stats.py -d dmtest,mail_archive,doc_engineering
will query the docbases dmtest, mail_archive and doc_engineering and output the graphs to the files <docbase>-$(date +"%Y%m%d-%H%M%S").html, one file per docbase;
       ./graph-stats.py -d mail_archive,doc_engineering -o all_docbase_current_status
will query the docbases mail_archive and doc_engineering and output all the graphs to the unique file all_docbase_current_status;
       ./graph-stats.py -d mail-archive
will query docbase mail-archive and output the graphs to the file mail-archive-$(date +"%Y%m%d-%H%M%S").html;
       ./graph-stats.py --docbase dmtest --output dmtest.html
will query docbase dmtest and output the graphs to the file dmtest.html;
       ./graph-stats.py -d dmtest --output stdout
will query docbase dmtest and output the graphs to stdout;
""")

def Plot2HTML(div, graph_title, data, data_labels, bLineGraph = False, mode = None):
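   # div: id of the target html DIV; mode: "b" opens the html page, "e" closes it, and
   # None appends to the page being built; the length of data_labels selects the layout below;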
   global html_output
   if None != html_output:
      sys.stdout = open(html_output, "a")

   # start the html page;
   if "b" == mode:
      global server, page_title
      print('''
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2//EN">
<html>
   <head>
      <title>''' + page_title + '''</title>
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      <script src="http://''' + server + '''/plotly-latest.min.js"></script>
   </head>
   <body>
   <center><h3>''' + page_title + '''</h3></center>
''')

   # append to the body of the html page;
   if len(data_labels) == 4:
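      # 4 columns in the resultset (grouping key, month, count, size): one stacked bar
      # (or line) chart per measure, side by side in a 2-cell row;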
      stack_labels = {}
      for point in data:
         stack_labels[point[data_labels[0]]] = 0
      print('''
      <div style="width:1500px;height:600px;">
          <div id="''' + div + '-' + data_labels[2] + '''" style="width:50%; float:left;"></div>
          <div id="''' + div + '-' + data_labels[3] + '''" style="width:50%; float:left;"></div>
      </div>
      <script>
         var ctx = document.getElementById("''' + div + '-' + data_labels[2] + '''");
      ''')
      variables = ""
      for stack in stack_labels:
         x = [point[data_labels[1]] for i, point in enumerate(data) if point[data_labels[0]] == stack]
         y1 = [point[data_labels[2]] for i, point in enumerate(data) if point[data_labels[0]] == stack]
         variables +=  ("" if not variables else ", ") + "data_" + stack 
         if bLineGraph:
            vars()['data_' + stack] = {
                                        'name': stack,
                                        'type': "scatter",
                                        'mode': "lines",
                                        'x': x,
                                        'y': y1,
                                      };
         else:
            vars()['data_' + stack] = {
                                        'name': stack,
                                        'type': "bar",
                                        'x': x,
                                        'y': y1,
                                        'width': 0.25,
                                        'marker': {
                                                     #'color': '#009933',
                                                     'line': {
                                                                'width': 1.0
                                                             }
                                                  }
                                      };
         print('data_' + stack + ' = ' + json.dumps(vars()['data_' + stack]) + ';')
         layout = {
                     'title': '<b>' + graph_title + '<br>' + data_labels[2] + '</b>',
                     'legend': {'x': -.1, 'y': 1.2, 'font': {'size': 8}},
                     'font': {'size': 12},
                     'width': 750,
                     'height': 600,
                     'xaxis': {
                                 'title': '<b>' + data_labels[1] + '</b>',
                                 'titlefont': {'size': 10},
                                 'tickangle': -45,
                                 'tickfont': {'size': 8},
                                 'zeroline': True,
                                 'showline': True,
                                 'categoryorder': "category ascending",
                                 'type': "category"
                              },
                     'yaxis': {
                                 'title': '<b>' + data_labels[2] + '</b>',
                                 'titlefont': {'size': 10},
                                 'zeroline': True,
                                 'showline': True,
                                 'tickfont': {'size': 8},
                                 'showgrid': False
                              }
                  }
         if not bLineGraph:
            layout.update({'barmode': "stack", 'bargap': 0.15, 'bargroupgap': 0.5})
         interaction = {'responsive': True, 'scrollZoom': True, 'editable': True}
      print('''Plotly.newPlot(ctx,
                              [''' + variables + '], ' +
                              json.dumps(layout) + ',' +
                              json.dumps(interaction) + ''');''')
      for stack in stack_labels:
         x = [point[data_labels[1]] for i, point in enumerate(data) if point[data_labels[0]] == stack]
         y2 = [point[data_labels[3]] for i, point in enumerate(data) if point[data_labels[0]] == stack]
         if bLineGraph:
            vars()['data_' + stack] = {
                                         'name': stack,
                                         'type': "scatter",
                                         'mode': "lines",
                                         'x': x,
                                         'y': y2
                                      };
         else:
            vars()['data_' + stack] = {
                                         'name': stack,
                                         'type': "bar",
                                         'x': x,
                                         'y': y2,
                                         'width': 0.25,
                                         'marker': {
                                                       #'color': '#009933',
                                                      'line': {
                                                                 'width': 1.0
                                                              }
                                                   }
                                      };
         print('data_' + stack + ' = ' + json.dumps(vars()['data_' + stack]) + ';')
      layout = {
                  'title': '<b>' + graph_title + '<br>' + data_labels[3] + '</b>',
                  'legend': {'x': -.1, 'y': 1.2, 'font': {'size': 8}},
                  'font': {
                             'size': 12
                          },
                  'width': 750,
                  'height': 600,
                  'xaxis': {
                            'title': '<b>' + data_labels[1] + '</b>',
                            'titlefont': {'size': 10},
                            'tickangle': -45,
                            'tickfont': {'size': 8},
                            'zeroline': True,
                            'showline': True,
                            'categoryorder': "category ascending",
                            'type': "category"
                         },
                  'yaxis': {
                    'title': '<b>' + data_labels[3] + '</b>',
                    'titlefont': {'size': 10},
                    'zeroline': True,
                    'showline': True,
                    'tickfont': {'size': 8},
                    'showgrid': False
                  }
               }
      if not bLineGraph:
         layout.update({'barmode': "stack", 'bargap': 0.15, 'bargroupgap': 0.5})
      interaction = {'responsive': True, 'scrollZoom': True, 'editable': True}
      print('''
      var ctx = document.getElementById("''' + div + '-' + data_labels[3] + '''");
      Plotly.newPlot(ctx,
                     [''' + variables + '],' + 
                     json.dumps(layout) + ''',
                     ''' + json.dumps(interaction) + ''');
      </script>
''')
   elif len(data_labels) == 3:
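      # 3 columns (one dimension, 2 measures): grouped bars on 2 y axes (or 2 lines),
      # plus, for bar charts, 2 pie charts showing each measure's distribution;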
      print('''
      <div style="width:1200px;height:600px;">
          <div id="''' + div + '''" style="width:75%; float:left;"></div>
          <div id="''' + div + '''-pie" style="width:25%; float:left;"></div>
      </div>
      <script>
         var ctx = document.getElementById("''' + div + '''");
''')
      traces = []
      if not bLineGraph:
         traces = [
                     {
                        'name': data_labels[1],
                        'type': "bar",
                        'x': [point[data_labels[0]] for i, point in enumerate(data)],
                        'y': [point[data_labels[1]] for i, point in enumerate(data)],
                        'width': 0.25,
                        'marker': {
                                     'color': '#009933',
                                     'line': {
                                              'width': 1.0
                                             }
                                  },
                     },
                     # work around for bug "Grouped bar charts do not work with multiple Y axes";
                     # see https://github.com/plotly/plotly.js/issues/78;
                     # must be inserted here;
                     # invisible second trace in the first group
                     {
                        'x': [point[data_labels[0]] for i, point in enumerate(data)],
                        'barmode': "overlay",
                        'y': [0], 'type': 'bar', 'hoverinfo': 'none', 'showlegend': False
                     },
                     # invisible first trace in the second group
                     {
                        'x': [point[data_labels[0]] for i, point in enumerate(data)],
                        'y': [0], 'type': 'bar', 'yaxis': 'y2', 'hoverinfo': 'none', 'showlegend': False
                     },
                     {
                        'name': data_labels[2],
                        'type': "bar",
                        'x': [point[data_labels[0]] for i, point in enumerate(data)],
                        'y': [point[data_labels[2]] for i, point in enumerate(data)],
                        'width': 0.25,
                        'yaxis': "y2",
                        'marker': {
                                     'color': '#4e6bed',
                                     'line': {
                                                'width': 1.0
                                             }
                                  }
                     }
                   ]
      else:
         traces = [
                     {
                        'name': data_labels[1],
                        'type': "scatter",
                        'mode': "lines",
                        'x': [point[data_labels[0]] for i, point in enumerate(data)],
                        'y': [point[data_labels[1]] for i, point in enumerate(data)],
                        'width': 0.25,
                     },
                     {
                        'name': data_labels[2],
                        'type': "scatter",
                        'mode': "lines",
                        'x': [point[data_labels[0]] for i, point in enumerate(data)],
                        'y': [point[data_labels[2]] for i, point in enumerate(data)],
                        'width': 0.25,
                        'yaxis': "y2",
                     }
                  ]
      layout = {
                  'title': '<b>' + graph_title + '</b>',
                  'legend': {'x': -.1, 'y': 1.2, 'font': {'size': 8}},
                  'font': {'size': 12},
                  'width': 800,
                  'height': 600,
                  'xaxis': {
                              'title': '<b>' + data_labels[0] + '</b>',
                              'titlefont': {'size': 10},
                              'tickangle': -45,
                              'tickfont': {'size': 8},
                              'zeroline': True,
                              'showline': True
                           },
                  'yaxis': {
                              'title': '<b>' + data_labels[1] + '</b>',
                              'titlefont': {'size': 10},
                              'zeroline': True,
                              'showline': True,
                              'tickfont': {'size': 8},
                              'showgrid': False,
                           },
                  'yaxis2': {
                               'title': '<b>' + data_labels[2] + '</b>',
                               'titlefont': {'size': 10},
                               'tickfont': {'size': 8},
                               'zeroline': True,
                               'showline': True,
                               'overlaying': "y",
                               'side': "right",
                               'showgrid': False,
                            }
               }
      if bLineGraph:
         layout['yaxis'].update({'type': "linear"})
         layout['yaxis2'].update({'type': "linear"})
         pass
      else:
         layout.update({'barmode': "group", 'bargap': 0.15, 'bargroupgap': 0.5})
      interaction = {'responsive': True, 'scrollZoom': True, 'editable': True}
      print('''
         Plotly.newPlot(ctx,
                        ''' + json.dumps(traces) + ''',
                        ''' + json.dumps(layout) + ''',
                        ''' + json.dumps(interaction) + ''');
      </script>

''')
      if not bLineGraph:
         traces =  [
                      {
                         'name': data_labels[1],
                         'hole': .4,
                         'type': "pie",
                         'labels': [point[data_labels[0]] for i, point in enumerate(data)],
                         'values': [point[data_labels[1]] for i, point in enumerate(data)],
                         'domain': {
                            'row': 0,
                            'column': 0
                         },
                         'outsidetextfont': {'size': 8},
                         'insidetextfont': {'size': 8},
                         'legend': {'font': {'size': 8}},
                         'textfont': {'size': 8},
                         'font': {'size': 8},
                         'hoverinfo': 'label+percent+name',
                         'hoverlabel': {'font': {'size': 8}},
                         'textinfo': 'none'
                      },
                      {
                         'name': data_labels[2],
                         'hole': .4,
                         'type': "pie",
                         'labels': [point[data_labels[0]] for i, point in enumerate(data)],
                         'values': [point[data_labels[2]] for i, point in enumerate(data)],
                         'domain': {
                            'row': 1,
                            'column': 0
                         },
                         'outsidetextfont': {'size': 8},
                         'insidetextfont': {'size': 8},
                         'legend': {'font': {'size': 8}},
                         'textfont': {'size': 8},
                         'font': {'size': 8},
                         'hoverinfo': 'label+percent+name',
                         'hoverlabel': {'font': {'size': 8}},
                         'textinfo': 'none'
                      },
                   ]
         layout = {
                     'title': '<b>' + graph_title + '</b>',
                     'annotations': [
                                       {
                                          'font': {
                                                    'size': 8
                                                  },
                                          'showarrow': False,
                                          'text': '<b>' + data_labels[1] + '</b>',
                                          'x': 0.5,
                                          'y': 0.8
                                       },
                                       {
                                          'font': {
                                                     'size': 8
                                                  },
                                          'showarrow': False,
                                          'text': '<b>' + data_labels[2] + '</b>',
                                          'x': 0.5,
                                          'y': 0.25
                                       }
                                    ],
                     'height': 600,
                     'width': 600,
                     'grid': {'rows': 2, 'columns': 1},
                     'legend': {'font': {'size': 10}}
                  }

         print('''
      <script>
         var ctx = document.getElementById("''' + div + '''-pie");
         Plotly.newPlot(ctx,
                        ''' + json.dumps(traces) + ''',
                        ''' + json.dumps(layout) + '''
);
      </script>
''')
   elif len(data_labels) == 2:
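      # 2 columns (one dimension, 1 measure): a simple bar chart plus its pie chart;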
      trace = [
                 {
                    'name': data_labels[1],
                    'type': "bar",
                    'x': [point[data_labels[0]] for i, point in enumerate(data)],
                    'y': [point[data_labels[1]] for i, point in enumerate(data)],
                    'width': 0.25,
                    'marker': {
                                 'color': '#009933',
                                 'line': {
                                            'width': 1.0
                                         }
                              }
                 }
              ]
      layout = {
                  'title': '<b>' + graph_title + '</b>',
                  'legend': {'x': -.1, 'y': 1.2, 'font': {'size': 8}},
                  'font': {'size': 12},
                  'height': 600,
                  'width': 800,
                  'xaxis': {
                            'title': '<b>' + data_labels[0] + '</b>',
                            'titlefont': {'size': 10},
                            'tickangle': -45,
                            'tickfont': {'size': 8},
                            'zeroline': True,
                          },
                  'yaxis': {
                            'title': '<b>' + data_labels[1] + '</b>',
                            'titlefont': {'size': 10},
                            'zeroline': True,
                            'showline': True,
                            'tickfont': {'size': 8},
                            'showgrid': False
                          },
                  'barmode': "group",
                  'bargap': 0.15,
                  'bargroupgap': 0.5
               }
      interaction = {'responsive': True, 'scrollZoom': True, 'editable': True};
      print('''        
      <div style="width:1200px;height:600px;">
          <div id="''' + div + '''" style="width:75%; float:left;"></div> 
          <div id="''' + div + '''-pie" style="width:25%; float:left;"></div>
      </div>
      <script>
         var ctx = document.getElementById("''' + div + '''");
         Plotly.newPlot(ctx,
                        ''' + json.dumps(trace) + ''',
                        ''' + json.dumps(layout) + ''',
                        ''' + json.dumps(interaction) + ''');
      </script>
''')
      trace =  [
                  {
                     'name': data_labels[1],
                     'hole': .4,
                     'type': "pie",
                     'labels': [point[data_labels[0]] for i, point in enumerate(data)],
                     'values': [point[data_labels[1]] for i, point in enumerate(data)],
                     'domain': {
                                  'row': 0,
                                  'column': 0
                               },
                     'outsidetextfont': {'size': 8},
                     'insidetextfont': {'size': 8},
                     'legend': {'font': {'size': 8}},
                     'textfont': {'size': 8},
                     'font': {'size': 8},
                     'hoverinfo': 'label+percent+name',
                     'hoverlabel': {'font': {'size': 8}},
                     'textinfo': 'none'
                  },
               ]
      layout = {
                  'title': '<b>' + graph_title + '</b>',
                  'annotations': [
                                    {
                                       'font': {
                                                  'size': 8
                                               },
                                       'showarrow': False,
                                       'text': '<b>' + data_labels[1] + '</b>',
                                       'x': 0.5,
                                       'y': 0.5
                                    },
                                 ],
                  'height': 600,
                  'width': 600,
                  'grid': {'rows': 1, 'columns': 1},
                  'legend': {'font': {'size': 10}}
               }
      print('''        
      <script>
         var ctx = document.getElementById("''' + div + '''-pie");
         Plotly.newPlot(ctx,
                        ''' + json.dumps(trace) + ''',
                        ''' + json.dumps(layout) + ''');
      </script>
''')
   else:
      print("illegal data_label value: " + repr(data_label))

   # closes the html page;
   if "e" == mode:
      print('''
   </body>
</html>
''')

   # restores default output stream;
   if None != html_output:
      sys.stdout = sys.__stdout__

def cumulAndExtend2():
   """
   for 2 variables in resultset;
   """
   global rawdata
   sum_count = 0
   sum_size = 0
   new_month = str(datetime.now().year) + "/" + str(datetime.now().month).zfill(2)   # zero-padded to match datetostring's 'yyyy/mm';
   for ind, point in enumerate(rawdata):
      sum_count += int(point['count(*)'])
      point['count(*)'] = sum_count
      sum_size += int(point['tot_size'])
      point['tot_size'] = sum_size
   if rawdata[ind]['month'] < new_month:
      new_point = dict(rawdata[ind])
      new_point['month'] = new_month
      rawdata.append(new_point)
   DctmAPI.show(rawdata)

def cumulAndExtend3(key):
   """
   for 3 variables in resultset;
   """
   global rawdata
   prec_doc = ""
   sum_count = 0
   sum_size = 0
   new_month = str(datetime.now().year) + "/" + str(datetime.now().month).zfill(2)   # zero-padded to match datetostring's 'yyyy/mm';
   # a while loop is used because points are inserted into rawdata along the way;
   ind = 0
   while ind < len(rawdata):
      point = rawdata[ind]
      if point[key] != prec_doc:
         # a new document type starts here; duplicate the preceding type's last point into the
         # current month so the line graph shows a flat line and not a simple dot when there is
         # a unique point for the document type;
         if prec_doc and rawdata[ind - 1]['month'] < new_month:
            new_point = dict(rawdata[ind - 1])
            new_point['month'] = new_month
            rawdata.insert(ind, new_point)
            ind += 1
            point = rawdata[ind]
         prec_doc = point[key]
         sum_count = 0
         sum_size = 0
      # cumulate the counts and sizes within the current document type;
      sum_count += int(point['count(*)'])
      point['count(*)'] = sum_count
      sum_size += int(point['tot_size'])
      point['tot_size'] = sum_size
      ind += 1
   # extend the last document type's series to the current month too;
   if rawdata and rawdata[-1]['month'] < new_month:
      new_point = dict(rawdata[-1])
      new_point['month'] = new_month
      rawdata.append(new_point)
   DctmAPI.show(rawdata)

# -----------------
# main;
if __name__ == "__main__":
   DctmAPI.logLevel = 0
 
   # parse the command-line parameters;
   # old-style for I don't need more flexibility here;
   repository = None
   output_file = None
   try:
      (opts, args) = getopt.getopt(sys.argv[1:], "hd:o:", ["help", "docbase=", "output="])
   except getopt.GetoptError:
      print("Illegal option")
      print("./graph-stats.py -h|--help | -d|--docbase <docbase>{,<docbase>} [-o|--output_file <output_file>]")
      sys.exit(1)
   for opt, arg in opts:
      if opt in ("-h", "--help"):
         Usage()
         sys.exit()
      elif opt in ("-d", "--docbase"):
         repository = arg
      elif opt in ("-o", "--output"):
         output_file = arg
   if None == repository:
      print("at least one repository must be specified")
      Usage()
      sys.exit()
   DctmAPI.show("Will connect to docbase(s): " + repository + " and output to " + ("stdout" if "stdout" == output_file else "one single file " + output_file if output_file is not None else "one file per docbase"))
 
   # needed to locally import the js library Plotly;
   server = "192.168.56.10:8000"

   docbase_done = set()
   status = DctmAPI.dmInit()
   for pointer, docbase in enumerate(repository.split(",")):
      # graphe_1;
      if docbase in docbase_done:
         print("Warning: docbase {:s} was already processed and won't be again, skipping ...".format(docbase))
         continue
      docbase_done.add(docbase)
      session = DctmAPI.connect(docbase = docbase, user_name = "dmadmin", password = "dmadmin")
      if session is None:
         print("no session opened, exiting ...")
         exit(1)

      page_title = "Graphical Overview of Repository " + docbase
      if None == output_file:
         html_output = "./graphs/" + docbase + "-" + datetime.today().strftime("%Y%m%d-%H%M%S") + ".html"
      elif "stdout" == output_file:
         html_output = None
      else:
         html_output = output_file

      graph_name = "graphe_1"
      DctmAPI.show(graph_name, beg_sep = True)
      stmt = """select r_object_type, count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by r_object_type order by 1"""
      rawdata = []
      attr_name = []
      status = DctmAPI.select2dict(session, stmt, rawdata, attr_name)
      DctmAPI.show(rawdata)
      if not status:
         print("select [" + stmt + "] was not successful")
         exit(1)
      Plot2HTML(div = graph_name,
                graph_title = "Q1. Present Count & Size Per Document Type",
                data = rawdata,
                data_labels = attr_name,
                mode = ("b" if (("stdout" == output_file or output_file is not None) and 0 == pointer) or output_file is None else None))

      # graphe_2;
      graph_name = "graphe_2"
      DctmAPI.show(graph_name, beg_sep = True)
      stmt = """select r_object_type, datetostring(r_creation_date, 'yyyy/mm') as "month", count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by r_object_type, datetostring(r_creation_date, 'yyyy/mm') order by 1, 2"""
      rawdata = []
      attr_name = []
      status = DctmAPI.select2dict(session, stmt, rawdata, attr_name)
      DctmAPI.show(rawdata)
      if not status:
         print("select [" + stmt + "] was not successful")
         exit(1)
      Plot2HTML(div = graph_name,
                graph_title = "Q2. Monthly New Documents per Type",
                data = rawdata,
                data_labels = attr_name)
      cumulAndExtend3('r_object_type')
      Plot2HTML(div = graph_name + "l",
                graph_title = "Q2l. Cumulated Monthly Documents per Type",
                data = rawdata,
                data_labels = attr_name,
                bLineGraph = True)
   
      # graphe_3;
      graph_name = "graphe_3"
      DctmAPI.show(graph_name, beg_sep = True)
      stmt = """select r_object_type, datetostring(r_modify_date, 'yyyy/mm') as "month", count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) where r_creation_date < r_modify_date group by r_object_type, datetostring(r_modify_date, 'yyyy/mm') order by 1, 2"""
      rawdata = []
      attr_name = []
      status = DctmAPI.select2dict(session, stmt, rawdata, attr_name)
      DctmAPI.show(rawdata)
      if not status:
         print("select [" + stmt + "] was not successful")
         exit(1)
      Plot2HTML(div = graph_name,
                graph_title = "Q3. Monthly Modified Documents per Type",
                data = rawdata,
                data_labels = attr_name)
      cumulAndExtend3('r_object_type')
      Plot2HTML(div = graph_name + "l",
                graph_title = "Q3l. Cumulated Monthly Modified Documents per Type",
                data = rawdata,
                data_labels = attr_name,
                bLineGraph = True)

      # graphe_4;
      graph_name = "graphe_4"
      DctmAPI.show(graph_name, beg_sep = True)
      stmt = """select datetostring(r_creation_date, 'yyyy/mm') as "month", count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by datetostring(r_creation_date, 'yyyy/mm') order by 1"""
      rawdata = []
      attr_name = []
      status = DctmAPI.select2dict(session, stmt, rawdata, attr_name)
      DctmAPI.show(rawdata)
      if not status:
         print("select [" + stmt + "] was not successful")
         exit(1)
      Plot2HTML(div = graph_name,
                graph_title = "Q4. Monthly New Documents",
                data = rawdata,
                data_labels = attr_name )
      cumulAndExtend2()
      Plot2HTML(div = graph_name + "l",
                graph_title = "Q4l. Cumulated Monthly Documents",
                data = rawdata,
                data_labels = attr_name,
                bLineGraph = True)

      # graphe_5;
      graph_name = "graphe_5"
      DctmAPI.show(graph_name, beg_sep = True)
      stmt = """select datetostring(r_modify_date, 'yyyy/mm') as "month", count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) where r_creation_date != r_modify_date group by datetostring(r_modify_date, 'yyyy/mm') order by 1;
        """
      rawdata = []
      attr_name = []
      status = DctmAPI.select2dict(session, stmt, rawdata, attr_name)
      DctmAPI.show(rawdata)
      if not status:
         print("select [" + stmt + "] was not successful")
         exit(1)
      Plot2HTML(div = graph_name,
                graph_title = "Q5. Monthly Modified Documents",
                data = rawdata,
                data_labels = attr_name)
      cumulAndExtend2()
      Plot2HTML(div = graph_name + "l",
                graph_title = "Q5l. Cumulated Monthly Modified Documents",
                data = rawdata,
                data_labels = attr_name,
                bLineGraph = True,
                mode = 'e')

      # graphe_6;
      graph_name = "graphe_6"
      DctmAPI.show(graph_name, beg_sep = True)
      stmt = """select a_content_type, count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by a_content_type order by 1"""
      rawdata = []
      attr_name = []
      status = DctmAPI.select2dict(session, stmt, rawdata, attr_name)
      DctmAPI.show(rawdata)
      if not status:
         print("select [" + stmt + "] was not successful")
         exit(1)
      Plot2HTML(div = graph_name,
                graph_title = "Q6. Count & Size Per Content Format",
                data = rawdata,
                data_labels = attr_name)

      # graphe_7;
      graph_name = "graphe_7"
      DctmAPI.show(graph_name, beg_sep = True)
      stmt = """select a_content_type, datetostring(r_creation_date, 'yyyy-mm') as "month", count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by a_content_type, datetostring(r_creation_date, 'yyyy-mm') having count(*) > 10 order by 1, 2"""
      rawdata = []
      attr_name = []
      status = DctmAPI.select2dict(session, stmt, rawdata, attr_name)
      DctmAPI.show(rawdata)
      if not status:
         print("select [" + stmt + "] was not successful")
         exit(1)
      Plot2HTML(div = graph_name,
                graph_title = "Q7. Monthly Created Documents Per Content Format with count(*) > 10",
                data = rawdata,
                data_labels = attr_name)
      cumulAndExtend3('a_content_type')
      DctmAPI.show(rawdata)
      Plot2HTML(div = graph_name + "l",
                graph_title = "Q7l. Cumulated Monthly Created Documents Per Content Format with count(*) > 10",
                data = rawdata,
                data_labels = attr_name,
                bLineGraph = True)

      # graphe_8;
      graph_name = "graphe_8"
      DctmAPI.show(graph_name, beg_sep = True)
      stmt = """select a_storage_type, count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by a_storage_type order by a_storage_type"""
      rawdata = []
      attr_name = []
      status = DctmAPI.select2dict(session, stmt, rawdata, attr_name)
      DctmAPI.show(rawdata)
      if not status:
         print("select [" + stmt + "] was not successful")
         exit(1)
      Plot2HTML(div = graph_name,
                graph_title = "Q8. Count & Size Per Filestore",
                data = rawdata,
                data_labels = attr_name)

      # graphe_9;
      graph_name = "graphe_9"
      DctmAPI.show(graph_name, beg_sep = True)
      stmt = """select r_object_type, a_storage_type, count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by r_object_type, a_storage_type order by r_object_type, a_storage_type"""
      rawdata = []
      attr_name = []
      status = DctmAPI.select2dict(session, stmt, rawdata, attr_name)
      DctmAPI.show(rawdata)
      if not status:
         print("select [" + stmt + "] was not successful")
         exit(1)
      Plot2HTML(div = graph_name,
                graph_title = "Q9. Count & Size Per Document Type & Filestore",
                data = rawdata,
                data_labels = attr_name)

      # graphe_10;
      graph_name = "graphe_10"
      DctmAPI.show(graph_name, beg_sep = True)
      stmt = """select a_content_type, a_storage_type, count(*), sum(r_full_content_size) as "tot_size" from dm_document(all) group by a_content_type, a_storage_type having count(*) > 10 order by a_content_type, a_storage_type"""
      rawdata = []
      attr_name = []
      status = DctmAPI.select2dict(session, stmt, rawdata, attr_name)
      DctmAPI.show(rawdata)
      if not status:
         print("select [" + stmt + "] was not successful")
         exit(1)
      Plot2HTML(div = graph_name,
                graph_title = "Q10. Count & Size Per Content Format & Filestore with count(*) > 10",
                data = rawdata,
                data_labels = attr_name)

      # graphe_11;
      graph_name = "graphe_11"
      DctmAPI.show(graph_name, beg_sep = True)
      stmt = """select owner_name, count(*) from dm_acl group by owner_name order by 2 desc"""
      rawdata = []
      attr_name = []
      status = DctmAPI.select2dict(session, stmt, rawdata, attr_name)
      DctmAPI.show(rawdata)
      if not status:
         print("select [" + stmt + "] was not successful")
         exit(1)
      Plot2HTML(div = graph_name,
                graph_title = "Q11. ACLs per owner",
                data = rawdata,
                data_labels = attr_name)

      # graphe_12;
      graph_name = "graphe_12"
      DctmAPI.show(graph_name, beg_sep = True)
      stmt = """select acl_name, count(*) from dm_document(all) where acl_name not like 'dm_%' group by acl_name having count(*) >= 10 order by 1"""
      rawdata = []
      attr_name = []
      status = DctmAPI.select2dict(session, stmt, rawdata, attr_name)
      DctmAPI.show(rawdata)
      if not status:
         print("select [" + stmt + "] was not successful")
         exit(1)
      Plot2HTML(div = graph_name,
                graph_title = "Q12. External ACLs in Use by >= 10 documents",
                data = rawdata,
                data_labels = attr_name)

      # graphe_13;
      graph_name = "graphe_13"
      DctmAPI.show(graph_name, beg_sep = True)
      stmt = """select name, count(*) from dmi_queue_item group by name order by 2 desc"""
      rawdata = []
      attr_name = []
      status = DctmAPI.select2dict(session, stmt, rawdata, attr_name)
      DctmAPI.show(rawdata)
      if not status:
         print("select [" + stmt + "] was not successful")
         exit(1)
      Plot2HTML(div = graph_name,
                graph_title = "Q13. Queue Items by Name",
                data = rawdata,
                data_labels = attr_name,
                mode = ('e' if (("stdout" == output_file or output_file is not None) and len(repository.split(",")) - 1 == pointer) or output_file is None else None))

      # disconnect from the current docbase before processing the next one;
      status = DctmAPI.disconnect(session)
      if not status:
         print("error while disconnecting")

A pdf of the script (sorry, uploading python scripts is not allowed on this site) is available here too: graph-stats.
Here is the DctmAPI.py module as a pdf too: DctmAPI.
The script invokes select2dict() from the module DctmAPI (as said before, the module is also presented and accessible here) with each query and receives the result back in an array of dictionaries, i.e. each entry of the array is one row from the query's resultset. It then goes on creating the HTML page by invoking the function Plot2HTML(). The js library is imported in the page header, which Plot2HTML() prints when called with mode "b". Each graph is plotted in its own DIV section by Plotly's js function newPlot(). All the required parameters are set in python dictionaries and passed as json literals to newPlot(); the conversion from python to javascript is conveniently done by the function dumps() from the json module. The HTML and js parts are produced by multi-line python print() statements.
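For reference, here is the shape of the structures select2dict() fills in, with illustrative values taken from the 4th query's output above:

# illustrative only: select2dict() fills one dict per row of the resultset, keyed by
# the column names listed in attr_name (values shown as strings here);
attr_name = ['month', 'count(*)', 'tot_size']
rawdata = [
   {'month': '2017/11', 'count(*)': '703', 'tot_size': '54142079'},
   {'month': '2017/12', 'count(*)': '120', 'tot_size': '323552'},
   # ... one entry per remaining month;
]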

Invoking the script

The script is invoked from the command-line through the following syntax:

./graph-stats.py -h|--help | -d|--docbase <docbase>{,<docbase>} [-o|--output_file <output_file>]

A Help option is available, which outputs the text below:

./graph-stats.py --help
Usage:
Connects as dmadmin/xxxx to a local repository and generates an HTML page containing the plots of several DQL queries result;
Usage:
       ./graph-stats.py -h|--help | -d|--docbase <docbase>{,<docbase>} [-o|--output_file <output_file>]
<docbase> can be one repository or a comma-separated list of repositories;
if <output_file> is omitted, the html page is output to ./graphs/<docbase>-$(date +"%Y%m%d-%H%M%S").html;
<output_file> can be "stdout", which is useful for CGI programming;
Example:
       ./graph-stats.py -d dmtest,mail_archive,doc_engineering
will query the docbases dmtest, mail_archive and doc_engineering and output the graphs to the files <docbase>-$(date +"%Y%m%d-%H%M%S").html, one file per docbase;
       ./graph-stats.py -d mail_archive,doc_engineering -o all_docbase_current_status
will query the docbases mail_archive and doc_engineering and output all the graphs to the unique file all_docbase_current_status;
       ./graph-stats.py -d mail-archive
will query docbase mail-archive and output the graphs to the file mail-archive-$(date +"%Y%m%d-%H%M%S").html;
       ./graph-stats.py --docbase dmtest --output dmtest.html
will query docbase dmtest and output the graphs to the file dmtest.html;
       ./graph-stats.py -d dmtest --output stdout
will query docbase dmtest and output the graphs to stdout;

As the account used is the trusted, password-less dmadmin one, the script must run locally on the repository server machine, but it is relatively easy to adapt it for remote docbases, with a password prompt, or without one if using public key authentication (and still password-less inside the remote docbases if connecting as dmadmin).
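As an illustration of the remote case, the script's stdout mode makes the collection trivial; the sketch below is hypothetical (the host name repo-server is made up) and assumes key-based ssh access as dmadmin:

# run graph-stats.py on the remote content server and save the produced page locally;
import subprocess

html = subprocess.run(["ssh", "dmadmin@repo-server",
                       "/home/dmadmin/graph-stats/graph-stats.py", "--docbase", "dmtest", "--output", "stdout"],
                      capture_output=True, text=True, check=True).stdout
with open("graphs/dmtest-remote.html", "w") as f:
   f.write(html)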

Examples of charts

Now that we’ve seen the bits and pieces, let’s generate an HTML page with the graphs of all the above queries.
Here is an example where the queries were run against an out-of-the-box repository named dmtest (all the graphs in one pdf file): dmtest.pdf
For more complex examples, just run the script against one of your docbases and see the results for yourself, as I can upload neither html files nor compressed files here.
As shown, some queries are charted using 2 to 4 graphs, the objective being to offer the clearest view. E.g. query 1 computes counts and total sizes; they are plotted on the same graph as grouped bars of 2 independent variables, and on two distinct pie charts, one for each variable. Query 2 shows the count(*) and the total sizes by document type as 2 stacked bar graphs, plus 2 line graphs of the cumulated monthly created documents by document type. The line graphs show the total number of documents created up to the end of each month, not the total number of documents existing at that time; to get those numbers (an historical view of query 1), one needs to query the docbase monthly and store the results somewhere. Query 1 returns this status, but only at the time it is run.
Discussing Plotly is out of the scope of this article but suffice it to say that it allows one to work the charts interactively, e.g. removing variables (aka traces in Plotly's parlance) so the remaining ones are re-scaled for better readability, zooming into parts of the charts, and scrolling the graphs horizontally or vertically. A nice tooltip is also available on hover to show the exact values of a point, e.g.:
[screenshot: the tooltip displayed on hover]
Plotly even lets one completely edit the chart from within a js application hosted at Plotly's site, which may not suit everyone if confidentiality is required; moreover, the edited graph can only be saved in a public area on their site, and subsequently downloaded from there, unless a per annum paying upgrade is subscribed to. Nonetheless, the free features are more than enough for most needs.

Leveraging Plot2HTML()

The script's Plot2HTML() function can be reused to plot data from any source, even from a text file, so that the query is separated from its result's graphical representation; this is useful when a direct access to the system to query is not possible. To do this, an easy to parse file format is necessary, e.g.:

# this is a commented line;
# line_no   variable1    variable2 .....
0          type of graph                                   # use B for bars, S for stacked bars, L for lines, P for pie or any combination of them (BL, LB, BP, PB, LP, PL, BPL);
                                                           # one graph for each combination will be put on the html page in a nx2 cell grid, i.e. graphs 1 and 2 horizontally and graph 3 below,
                                                           # in the same order as the graph type letters;
1          graph title                                     # can be on several lines separated by html <br> for sub-titles;
2          Xaxis_name Yaxis1_name Yaxis2_name              # names of the axes, one X and up to 2 Y;
3          Xaxis_values1 value1 value2                     # first line of (x,y) pairs or (x,y1,y2) triplets if a second axis was specified;
4          Xaxis_values2 value1 value2                     # second line of data;
...                                                        # the other lines;
n          Xaxis_valuesn valuen valuen                     # last line of data;

And that script could be invoked like this:

Usage:
       ./graph-it.py -h|--help | -f|--input_file <filename>{,<filename>} [-o|--output_file <output_file>]

The script itself just needs to parse and load the data from the input file(s) into an array of dictionaries and pass it to the plotting function Plot2HTML(). Of course, if the data is already in json format, the parsing step will be greatly simplified.
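Just to fix ideas, here is a rough, purely hypothetical sketch of the loading step (no error handling; the layout mirrors the format above and the resulting list of dicts matches what Plot2HTML() already expects):

# hypothetical loader: returns the graph type, title, axis names and data points;
def load_datafile(filename):
   graph_type, title, labels, data = None, None, [], []
   with open(filename) as f:
      for line in f:
         line = line.split("#", 1)[0].strip()   # drop trailing comments and blank lines;
         if not line:
            continue
         line_no, *fields = line.split()
         if "0" == line_no:
            graph_type = fields[0]
         elif "1" == line_no:
            title = " ".join(fields)
         elif "2" == line_no:
            labels = fields
         else:
            # data lines: pair each value with its axis name, like select2dict()'s output;
            data.append(dict(zip(labels, fields)))
   return graph_type, title, labels, data
# e.g.: gt, title, labels, data = load_datafile("q4.txt"); Plot2HTML("graphe_1", title, data, labels)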
Time and mood permitting, I’ll do it in a future article.

Useful tools

The color of the many charts' options (e.g. the 'marker' settings in Plot2HTML()) can be specified through an rgb triplet or an HTML color code. There are many color pickers on the web, e.g. this one. Here, I let Plotly choose the colors as it sees fit, hence the not so attractive results.
If you need to paste HTML code into an HTML page without it being interpreted by the browser, e.g. the code generated by graph-stats.py, some transformations are required, e.g. all the angle brackets need to be replaced by their HTML entity representations. If that code does not contain confidential information, an on-line service can be used for that, such as this one, but it shouldn't be too difficult to write a list of sed expressions that do the substitutions. E.g. this gawk one-liner takes care of the characters < and >:

gawk -c '{gsub(/>/, "\\&gt;"); gsub(/</, "\\&lt;"); print}' myfile.html

Other replacements can be added as the need arises.
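Incidentally, python's standard library can do this escaping in one call, which avoids maintaining the substitution list at all:

# html.escape() takes care of &, < and > (and of the quotes as well);
import html

with open("myfile.html") as f:
   print(html.escape(f.read()))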

Conclusion

All the above confirms yet again that it is possible to create cheap but useful custom tools with almost no installation, except here a textual javascript library. All the rest is already available on any administrator's decent system: command-line and python interpreters, and a browser, which is taking more and more importance as a tier thanks to the wide spreading of javascript libraries that execute locally and the ever increasing performance of javascript engines. By the way, chromium is highly recommended here for visualizing those complex charts from real production repositories.
As the source code is provided, it is possible to modify it in order to produce better looking graphs. This is the really arduous and time-consuming part in my humble opinion, as it requires some artistic inclination and much experimentation. If you find a nicer color scheme, please feel free to contribute it in the comments.

This article A Graphical Overview of a Repository appeared first on Blog dbi services.

Public Utilities Enhance Energy Efficiency Programs with Oracle

Oracle Press Releases - Mon, 2018-11-05 07:00
Press Release
Public Utilities Enhance Energy Efficiency Programs with Oracle

Redwood Shores, Calif.—Nov 5, 2018

Elizabethtown Gas and Riverside Public Utilities (RPU) join a growing list of utilities using Oracle Opower Home Energy Reports to educate customers on energy efficiency and related savings. With Oracle Opower, these utilities have the potential to increase savings and engagement for more than 179,000 households.

"We are proud to be working with Riverside Public Utilities and Elizabethtown Gas to enhance their energy efficiency education and drive increased efficiency solutions amongst their customers," said Rodger Smith, SVP and general manager of Oracle Utilities. "Our behavioral Energy Efficiency programs have contributed significant bill savings for consumers, and we continue to innovate to bring our utility customers the best in home energy management solutions."

Since Elizabethtown Gas launched the program in December 2017, participating customer savings continue to trend upward. The utility also implemented Oracle's energy efficiency web portal to provide online tools including bill comparison insights and an online audit.

RPU selected Opower Home Energy Reports to help improve customer experience, increase energy efficiency savings, and drive more digital engagement. The implementation of the Home Energy Reports represents an expanding relationship between RPU and Oracle Utilities. The utility currently utilizes Oracle’s Work and Asset Management solution to manage asset lifecycles, streamline maintenance operations and improve regulatory compliance.

"Working with Oracle, we are now able to deliver new, cutting-edge tools that allow our customers to gain a better understanding of their energy use at home and make educated decisions regarding their electric bill," said RPU account manager, Ryan McManus.

Ranked number one in Navigant Research's Leaderboard report on Home Energy Management systems, Oracle Opower Home Energy Reports provide residential customers personalized feedback and actionable insights about their energy consumption to help motivate them to lower their usage.

Oracle Utilities recently announced that its Opower Home Energy Reports and Energy Efficiency programs achieved a total of $2 billion in utility bill savings for consumers over the past decade and have reached more than 20 TWh in energy savings. The reports include a neighbor comparison, energy usage trends, seasonal disaggregation, and personalized energy saving tips that are powered by an intelligent recommendation engine.

Contact Info
Valerie Beaudett
Oracle Corporation
+1 650.400.7833
valerie.beaudett@oracle.com
Wendy Wang
H&K Strategies
+1 979.216.8157
wendy.wang@hkstrategies.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Valerie Beaudett

  • +1 650.400.7833

Wendy Wang

  • +1 979.216.8157

FIRST_ROWS_10 CBO Is Hopeless, It’s Using The Wrong Index !! (Weeping Wall)

Richard Foote - Mon, 2018-11-05 05:45
There’s an organisation I had been dealing with on and off over the years who were having all sorts of issues with their Siebel System and who were totally convinced their performance issues were due directly to being forced to use the FIRST_ROWS_10 optimizer. I’ve attempted on a number of occasions to explain that their […]
Categories: DBA Blogs
