Feed aggregator

EBS 12.1 Data Masking Template Certified with EM 13c

Steven Chan - Thu, 2017-01-19 11:56
We're pleased to announce the certification of the E-Business Suite 12.1.3 Data Masking Template for the Data Masking Pack with Enterprise Manager Cloud Control 13c. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Cloud Control 13c to mask sensitive data in cloned Oracle E-Business Suite environments.  Due to data dependencies, scrambling E-Business Suite data is not a trivial task.  The data needs to be scrubbed in such a way that the application continues to function.

You may scramble data in E-Business Suite 12.1.3 cloned environments with EM13c using the My Oracle Support Note and template: 

What does masking do in an Oracle E-Business Suite environment?

Based upon the knowledge of the Oracle E-Business Suite architecture and sensitive columns, application data masking does the following:

  • De-identifies the data:  Scramble identifiers of individuals, also known as personally identifiable information (PII).  Examples include information such as name, account, address, location, and driver's license number.
  • Masks sensitive data:  Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns.  Examples include compensation, health and employment information.  
  • Maintains data validity:  Provide a fully functional application.
Categories: APPS Blogs

Oracle E-Business Suite: 250 Security Vulnerabilities Fixed in the Last Year

Oracle has fixed 250 security vulnerabilities in the Oracle E-Business Suite from January 2016 to January 2017.  Each of the past five Oracle Critical Patch Updates (CPUs) has included a double- or triple-digit number of fixes for Oracle E-Business Suite.  Almost all of these security vulnerabilities are exploitable in all versions of Oracle E-Business Suite, including 11i, 12.0, 12.1, and 12.2.  Many of the 250 security vulnerabilities fixed are high-risk vulnerabilities such as SQL injection, cross-site scripting (XSS), XML external entity attacks, and privilege escalation.

Unless your organization is applying the CPU patches immediately and has hardened the application, the Oracle E-Business Suite is extremely vulnerable and easily exploitable.  Significant defensive measures are required to protect Oracle E-Business Suite, especially installations with Internet-facing modules such as iSupplier, iStore, iRecruitment, and iSupport.  A key layer of defense is Integrigy’s web application firewall for Oracle E-Business Suite, AppDefend, which provides virtual patching for these security bugs as well as additional protection against generic web application attacks like SQL injection and cross-site scripting (XSS) and against common Oracle E-Business Suite security misconfigurations.

Oracle E-Business Suite, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Oracle E-Business Suite 11i - Critical Patch Updates Extended for Tier 1 Support

As of December 2016, Oracle has extended Critical Patch Update (CPU) support for Oracle E-Business Suite 11.5.10 until October 2017 for customers with additional-fee Tier 1 support/Advanced Contract Support (ACS) contracts.  Starting with the April 2016 Critical Patch Update (CPU), Oracle E-Business Suite 11.5.10 CPU patches are only available to customers with Tier 1/ACS support contracts.  See My Oracle Support Note ID 1596629.1 for more information.

Almost all security vulnerabilities discovered and patched in Oracle E-Business Suite 12.x are also present and exploitable in 11i.  A significant number of these security bugs are SQL injection bugs that allow an attacker to execute SQL as the Oracle E-Business Suite APPS database account.  Such attacks can easily compromise the entire application and database.  In the past year, Oracle has fixed 250 security vulnerabilities in Oracle E-Business Suite 11i and R12.

Oracle E-Business Suite 11i customers without Tier 1 support, as well as 12.0 customers, should take immediate defensive steps to protect Oracle E-Business Suite 11i, especially installations with Internet-facing modules such as iSupplier, iStore, iRecruitment, and iSupport.  A key layer of defense is Integrigy’s web application firewall for Oracle E-Business Suite, AppDefend, which provides virtual patching for these security bugs as well as additional protection against generic web application attacks like SQL injection and cross-site scripting (XSS) and against common Oracle E-Business Suite security misconfigurations.

Oracle E-Business Suite, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

vagrant up – get your Oracle infrastructure up and running

Yann Neuhaus - Thu, 2017-01-19 10:55

By using Vagrant to manage your virtual machines and Ansible for configuration management and provisioning,
you can easily automate the setup of your whole test environment in a standardized way.

If you have never heard of Ansible and Vagrant, I will try to give you an idea with the following very short summary. There is a lot of good information available about Ansible and Vagrant;
please check the links provided at the end of this blog for further information.

What is Vagrant?

Vagrant is an open source tool that sits on top of a virtualization solution such as Oracle VirtualBox and automates the creation of VMs. Additionally, Vagrant supports provisioning with scripts or with tools like Ansible, Puppet or Chef.
You can download a lot of useful boxes from here: https://atlas.hashicorp.com/boxes/search
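
To get a feel for how little is needed, here is a minimal sketch of creating and booting a VM from a public box (the centos/7 box name is only an example; any box from the catalogue above works the same way):

vagrant init centos/7     # writes a basic Vagrantfile into the current directory
vagrant up                # downloads the box on first use, then creates and boots the VM in VirtualBox
vagrant ssh               # opens a shell inside the running VM
vagrant destroy -f        # throws the VM away again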

What is Ansible?

Ansible is an open source automation platform.
It is a radically simple IT automation engine designed for multi-tier deployments. [https://www.ansible.com/how-ansible-works]

Ansible just uses ssh and does not require agents or other software to be installed on the target nodes. You simply put your steps into an Ansible playbook, which is an easy-to-read text file written in YAML syntax. Your playbook will simply look like documented steps.
Ansible will run the tasks listed in the playbook on the target servers by invoking Ansible modules.

Here is a simple example task from a playbook which adds a directory on a server. It uses the Ansible module “file”:
- name: add home directory
  file:
    path: /home/myuser
    state: directory
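
To actually run a playbook containing a task like this, you point ansible-playbook at an inventory of target hosts. A minimal sketch, assuming the playbook is saved as site.yml and the target hosts are listed in inventory.ini (both file names are just examples):

ansible-playbook -i inventory.ini site.yml --check    # dry run: only reports what would change
ansible-playbook -i inventory.ini site.yml            # runs the tasks over ssh and applies the changes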

Ansible is quite well known for building up whole test environments including databases like MySQL, which are easy to install from simple tarballs or rpm files.

Unfortunately, in the community of Oracle DBAs Ansible is usually not on the radar, despite the fact that there are already good Ansible playbooks available which prove that you can use Ansible to provision your whole Oracle test environment, even with Oracle Real Application Clusters:
https://github.com/racattack/racattack-ansible-oracle

https://github.com/cvezalis/oracledb-ansible

Starting from these examples and adapting them to your needs, you will experience how quickly you will be able to automate your Oracle installations. This is what I did and want to show you here. Please keep in mind that this example is optimized for a fast installation and should not be used as-is for a productive system.

What you’ll get
In this blog I give you an example of how to build an Oracle infrastructure from scratch containing
two virtual servers, installed and configured with CentOS 7.2,
each hosting an Oracle database (12.1.0.2).

  • Step ONE – What you need to prepare once to run this example
      1) the Ansible Playbook and Vagrant configuration for this example
      you can download everything from the git repository. All files are simple text files.
      https://github.com/nkadbi/oracle-db-12c-vagrant-ansible
      2) the Oracle 12.1.0.2 binaries
      the Oracle binaries are not included in the download. You have to provide them.
      Please copy the Oracle software zip files into the directory oracle-db-12c-vagrant-ansible/
      ./linuxamd64_12102_database_1of2.zip
      ./linuxamd64_12102_database_2of2.zip

      3) your Linux host or laptop
      with a network connection and Oracle VirtualBox, Vagrant and Ansible installed.
      This can be done with your Linux package manager.
      You will need Ansible version 2.1.1.0 or higher for this example! (A quick way to verify the installed versions is sketched after this list.)
      Please check http://docs.ansible.com/ansible/intro_installation.html for installation details.
      sudo yum install ansible
      You can find the Oracle VirtualBox Download and Installation Guide here:
      https://www.virtualbox.org/wiki/Linux_Downloads
      Download Vagrant with version 1.8.5 or higher from
      https://www.vagrantup.com/downloads.html
      Also install the vagrant hostmanager plugin:
      $ vagrant plugin install vagrant-hostmanager
  • Step TWO – Run it
      Now you are ready to start the whole setup, which will create two virtual servers and Oracle databases.
      On my laptop with SSD disks and 16 GB RAM this takes about 20 minutes.
      To run this example you will need at least 8 GB of RAM and 10 GB of free disk space.
      Go to the directory where you have downloaded this example. Everything will be started from here.
      cd oracle-db-12c-vagrant-ansible
      vagrant up
  • Of course you do not want to start this without knowing what is going on.
    I will therefore go a little bit into the details next week ….
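
    Before running “vagrant up” you can quickly verify that all prerequisites are in place and recent enough (a minimal sketch; these are all standard commands of the respective tools):

    ansible --version        # should report 2.1.1.0 or higher
    vagrant --version        # should report 1.8.5 or higher
    vagrant plugin list      # should include vagrant-hostmanager
    VBoxManage --version     # confirms that Oracle VirtualBox is installed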

    Further information about Ansible:
    There will be some Introduction Webinars for Ansible coming soon
    https://www.ansible.com/webinars-training

    You can find more examples at:
    http://galaxy.ansible.com
    https://github.com/ansible/ansible-examples
    https://groups.google.com/forum/#!forum/ansible-project
    If you want to read a book I can recommend this:
    Ansible: Up and Running
    Print ISBN: 978-1-4919-1532-5
    Ebook ISBN: 978-1-4919-1529-5

    https://www.ansible.com/ebooks

     

    The post vagrant up – get your Oracle infrastructure up and running appeared first on Blog dbi services.

    Details about this incident are available via debug id....

    Joel Kallman - Thu, 2017-01-19 08:07


    In Oracle Application Express, unexpected exceptions can occasionally occur.  Instead of communicating the technical details to an end user who is running the application, an error message will be issued of the form:

    Contact your administrator. Details about this incident are available via debug id "NN".

    But what actually happened?  Who is "your administrator"?  And what exactly are you supposed to do with this information?

    What happened?
    An internal exception occurred - it either wasn't properly handled by the application's error handler (or there was no handler in place), or an unforeseen exception occurred in the Application Express engine itself which wasn't properly caught by APEX's own error handler.  If you're logged into the development environment as a developer and running the application, you should be able to see the full error stack and more details of what actually occurred.  But if you're a mere end user of the application, you are shielded from those details - not only would the details be mystifying to an end user, but showing them could also be viewed as a security risk.


    Who is "your administrator"?
    • A developer of this application, who has access to SQL Commands in the APEX workspace?
    • A developer who can connect directly to the database where this error occurred, connecting as the database user associated with the workspace (commonly referred as the "workspace schema").
    • A database user who has been granted the APEX_ADMINISTRATOR_ROLE (or, beginning in APEX 5.1, also the APEX_ADMINISTRATOR_READ_ROLE).
    • Someone who can connect as SYS or SYSTEM to your database.

    What can you do with this information?
    The full details of the incident and the error stack at the time of the exception are written to the Debug Messages tables in the Application Express schema.  To view these details, you simply need to query the view APEX_DEBUG_MESSAGES:

    select *
    from apex_debug_messages
    where page_view_id = NN
    order by message_timestamp asc;

    This information is also accessible from the development environment of Application Express.
    • A developer can select their application in App Builder, and then navigate to Utilities -> Debug Messages.
    • A workspace administrator can navigate to Workspace Administration -> Monitor Activity -> Page Views By View.  You can then filter the output by the Debug ID column.

    Oracle Marketing Cloud Teams with Eyeota to Enhance Global Data Offering

    Oracle Press Releases - Thu, 2017-01-19 07:00
    Press Release
    Oracle Marketing Cloud Teams with Eyeota to Enhance Global Data Offering
    Eyeota data helps Oracle Data Management Platform customers improve ROI and strengthen customer experience across global campaigns

    Redwood Shores, Calif.—Jan 19, 2017

    Oracle Marketing Cloud today announced it is teaming with Eyeota to enable marketers and advertisers to use Eyeota data in the Oracle Data Management Platform to more intelligently target and personalize non-US campaigns to Eyeota’s three billion unique profiles in Europe, APAC, and the Americas. Eyeota data will be integrated into the Oracle Data Management Platform through the Oracle Data Cloud, and Oracle Data Management Platform users can build audiences directly from Eyeota data or by seamlessly combining it with additional data segments from Oracle’s BlueKai Marketplace.

    Data has transformed marketing by giving marketers and advertisers the ability to enhance the customer experience and increase ROI. To extend the benefits of data-driven marketing strategies to international markets, advertisers need to be able to execute targeted, data-driven campaigns in all the markets they operate in, but this can often be extremely complex due to highly fragmented and linguistically diverse markets. The relationship between Oracle Marketing Cloud and Eyeota addresses this challenge by providing a new natively integrated data set that enables Eyeota data in areas such as socioeconomics, interest, intent and B2B to be activated within the Oracle Data Management Platform.

    “To help global and international brands improve key campaign and ROI metrics, Oracle is committed to working with the best data providers in strategic markets across the globe,” said Karen Kokiko, senior product marketing manager, Oracle Marketing Cloud. “Our data management platform is differentiated on features and capabilities, but it truly stands out in terms of the data it has natively integrated into the platform. Eyeota’s audiences reflect the company’s depth of data in the EMEA and APAC regions and this integration dramatically improves our international data offering, making the Oracle Data Management Platform an even more compelling platform.” 

    Eyeota is a leading provider of high-quality audience data that enables brands to more efficiently identify and engage customers by using more than three billion unique profiles to create audience and message consistency across marketing channels. With the new integration, Oracle Data Management Platform users can increase speed to market by selecting, purchasing, and managing Eyeota data from within the Oracle Data Management Platform UI.

    “In today’s market, brands understand the importance of knowing their audiences on an almost intimate level to help them deliver more personalized and relevant content, while enhancing the online experience,” said Kevin Tan, CEO, Eyeota. “Eyeota’s extensive global data segments provide a granular view of audiences to help them improve their outreach strategies. Our integration with the Oracle Data Management Platform offers brands deep insights to better understand their customers, improve audience engagement, and drive significant ROI.”

    The Oracle Data Management Platform is integrated with Oracle Responsys, Oracle Eloqua, and Oracle Social Relationship Management platforms to allow marketers to achieve immediate cross-channel benefits. The Oracle Data Management Platform also enables marketers to seamlessly integrate Eyeota data with additional data from Oracle’s BlueKai Marketplace, which includes more than 30,000 data attributes on two billion consumer profiles drawn from 1,500 data partners.

    The Oracle Marketing Cloud is part of the Oracle Customer Experience (CX) Cloud, one of the industry’s most complete CX solutions. Oracle CX Cloud empowers organizations to improve experiences, enhance loyalty, differentiate their brands, and drive measurable results by creating consistent, connected, and personalized brand experiences across all channels and devices.

    Contact Info
    Simon Jones
    PR for Oracle
    +1.415.856.5155
    sjones@blancandotus.com
    About Eyeota

    Eyeota is the global leader for audience data with 3 billion unique profiles in Europe, APAC and the Americas. The company provides marketers with the data they need to reach the right online audiences and cut campaign waste while also enabling publishers to monetize their audiences more widely. Eyeota is a certified member of the Network Advertising Initiative (NAI), the leading self-regulatory industry association dedicated to responsible data collection and its use for digital advertising, and has received consumer privacy certifications around the world. Eyeota data delivers deep audience insights to help brands understand their customers in a new way - as humans. For more information, visit http://www.eyeota.com/.

    About Oracle

    Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

    Trademarks

    Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

    Safe Harbor

    The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

    Talk to a Press Contact

    Simon Jones

    • +1.415.856.5155

    Oracle Buys Apiary

    Oracle Press Releases - Thu, 2017-01-19 07:00
    Press Release
    Oracle Buys Apiary
    Creates the Most Comprehensive API Integration Cloud by Adding the Leading API Design and Governance Solution

    Redwood Shores, Calif.—Jan 19, 2017

    Oracle today announced that it has signed an agreement to acquire Apiary, whose pioneering APIFlow solution provides the framework and tools for developing application programming interfaces (APIs) that share enterprise services and data and help create modern, cloud-based applications and experiences.

    Apiary has helped companies create hundreds of thousands of APIs and products that their customers and partners love to use. APIFlow spans the API creation lifecycle, including design, governance, testing, and documentation, while supporting API Blueprint and OpenAPI industry standards. Together, Oracle and Apiary will help companies thrive in the digital economy by comprehensively managing connectivity complexity and API proliferation.

    “Oracle’s API Integration Cloud enables companies to secure, consume, monetize, and analyze APIs,” said Amit Zavery, Senior Vice President, Integration Cloud, Oracle. “With Apiary, Oracle will also provide customers advanced capabilities to design and govern API’s, allowing companies to manage the entire API lifecycle and deliver integrated applications.”

    “Oracle customers will have unique access to a comprehensive API management platform providing control and increased agility, enabling them to focus on innovation,” said Jakub Nešetřil, Founder and CEO, Apiary. “We are excited to join Oracle and bring even more value to customers as part of Oracle’s Integration Cloud.”

    More information about this announcement is available at www.oracle.com/apiary

    Contact Info
    Deborah Hellinger
    Oracle Corporate Communications
    +1.212.508.7935
    deborah.hellinger@oracle.com
    Ken Bond
    Oracle Investor Relations
    +1.650.607.0349
    ken.bond@oracle.com
    About Oracle

    Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

    Trademarks

    Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

    Oracle is currently reviewing the existing Apiary product roadmap and will be providing guidance to customers in accordance with Oracle’s standard product communication policies. Any resulting features and timing of release of such features as determined by Oracle’s review of Apiary’s product roadmap are at the sole discretion of Oracle. All product roadmap information, whether communicated by Apiary or by Oracle, does not represent a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. It is intended for information purposes only, and may not be incorporated into any contract.

    Cautionary Statement Regarding Forward-Looking Statements
    This document contains certain forward-looking statements about Oracle and Apiary, including statements that involve risks and uncertainties concerning Oracle’s proposed acquisition of Apiary, anticipated customer benefits and general business outlook. When used in this document, the words “anticipates”, “can”, “will”, “look forward to”, “expected” and similar expressions and any other statements that are not historical facts are intended to identify those assertions as forward-looking statements. Any such statement may be influenced by a variety of factors, many of which are beyond the control of Oracle or Apiary, that could cause actual outcomes and results to be materially different from those projected, described, expressed or implied in this document due to a number of risks and uncertainties. Potential risks and uncertainties include, among others, the possibility that the transaction will not close or that the closing may be delayed, the anticipated synergies of the combined companies may not be achieved after closing, the combined operations may not be successfully integrated in a timely manner, if at all, general economic conditions in regions in which either company does business may deteriorate and/or Oracle or Apiary may be adversely affected by other economic, business, and/or competitive factors. Accordingly, no assurances can be given that any of the events anticipated by the forward-looking statements will transpire or occur, or if any of them do so, what impact they will have on the results of operations or financial condition of Oracle or Apiary. You are cautioned to not place undue reliance on forward-looking statements, which speak only as of the date of this document. Neither Oracle nor Apiary is under any duty to update any of the information in this document.

    Talk to a Press Contact

    Deborah Hellinger

    • +1.212.508.7935

    Ken Bond

    • +1.650.607.0349

    Time Series Visualisations: Kibana or Grafana?

    Rittman Mead Consulting - Thu, 2017-01-19 05:49

    Grafana has rapidly become one of the de-facto “DevOps” tools for real time monitoring dashboards of time series metrics. In addition to its powerful visualisations, Grafana is not tied to a particular stack or vendor, and supports multiple backend data sources including InfluxDB, Graphite, Elasticsearch and many others which can be added via plugins.

    Another similar tool, Kibana is the data visualisation front end for the Elastic Stack, complementing the rest of the stack which includes Beats, Logstash (ingest) and Elasticsearch itself (storage). With the version 5.x release of the Elastic Stack, Kibana now includes Timelion for interactive time series charts.

    Here at Rittman Mead we are big fans of both tools, and have written about them over the years (see 1, 2, 3). Our industry-leading Performance Analytics solution for OBIEE is built on top of these tools, and takes advantage of the time series features to provide interactive web-based dashboards presenting a “full stack” view of the important metrics relating to OBIEE's performance.

    To give you an idea of what we’ve built, here is a sample dashboard from our Performance Analytics tool. We use both Grafana and Kibana, to present different views of data. The dense dashboards of time series metrics work brilliantly in Grafana:

    To enable the user to view and analyse performance data across multiple dimensions we use Kibana, which does a sterling job:

    With the recent release of Timelion - a time series visualisation plugin for Kibana - out of beta and into the big time, we wanted to ensure we were still using the right tool for the right job. Did we still need Grafana in our stack for visualisation of time series metrics, or could Timelion fill that gap now, and enable us to streamline our platform’s toolset?

    In this article we’ll see how Timelion and Grafana stack up against each other. The intention is not to define which is “best” (a pointless exercise), nor to create an unintelligible grid of down-in-the-weeds features that each may or may not support, but to see how the two tools compare in real-world usage, side by side. Which makes it easier to build charts? Which produces a nicer-looking dashboard at the end of it? Which has the best UI and UX for the end user reading and analysing the data? What limitations, if any, are there on data sources and functionality in analysing that data? And ultimately, can we unify our product’s front end on a single one of these tools?

    Introduction to Timelion

    Since version 5 of Kibana, Timelion (pronounced "Timeline") has been included as part of the default installation. Charts are defined using a bespoke query language, which specifies the source of the data, the functions to apply to it, and how it is presented. The query is specified in a textbox in the Timelion interface. In this simple chart here we’re using the expression .es(*) to show the total number of documents in Elasticsearch, over time:

    Every Timelion expression starts with a data source function and continues with a chain of functions that are connected with a dot. Over 20 functions are provided, across three groups:

    1. Data sources - the default is Elasticsearch, and other APIs such as World Bank and Quandl are also available.
      For example in the graph above, the default expression .es(*) (similar to .elasticsearch(*)) shows a count of all documents in Elasticsearch. You can specify details of the Elasticsearch index, mappings and metrics here too, as well as filters.

    2. Data manipulations ranging from simple arithmetic to moving averages, cumulative sums and derivatives
      For example, adding a moving average to the data is as simple as including the function to the end of the expression: .es(*).movingaverage(12)

    3. Themes and styles of the visual elements including bar/point/lines, labels, title and legends. The graph below shows the number of running queries by time extracted from the active session history data in the Oracle database. .es(index=ash*).lines(1,fill=1).title('Running Queries').legend(none).label(false)

    With regard to the available documentation and guides for developers, the main Timelion documentation is somewhat sparse; for details of each function you can refer to the documentation on github. Compared to the rest of the excellent Elastic documentation this is surprising, and hopefully, now that Timelion is part of the core product, its documentation will be brought up to parity - full explanations of features and functions along with examples of usage.

    On the positive side, the query builder text box supports auto-complete of functions and their arguments, and the Timelion interface provides online help too. A downside of this minimalist Timelion page is the size of the expression textbox. As you will see later in this post, it doesn't take long before you need to add more than one metric and a few styles to a visualisation, at which point the expression becomes too long to be viewed, scrolled and edited easily in the textbox:

    If you are a beginner, to avoid confusion over typos and errors, try building the expressions step by step, adding functions gradually. The blog post here nicely explains how to build up Timelion expressions gradually.

    Of special note in the data manipulation functions that Timelion provides are the statistical analysis ones:

    • .trend() : add a trendline using a specified regression algorithm to your graph
    • .holt(): an early version of this function, which samples the beginning of a series and uses it to forecast what should happen, via several optional parameters.

    These are useful for our performance monitoring dashboards, enabling us to show things such as the point at which you would run out of memory/disk space if you continued to consume resources at your current rate.

    Related to this concept is Prelert, which Elastic acquired last year and which is expected to be part of a future X-Pack release. Whilst dashboard-based analysis is useful, once a clear pattern on which we want to alert has been identified, we can bring in Watcher to provide real-time notifications to pager systems etc.

    Introduction to Grafana

    Grafana is an open source, feature-rich dashboard and graph editor that is rapidly becoming accepted as one of the best time-series metric visualisation tools available. Grafana has gained its popularity thanks to its simplicity, ease of use and a snazzy look and feel that attracts many users. You can read more about Grafana in an earlier article that we wrote on the Rittman Mead blog here. Here is the kind of dashboard you can easily build with Grafana:

    Most of the configurations in Grafana are done via a comprehensive graph editor interface:

    In the Grafana editor, queries are generally built entirely through the GUI. Manually specified queries are used for accessing advanced functionality, and for specifying Lucene queries in order to access data held in Elasticsearch. In terms of support for Elasticsearch, the latest version of Grafana at the time of writing this post (v4.1.1) supports both Elasticsearch v2 and v5. From my time spent working with Grafana 4.1.1 and Elasticsearch v5, I haven’t found it to be as stable as with the long-standing data sources such as InfluxDB and Graphite (or even Elasticsearch v2). As an example, if a chart is configured incorrectly (for example, the settings for null values), Grafana is not as intuitive about returning no results or throwing a descriptive error explaining the issue; instead, the graph seems locked and the only possible solution for this behaviour seems to be deleting the chart and recreating it from scratch.

    As well as data sources, the graph editor includes settings covering display styles such as titles, templates, axes and legends.

    An interesting new addition to the Grafana family is the alerting engine, which allows users to attach rules to the dashboard panels. Once dashboards are saved, Grafana will extract the alert rules into a separate alert rule storage and schedule them for evaluation.

    Side-by-Side : Presenting the Data

    On the face of it, the output from Grafana and Timelion can be remarkably similar:

    Grafana is on the right and Timelion on the left

    However, there are a few differences between the two tools that are worth digging into here. They mainly concern the display configuration options and the simplicity of the user experience.

    As mentioned, Grafana’s chart editor has a clear interface over the multitude of options available for refining the presentation of the data.

    Timelion also supports chart formatting, but with fewer options than Grafana, and it depends on the user concatenating the correct functions onto the data query expression, as we saw above. For example, to add a graph that has a “Running Queries” title, a legend in the top right of the plot, unlabelled axes and data shown with a 1px-wide line, you would need to hand-code this expression: .lines(1,fill=1).title('Running Queries').legend(ne).label(false)

    Grafana offers significantly greater flexibility in the formatting of the chart. One example is displaying metrics of different units such as time, currency and data. Grafana can automatically scale axes based on the units (bytes -> MB -> GB). The following Grafana graph shows disk usage from our monitored application stored in Elasticsearch. The disk usage metric on the Y axis is in Kilobytes, which Grafana has automagically scaled to the appropriate magnitude (MiB) in the labelling:

    The same could be done manually in Timelion by specifying the appropriate conversion, but this is a hardcoded option compared to Grafana’s dynamic one, and even then it wouldn’t have the varying labelling that Grafana shows above (KiB initially, switching to MiB subsequently).

    Grafana also supports the rendering of negative values on the Y axis, which is just not possible in Timelion. As well as genuinely negative data values (for example, temperature recordings below zero degrees), using the transform feature of Grafana it is possible to invert particular series so as to aid the comprehension of the data, as seen here:

    Another nice feature that Grafana has - and unfortunately Timelion doesn’t - is the ability to show metric values in the legend itself. It’s a great way to see key values at a glance, without requiring a separate table or the user to hover over the data points.

    Side-by-Side : Interacting with the Data

    Grafana and Kibana are also different in terms of the level and ease with which it is possible to interact with the charted data. Both Kibana and Grafana support the drag-select of time periods on a chart to zoom into detail, with the rest of the charts on the same dashboard updating to show the same time period too. However, Kibana is much more feature-rich in this area. As a front end to Elasticsearch it supports ad-hoc text search of your data. It also allows users to automatically drill down into data, by clicking on a value in a chart to show details just for that. In the OBIEE monitoring dashboard below (built in Kibana), Active Session History data is filtered for the session_states in “Waiting” and “On CPU” - this filter was created by the user simply by clicking on the data points in one of the charts, and can be toggled dynamically from the same interface.

    This interactivity is supported by Timelion too. The es() datasource function includes an argument called “kibana”. This argument defines whether the visualisation should follow the filters applied to the rest of the Kibana dashboard or not, for example:
    .es(index=dms_*,metric='avg:obips1-Current_Disk_Usage',fit='nearest',kibana='true')

    Whilst it is possible to specify Elasticsearch Lucene queries in Grafana and use term filters in the editor, these are local to the graph. With some use of variables it can be possible to enable a degree of global filtering on a single Grafana dashboard but this is a bespoke solution per-dashboard, rather than the out-of-the-box functionality that Kibana provides.

    Grafana does enable you to toggle the display of data in a chart, by clicking on the measure label in the legend, seen above.

    Conclusion

    Comparing Kibana and Timelion to Grafana, it is true that they do a similar job displaying time series metrics - with pros and cons on each side.

    Grafana’s graph editor offers an amazing interface with regard to the options available for refining the presentation of the data. Grafana is not only a straightforward development tool but also adds a huge amount of value to the resulting dashboards, making them easier to read and analyse by the end users.

    On the other hand, Timelion is just one of many visualisations that Kibana provides (including Tile Map and Tag Cloud), meaning that dashboards can be built which are less dense with numbers and time series, but where information is shown through a variety of visualisations. Unfortunately, Timelion and its expression editor in their current version seem slightly immature and relatively limited. A few more display options plus a nicer editor would put Timelion in a better position in the comparison.

    So, for now, we’ll be sticking with our dual approach of both Grafana and Kibana. Grafana provides our pure time-series metric dashboards, with the ease of building being one of the key factors, along with the rich formatting capabilities and its support for data sources other than Elasticsearch. Kibana does an unbeatable job of providing dashboards that enable rich exploration of metrics across dimensions, rendered in a greater number of possible visualisation forms. Timelion is a great first step, but ultimately it just can’t compete with Grafana.

    This is a fast-moving area of tool development, and you can bet that Grafana and Kibana are going to continue developing at a rate of knots - which as users and developers is great news!

    Categories: BI & Warehousing

    Oracle VAR_SAMP Function with Examples

    Complete IT Professional - Thu, 2017-01-19 05:00
    In this article, I’ll explain the Oracle VAR_SAMP function and show you some examples. Purpose of the Oracle VAR_SAMP Function The VAR_SAMP function returns the sample variance of a set of numbers, after excluding NULLs. What does this mean? The sample variance measures the variation of a sample of numbers from the overall set. […]
    Categories: Development

    From PostgreSQL 9.1.8 to PostgreSQL 9.5.5 including a standby database with minimal downtime

    Yann Neuhaus - Thu, 2017-01-19 03:29

    As you might know, PostgreSQL 9.1 has been out of support since last September. But, no surprise, there are still plenty of PostgreSQL 9.1 instances out there which need to be upgraded to a supported release. At one of our customers we have exactly this situation: the production environment is running on PostgreSQL 9.1.8 and needs to be upgraded to a recent release (9.5.5 in this case). To make life a little bit more fun there are additional constraints: this is a PostgreSQL streaming replication configuration, so there is a standby database involved; the allowed downtime is 30 minutes; and there is space pressure on the current systems, which also need to be reused. In this post we’ll look at how you can do the upgrade with minimal downtime (without using logical replication).

    First, let's build the test environment. We need two systems, one for the master instance (192.168.22.32) and one for the standby (192.168.22.33). On both of these systems we’ll need PostgreSQL 9.1.8 installed, so:

    postgres@debian6pg:~$ wget https://ftp.postgresql.org/pub/source/v9.1.8/postgresql-9.1.8.tar.bz2
    postgres@debian6pg:~$ tar -axf postgresql-9.1.8.tar.bz2
    postgres@debian6pg:~$ cd postgresql-9.1.8/
    postgres@debian6pg:~/postgresql-9.1.8$ PGHOME=/u01/app/postgres/product/91/db_8
    postgres@debian6pg:~/postgresql-9.1.8$ SEGSIZE=2
    postgres@debian6pg:~/postgresql-9.1.8$ BLOCKSIZE=8
    postgres@debian6pg:~/postgresql-9.1.8$ WALSEGSIZE=16
    postgres@debian6pg:~/postgresql-9.1.8$ ./configure --prefix=${PGHOME} \
                                                       --exec-prefix=${PGHOME} \
                                                       --bindir=${PGHOME}/bin \
                                                       --libdir=${PGHOME}/lib \
                                                       --sysconfdir=${PGHOME}/etc \
                                                       --includedir=${PGHOME}/include \
                                                       --datarootdir=${PGHOME}/share \
                                                       --datadir=${PGHOME}/share \
                                                       --with-pgport=5432 \
                                                       --with-perl \
                                                       --with-python \
                                                       --with-tcl \
                                                       --with-openssl \
                                                       --with-pam \
                                                       --with-ldap \
                                                       --with-libxml \
                                                       --with-libxslt \
                                                       --with-segsize=${SEGSIZE} \
                                                       --with-blocksize=${BLOCKSIZE} \
                                                       --with-wal-segsize=${WALSEGSIZE}
    postgres@debian6pg:~/postgresql-9.1.8$ make world
    postgres@debian6pg:~/postgresql-9.1.8$ make install
    postgres@debian6pg:~/postgresql-9.1.8$ cd contrib
    postgres@debian6pg:~/postgresql-9.1.8/contrib$ make install
    postgres@debian6pg:~/postgresql-9.1.8/contrib$ cd ../..
    postgres@debian6pg:~$ rm -rf postgresql-9.1.8*
    

    Once this is available on both nodes we can initialize our master instance:

    postgres@debian6pg:~$ /u01/app/postgres/product/91/db_8/bin/initdb -D /u02/pgdata/testmig -X /u03/pgdata/testmig
    The files belonging to this database system will be owned by user "postgres".
    This user must also own the server process.
    
    The database cluster will be initialized with locale en_US.UTF-8.
    The default database encoding has accordingly been set to UTF8.
    The default text search configuration will be set to "english".
    
    creating directory /u02/pgdata/testmig ... ok
    creating directory /u03/pgdata/testmig ... ok
    creating subdirectories ... ok
    selecting default max_connections ... 100
    selecting default shared_buffers ... 24MB
    creating configuration files ... ok
    creating template1 database in /u02/pgdata/testmig/base/1 ... ok
    initializing pg_authid ... ok
    initializing dependencies ... ok
    creating system views ... ok
    loading system objects' descriptions ... ok
    creating collations ... ok
    creating conversions ... ok
    creating dictionaries ... ok
    setting privileges on built-in objects ... ok
    creating information schema ... ok
    loading PL/pgSQL server-side language ... ok
    vacuuming database template1 ... ok
    copying template1 to template0 ... ok
    copying template1 to postgres ... ok
    
    WARNING: enabling "trust" authentication for local connections
    You can change this by editing pg_hba.conf or using the -A option the
    next time you run initdb.
    
    Success. You can now start the database server using:
    
        /u01/app/postgres/product/91/db_8/bin/postgres -D /u02/pgdata/testmig
    or
        /u01/app/postgres/product/91/db_8/bin/pg_ctl -D /u02/pgdata/testmig -l logfile start
    

    (I am assuming that passwordless ssh authentication is already set up between the nodes for the following; a minimal sketch of doing that is shown after the next code block.) Set up authentication:

    postgres@debian6pg:/u03$ echo "host    replication     postgres       192.168.22.32/32        trust" >> /u02/pgdata/testmig/pg_hba.conf
    postgres@debian6pg:/u03$ echo "host    replication     postgres       192.168.22.33/32        trust" >> /u02/pgdata/testmig/pg_hba.conf
    

    Adjust the parameters:

    postgres@debian6pg:/u03$ sed -i 's/#wal_level = minimal/wal_level = hot_standby/g' /u02/pgdata/testmig/postgresql.conf
    postgres@debian6pg:/u03$ sed -i 's/#max_wal_senders = 0/max_wal_senders = 10/g' /u02/pgdata/testmig/postgresql.conf
    postgres@debian6pg:/u03$ sed -i 's/#wal_keep_segments = 0/wal_keep_segments = 100/g' /u02/pgdata/testmig/postgresql.conf
    postgres@debian6pg:/u03$ sed -i "s/#listen_addresses = 'localhost'/listen_addresses = '*'/g" /u02/pgdata/testmig/postgresql.conf
    postgres@debian6pg:/u03$ sed -i 's/#logging_collector = off/logging_collector = on/g' /u02/pgdata/testmig/postgresql.conf
    postgres@debian6pg:/u03$ mkdir /u02/pgdata/testmig/pg_log    
    

    Start and stop the instance:

    postgres@debian6pg:/u03$ export PATH=/u01/app/postgres/product/91/db_8/bin/:$PATH
    postgres@debian6pg:/u03$ pg_ctl -D /u02/pgdata/testmig/ start -l /u02/pgdata/testmig/pg_log/log.log
    postgres@debian6pg:/u03$ pg_ctl -D /u02/pgdata/testmig/ stop
    

    Ready to setup the standby:

    postgres@debian6pg:/u03$ cd /u02    
    postgres@debian6pg:/u02$ rsync -r pgdata/ 192.168.22.33:/u02/pgdata
    postgres@debian6pg:~$ cd /u03
    postgres@debian6pg:/u03$ rsync -r pgdata/ 192.168.22.33:/u03/pgdata
    postgres@debian6pg:/u03$ ssh 192.168.22.33 "ln -s /u03/pgdata/testmig/ /u02/pgdata/testmig/pg_xlog"
    postgres@debian6pg:/u03$ scp /u02/pgdata/testmig/pg_hba.conf 192.168.22.33:/u02/pgdata/testmig/pg_hba.conf
    postgres@debian6pg:/u03$ ssh 192.168.22.33 "echo \"standby_mode = on\" >> /u02/pgdata/testmig/recovery.conf"
    postgres@debian6pg:/u03$ ssh 192.168.22.33 "echo \"primary_conninfo = 'host=192.168.22.32 port=5432 user=postgres'\" >> /u02/pgdata/testmig/recovery.conf"
    postgres@debian6pg:/u03$ ssh 192.168.22.33 "echo \"trigger_file = '/u02/pgdata/testmig/up_slave'\" >> /u02/pgdata/testmig/recovery.conf"
    postgres@debian6pg:/u03$ ssh 192.168.22.33 "sed -i 's/#hot_standby = off/hot_standby = on/g' /u02/pgdata/testmig/postgresql.conf"
    postgres@debian6pg:/u03$ ssh 192.168.22.33 "sed -i 's/#logging_collector = off/#logging_collector = on/g' /u02/pgdata/testmig/postgresql.conf"
    postgres@debian6pg:/u03$ ssh 192.168.22.33 "mkdir -p /u02/pgdata/testmig/pg_log"
    

    Start the master:

    postgres@debian6pg:/u03$ pg_ctl -D /u02/pgdata/testmig/ start -l /u02/pgdata/testmig/pg_log/log.log
    

    Start the standby:

    # standby side
    postgres@debian6pg:/u03$ export PATH=/u01/app/postgres/product/91/db_8/bin/:$PATH
    postgres@debian6pg:/u03$ pg_ctl -D /u02/pgdata/testmig/ start -l /u02/pgdata/testmig/pg_log/log.log
    

    … and if everything went fine we should see this in the log of the standby instance:

    LOG: database system was shut down at 2017-01-18 07:28:02 CET
    LOG: entering standby mode
    LOG: consistent recovery state reached at 0/16BCBB0
    LOG: database system is ready to accept read only connections
    LOG: record with zero length at 0/16BCBB0
    LOG: streaming replication successfully connected to primary

    A quick check on the standby to confirm that it is operating in recovery mode:

    postgres@debian6pg:~$ psql
    psql (9.1.8)
    Type "help" for help.
    
    postgres=# select pg_is_in_recovery();
     pg_is_in_recovery 
    -------------------
     t
    (1 row)
    

    In the case we had at the customer, the adminpack extension was installed in the postgres database and the pg_trgm and pg_buffercache extensions in the application database, so let's do the same here on the master (this will get replicated to the standby automatically):

    postgres@debian6pg:/u03$ psql
    psql (9.1.8)
    Type "help" for help.
    
    postgres=# create extension adminpack;
    CREATE EXTENSION
    postgres=# create database testmig;
    CREATE DATABASE
    postgres=# \c testmig
    You are now connected to database "testmig" as user "postgres".
    testmig=# create extension pg_trgm;
    CREATE EXTENSION
    testmig=# create extension pg_buffercache;
    CREATE EXTENSION
    testmig=# 
    

    Quickly confirm that it is there on the standby:

    postgres=# \dx
                            List of installed extensions
       Name    | Version |   Schema   |               Description               
    -----------+---------+------------+-----------------------------------------
     adminpack | 1.0     | pg_catalog | administrative functions for PostgreSQL
     plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
    (2 rows)
    
    postgres=# \c testmig
    You are now connected to database "testmig" as user "postgres".
    testmig=# \dx
                                           List of installed extensions
          Name      | Version |   Schema   |                            Description                            
    ----------------+---------+------------+-------------------------------------------------------------------
     pg_buffercache | 1.0     | public     | examine the shared buffer cache
     pg_trgm        | 1.0     | public     | text similarity measurement and index searching based on trigrams
     plpgsql        | 1.0     | pg_catalog | PL/pgSQL procedural language
    (3 rows)
    

    Finally, some sample data generated with pgbench:

    postgres@debian6pg:/u03$ pgbench -i testmig -s 10
    

    Should be there on the standby as well:

    testmig=# select count(*) from pgbench_accounts;
     count  
    --------
    1000000
    (1 row)
    testmig=# 
    

    This is, more or less, the situation to start from. How can we upgrade this to PostgreSQL 9.5.5 with minimal downtime and without using logical replication? Obviously we’ll need to get PostgreSQL 9.5.5 installed on both systems before we can do anything further, so:

    postgres@debian6pg:~$ wget https://ftp.postgresql.org/pub/source/v9.5.5/postgresql-9.5.5.tar.bz2
    postgres@debian6pg:~$ tar -axf postgresql-9.5.5.tar.bz2 
    postgres@debian6pg:~$ cd postgresql-9.5.5/
    postgres@debian6pg:~/postgresql-9.5.5$ PGHOME=/u01/app/postgres/product/95/db_5
    postgres@debian6pg:~/postgresql-9.5.5$ SEGSIZE=2
    postgres@debian6pg:~/postgresql-9.5.5$ BLOCKSIZE=8
    postgres@debian6pg:~/postgresql-9.5.5$ WALSEGSIZE=16
    postgres@debian6pg:~/postgresql-9.5.5$ ./configure --prefix=${PGHOME} \
                                                       --exec-prefix=${PGHOME} \
                                                       --bindir=${PGHOME}/bin \
                                                       --libdir=${PGHOME}/lib \
                                                       --sysconfdir=${PGHOME}/etc \
                                                       --includedir=${PGHOME}/include \
                                                       --datarootdir=${PGHOME}/share \
                                                       --datadir=${PGHOME}/share \
                                                       --with-pgport=5432 \
                                                       --with-perl \
                                                       --with-python \
                                                       --with-tcl \
                                                       --with-openssl \
                                                       --with-pam \
                                                       --with-ldap \
                                                       --with-libxml \
                                                       --with-libxslt \
                                                       --with-segsize=${SEGSIZE} \
                                                       --with-blocksize=${BLOCKSIZE} \
                                                       --with-wal-segsize=${WALSEGSIZE}
    postgres@debian6pg:~/postgresql-9.5.5$ make world
    postgres@debian6pg:~/postgresql-9.5.5$ make install
    postgres@debian6pg:~/postgresql-9.5.5$ cd contrib
    postgres@debian6pg:~/postgresql-9.5.5/contrib$ make install
    postgres@debian6pg:~/postgresql-9.5.5/contrib$ cd ../..
    postgres@debian6pg:~$ rm -rf postgresql-9.5.5*
    

    Then we need a new cluster initialized with the new version of PostgreSQL on the master:

    postgres@debian6pg:~$ /u01/app/postgres/product/95/db_5/bin/initdb -D /u02/pgdata/testmig95/ -X /u03/pgdata/testmig95/ 
    The files belonging to this database system will be owned by user "postgres".
    This user must also own the server process.
    
    The database cluster will be initialized with locale "en_US.UTF-8".
    The default database encoding has accordingly been set to "UTF8".
    The default text search configuration will be set to "english".
    
    Data page checksums are disabled.
    
    creating directory /u02/pgdata/testmig95 ... ok
    creating directory /u03/pgdata/testmig95 ... ok
    creating subdirectories ... ok
    selecting default max_connections ... 100
    selecting default shared_buffers ... 128MB
    selecting dynamic shared memory implementation ... posix
    creating configuration files ... ok
    creating template1 database in /u02/pgdata/testmig95/base/1 ... ok
    initializing pg_authid ... ok
    initializing dependencies ... ok
    creating system views ... ok
    loading system objects' descriptions ... ok
    creating collations ... ok
    creating conversions ... ok
    creating dictionaries ... ok
    setting privileges on built-in objects ... ok
    creating information schema ... ok
    loading PL/pgSQL server-side language ... ok
    vacuuming database template1 ... ok
    copying template1 to template0 ... ok
    copying template1 to postgres ... ok
    syncing data to disk ... ok
    
    WARNING: enabling "trust" authentication for local connections
    You can change this by editing pg_hba.conf or using the option -A, or
    --auth-local and --auth-host, the next time you run initdb.
    
    Success. You can now start the database server using:
    
        /u01/app/postgres/product/95/db_5/bin/pg_ctl -D /u02/pgdata/testmig95/ -l logfile start
    

    Shutdown the master and record the latest checkpoint location (This is where your downtime starts):

    postgres@debian6pg:/u02$ pg_ctl -D /u02/pgdata/testmig stop -m fast
    LOG:  received fast shutdown request
    LOG:  aborting any active transactions
    LOG:  autovacuum launcher shutting down
    LOG:  shutting down
    waiting for server to shut down....LOG:  database system is shut down
     done
    server stopped
    
    postgres@debian6pg:/u02/pgdata/testmig$ pg_controldata  | grep "Latest checkpoint location"
    Latest checkpoint location:           0/C619840
    

    Shutdown the slave and compare the last checkpoint:

    # slave side
    postgres@debian6pg:/u02/pgdata/testmig$ pg_ctl -D /u02/pgdata/testmig/ stop -m fast
    waiting for server to shut down.... done
    server stopped
    
    postgres@debian6pg:/u02/pgdata/testmig$ pg_controldata  | grep "Latest checkpoint location"
    Latest checkpoint location:           0/C619840
    

    As both checkpoint locations match, we can be sure that the standby applied all changes and that there is no difference in the data.
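
    If you prefer, the comparison can also be scripted from the master in a single step; a minimal sketch using the binaries and paths of this setup (no output means the two locations are identical):

    postgres@debian6pg:~$ diff <(/u01/app/postgres/product/91/db_8/bin/pg_controldata /u02/pgdata/testmig | grep "Latest checkpoint location") \
                               <(ssh 192.168.22.33 '/u01/app/postgres/product/91/db_8/bin/pg_controldata /u02/pgdata/testmig | grep "Latest checkpoint location"')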

    Save your configuration files:

    postgres@debian6pg:/u02$ cp /u02/pgdata/testmig/postgresql.conf /var/tmp 
    postgres@debian6pg:/u02$ cp /u02/pgdata/testmig/pg_hba.conf /var/tmp
    

    Run pg_upgrade on the master with link (-k) mode (if you have many cores on your box you can use the “-j” option to parallelize pg_upgrade):

    postgres@debian6pg:/u02$ export PGDATAOLD=/u02/pgdata/testmig/
    postgres@debian6pg:/u02$ export PGDATANEW=/u02/pgdata/testmig95/
    postgres@debian6pg:/u02$ export PGBINOLD=/u01/app/postgres/product/91/db_8/bin/
    postgres@debian6pg:/u02$ export PGBINNEW=/u01/app/postgres/product/95/db_5/bin/
    
    postgres@debian6pg:/u02$ /u01/app/postgres/product/95/db_5/bin/pg_upgrade -k 
    

    (Usually you’d do a “-c” check run before doing the real upgrade; a minimal sketch of that is shown below.) When using link mode the files get hard-linked instead of copied, which is much faster and saves disk space. The downside is that you can not revert to the old cluster in case anything goes wrong.
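
    The check run reuses the PGDATAOLD/PGDATANEW/PGBINOLD/PGBINNEW variables exported above and does not modify either cluster:

    postgres@debian6pg:/u02$ /u01/app/postgres/product/95/db_5/bin/pg_upgrade -c

    When the real upgrade (with “-k”) goes fine, it looks like this: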

    
    Performing Consistency Checks
    -----------------------------
    Checking cluster versions                                   ok
    Checking database user is the install user                  ok
    Checking database connection settings                       ok
    Checking for prepared transactions                          ok
    Checking for reg* system OID user data types                ok
    Checking for contrib/isn with bigint-passing mismatch       ok
    Checking for invalid "line" user columns                    ok
    Creating dump of global objects                             ok
    Creating dump of database schemas
                                                                ok
    Checking for presence of required libraries                 ok
    Checking database user is the install user                  ok
    Checking for prepared transactions                          ok
    
    If pg_upgrade fails after this point, you must re-initdb the
    new cluster before continuing.
    
    Performing Upgrade
    ------------------
    Analyzing all rows in the new cluster                       ok
    Freezing all rows on the new cluster                        ok
    Deleting files from new pg_clog                             ok
    Copying old pg_clog to new server                           ok
    Setting next transaction ID and epoch for new cluster       ok
    Deleting files from new pg_multixact/offsets                ok
    Setting oldest multixact ID on new cluster                  ok
    Resetting WAL archives                                      ok
    Setting frozenxid and minmxid counters in new cluster       ok
    Restoring global objects in the new cluster                 ok
    Restoring database schemas in the new cluster
                                                                ok
    Setting minmxid counter in new cluster                      ok
    Adding ".old" suffix to old global/pg_control               ok
    
    If you want to start the old cluster, you will need to remove
    the ".old" suffix from /u02/pgdata/testmig/global/pg_control.old.
    Because "link" mode was used, the old cluster cannot be safely
    started once the new cluster has been started.
    
    Linking user relation files
                                                                ok
    Setting next OID for new cluster                            ok
    Sync data directory to disk                                 ok
    Creating script to analyze new cluster                      ok
    Creating script to delete old cluster                       ok
    
    Upgrade Complete
    ----------------
    Optimizer statistics are not transferred by pg_upgrade so,
    once you start the new server, consider running:
        ./analyze_new_cluster.sh
    
    Running this script will delete the old cluster's data files:
        ./delete_old_cluster.sh
    

    Restore the configuration files:

    postgres@debian6pg:/u02$ mkdir -p /u02/pgdata/testmig95/pg_log
    postgres@debian6pg:/u02$ cp /var/tmp/postgresql.conf /u02/pgdata/testmig95/postgresql.conf  
    postgres@debian6pg:/u02$ cp /var/tmp/pg_hba.conf /u02/pgdata/testmig95/pg_hba.conf 
    

    Start and stop the upgraded instance and check that everything is fine in the log file:

    postgres@debian6pg:/u02$ /u01/app/postgres/product/95/db_5/bin/pg_ctl -D /u02/pgdata/testmig95/ -l /u02/pgdata/testmig95/pg_log/log.log start    
    postgres@debian6pg:/u02$ /u01/app/postgres/product/95/db_5/bin/pg_ctl -D /u02/pgdata/testmig95/ stop   
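
    To quickly scan the logfile of the new instance for problems, something like this can be used (a sketch; adjust the path if you log somewhere else):

    grep -iE "ERROR|FATAL|PANIC" /u02/pgdata/testmig95/pg_log/log.log || echo "no errors logged"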
    

    At this point you could keep the upgraded cluster running and your downtime would be over, rebuilding the standby at a later time. If you want to rebuild the standby now, first save the configuration files on the standby:

    # standby side
    postgres@debian6pg:/u03$ cp /u02/pgdata/testmig/postgresql.conf /var/tmp
    postgres@debian6pg:/u03$ cp /u02/pgdata/testmig/pg_hba.conf /var/tmp
    postgres@debian6pg:/u03$ cp /u02/pgdata/testmig/recovery.conf /var/tmp
    

    Sync the directories from the master to the standby (this is very fast because the data files are re-created as hard links on the standby instead of being copied; the sketch after the rsync commands shows one way to verify this):

    postgres@debian6pg:/u03$ cd /u02/pgdata   
    postgres@debian6pg:/u02$ rsync --archive --delete --hard-links --size-only testmig testmig95 192.168.22.33:/u02/pgdata
    postgres@debian6pg:/u02$ cd /u03
    postgres@debian6pg:/u03$ rsync -r pgdata/testmig95 192.168.22.33:/u03/pgdata/testmig95
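
    One way to convince yourself that the data files were hard-linked rather than copied is to look for files with a link count greater than one in the new data directory on the standby (a sketch, run on the standby host):

    # files that share their inode with the corresponding file of the old cluster
    find /u02/pgdata/testmig95/base -type f -links +1 | head -5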
    

    Restore the configuration files on the standby:

    postgres@debian6pg:/u03$ cp /var/tmp/postgresql.conf /u02/pgdata/testmig95/postgresql.conf 
    postgres@debian6pg:/u03$ cp /var/tmp/pg_hba.conf /u02/pgdata/testmig95/pg_hba.conf
    postgres@debian6pg:/u03$ cp /var/tmp/recovery.conf /u02/pgdata/testmig95/recovery.conf
    

    Start the master:

    postgres@debian6pg:/u03$ export PATH=/u01/app/postgres/product/95/db_5/bin:$PATH
    postgres@debian6pg:/u03$ pg_ctl -D /u02/pgdata/testmig95/ start -l /u02/pgdata/testmig95/pg_log/log.log
    

    Start the standby:

    postgres@debian6pg:/u03$ export PATH=/u01/app/postgres/product/95/db_5/bin:$PATH
    postgres@debian6pg:/u03$ pg_ctl -D /u02/pgdata/testmig95/ start -l /u02/pgdata/testmig95/pg_log/log.log
    

    Check the standby’s logfile:

    
    LOG:  database system was shut down at 2017-01-19 07:51:24 GMT
    LOG:  creating missing WAL directory "pg_xlog/archive_status"
    LOG:  entering standby mode
    LOG:  started streaming WAL from primary at 0/E000000 on timeline 1
    LOG:  consistent recovery state reached at 0/E024D38
    LOG:  redo starts at 0/E024D38
    LOG:  database system is ready to accept read only connections
    

    Do some checks to see that everything is there on the standby:

    postgres@debian6pg:~$ psql
    psql (9.5.5)
    Type "help" for help.
    
    postgres=# select pg_is_in_recovery();
     pg_is_in_recovery 
    -------------------
     t
    (1 row)
    
    postgres=# \dx
                            List of installed extensions
       Name    | Version |   Schema   |               Description               
    -----------+---------+------------+-----------------------------------------
     adminpack | 1.0     | pg_catalog | administrative functions for PostgreSQL
     plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
    (2 rows)
    
    postgres=# \c testmig
    You are now connected to database "testmig" as user "postgres".
    testmig=# \dx
                                           List of installed extensions
          Name      | Version |   Schema   |                            Description                            
    ----------------+---------+------------+-------------------------------------------------------------------
     pg_buffercache | 1.0     | public     | examine the shared buffer cache
     pg_trgm        | 1.0     | public     | text similarity measurement and index searching based on trigrams
     plpgsql        | 1.0     | pg_catalog | PL/pgSQL procedural language
    (3 rows)
    
    testmig=# \d
                  List of relations
     Schema |       Name       | Type  |  Owner   
    --------+------------------+-------+----------
     public | pg_buffercache   | view  | postgres
     public | pgbench_accounts | table | postgres
     public | pgbench_branches | table | postgres
     public | pgbench_history  | table | postgres
     public | pgbench_tellers  | table | postgres
    (5 rows)
    
    testmig=# select count(*) from pgbench_accounts;
      count  
    ---------
     1000000
    (1 row)
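
    Optionally, you can also verify on the master that the standby is streaming again (a sketch; the column names are those of 9.5):

    psql -c "select client_addr, state, sent_location, replay_location from pg_stat_replication;"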
    

    Run the analyze_new_cluster.sh on the master:

    postgres@debian6pg:~$ ./analyze_new_cluster.sh
    This script will generate minimal optimizer statistics rapidly
    so your system is usable, and then gather statistics twice more
    with increasing accuracy.  When it is done, your system will
    have the default level of optimizer statistics.
    
    If you have used ALTER TABLE to modify the statistics target for
    any tables, you might want to remove them and restore them after
    running this script because they will delay fast statistics generation.
    
    If you would like default statistics as quickly as possible, cancel
    this script and run:
        "/u01/app/postgres/product/95/db_5/bin/vacuumdb" --all --analyze-only
    
    vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
    vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
    vacuumdb: processing database "testmig": Generating minimal optimizer statistics (1 target)
    vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
    vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
    vacuumdb: processing database "testmig": Generating medium optimizer statistics (10 targets)
    vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
    vacuumdb: processing database "template1": Generating default (full) optimizer statistics
    vacuumdb: processing database "testmig": Generating default (full) optimizer statistics
    

    Now you can delete the old cluster on the master with the delete_old_cluster.sh script that pg_upgrade generated. As noted in the pg_upgrade output above, it deletes the old cluster's data files; if you want to be sure what it does, you can inspect it first, as sketched below:
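
    # optional: have a look at what the generated script will remove before running it
    cat ./delete_old_cluster.sh

    Then run it: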

    postgres@debian6pg:~$ ./delete_old_cluster.sh
    

    Then either copy the script to the standby or remove the old cluster's directories on the standby manually:

    postgres@debian6pg:~$ rm -rf /u02/pgdata/testmig
    postgres@debian6pg:~$ rm -rf /u03/pgdata/testmig
    

    Done. Hope this helps …

     

    The post From PostgreSQL 9.1.8 to PostgreSQL 9.5.5 including a standby database with minimal downtime appeared first on Blog dbi services.

    Issue with OR expanding and subquery

    Tom Kyte - Wed, 2017-01-18 18:06
    <code>Hi Tom I have an issue with OR expansion. Using literals, the OR conditions are expanded neatly. However, I am unable to make the optimizer come up with a plan that uses the index, when I use a subquery. drop table t1; drop table t2; ...
    Categories: DBA Blogs

    Degree of Parallelism (DOP) of Index

    Tom Kyte - Wed, 2017-01-18 18:06
    Hi Team, As part of a daily housekeeping practice I executed the below commands for an index rebuilding activity : ALTER INDEX DLM_RAMBO_ILQ_PK REBUILD ONLINE PARALLEL 12 NOLOGGING; ALTER INDEX RAMBO_ILQ_IDX2 REBUILD ONLINE PARALLEL 12 N...
    Categories: DBA Blogs

    Rename index with DBMS_METADATA

    Tom Kyte - Wed, 2017-01-18 18:06
    Hi, I am trying to change the name of an index using DBMS_METADATA ,but it seemingly does not works on the index name. Please let me know if I am doing something wrong. Best Regards, Manas drop table a; create table a (a clob); drop...
    Categories: DBA Blogs

    Refreshing SQL Profiles

    Tom Kyte - Wed, 2017-01-18 18:06
    Recently I had a performance issue where the solution was to drop the exising SQL profile. SQL profiles contain "auxiliary statistics specific to a SQL statement". These can of course get stale if the data in the tables referenced by the statement ...
    Categories: DBA Blogs

    Security lockdown - revoke all privileges from PUBLIC

    Tom Kyte - Wed, 2017-01-18 18:06
    Hi guys, I'm working on a system which requires good security, and therefore must operate on a minimum privilege basis. Naturally I've read the CIS recommendations and applied them, as well as DISA STIG, and disabled all default accounts etc. ...
    Categories: DBA Blogs

    spool output columns in separate csv column - call from SQL developer

    Tom Kyte - Wed, 2017-01-18 18:06
    additional question to the question from yesterday: http://asktom.oracle.com/pls/apex/f?p=100:12:0::NO::P12_ORIG,P12_PREV_PAGE,P12_QUESTION_ID:Y,1,9532893700346545049 I'm using the SQL developer for starting scripts. additional question: ...
    Categories: DBA Blogs

    Compound Principle Amount and Profit Calculation

    Tom Kyte - Wed, 2017-01-18 18:06
    Hi Tom, Hope you are at the best of you health. I have following table structure with sample data. CREATE TABLE investments ( investment_date DATE, registration_no VARCHAR2(100), amount NUMBER, profit_rate NUMBER...
    Categories: DBA Blogs

    Refreshing nested materialized views

    Tom Kyte - Wed, 2017-01-18 18:06
    Hi, When I have 2 materialized views (MV_1A and MV_1B) that are based on the same materialized view (MV_1) then using dbms_mview.refresh with the nested=>true option on 1 of those materialized views invalidates the other materialized view. As I un...
    Categories: DBA Blogs

    Partner Webcast – Developing Hybrid Mobile Applications with Oracle JET

    Oracle JavaScript Extension Toolkit (Oracle JET) is a modular open source toolkit based on modern JavaScript, CSS3 and HTML5 design and development principles. It is a collection of open...

    We share our skills to maximize your revenue!
    Categories: DBA Blogs
