Feed aggregator

Importance of BI tools for Pharmaceutical Industry

Nilesh Jethwa - Fri, 2017-06-16 12:40

The pharmaceutical industry has been focusing much of its energy on producing the next blockbuster drugs.

However, the industry also faces challenges: changing regulations, a rising number of drug-approval rejections, and the continuing need to develop high-value biologics. Refocusing efforts is imperative.

There are existing processes, systems and practices that need to be addressed immediately. Key performance indicators (KPIs) surfaced through BI tools help corporations identify areas that need continuous development and highlight successes already achieved. This helps pharmaceutical companies better understand their operations.

What is a KPI?

Simply put, a KPI is a measurement of something vital to a business’s operations. In the pharmaceutical industry, the overall cost trend for a given drug is an example of a KPI.

What KPIs to Include

Before using dashboard software to view KPIs, it is important to identify useful KPIs first. Some KPIs are not useful to a given organization, depending on its operations and needs. It is therefore important to hold thorough discussions before putting a KPI dashboard into operation. Discussions should cover:

  • Assessing the organization’s strategies.
  • Determining the business drivers that have an impact on the execution of those strategies.
  • Identification of long-term and short-term goals.

Determining useful KPIs is one of the most important tasks of managers. KPIs based on the organization’s objectives are essential components of an effective and helpful dashboard. What you will see on the dashboard will largely depend on how you specify your KPIs.

Read more at http://www.infocaptor.com/dashboard/dashboard-bi-tools-for-pharmaceutical-industry

Generally available Kudu

DBMS2 - Fri, 2017-06-16 10:52

I talked with Cloudera about Kudu in early May. Besides giving me a lot of information about Kudu, Cloudera also helped confirm some trends I’m seeing elsewhere, including:

  • Security is an ever bigger deal.
  • There’s a lot of interest in data warehouses (perhaps really data marts) that are updated in human real-time.
    • Prospects for that respond well to the actual term “data warehouse”, at least when preceded by some modifier to suggest that it’s modern/low-latency/non-batch or whatever.
    • Flash is often — but not yet always — preferred over disk for that kind of use.
    • Sometimes these data stores are greenfield. When they’re migrations, they come more commonly from analytic RDBMS or data warehouse appliance (the most commonly mentioned ones are Teradata, Netezza and Vertica, but that’s perhaps just due to those product lines’ market share), rather than from general purpose DBMS such as Oracle or SQL Server.
  • Intel is making it ever easier to vectorize CPU operations, and analytic data managers are increasingly taking advantage of this possibility.

Now let’s talk about Kudu itself. As I discussed at length in September 2015, Kudu is:

  • A data storage system introduced by Cloudera (and subsequently open-sourced).
  • Columnar.
  • Updatable in human real-time.
  • Meant to serve as the data storage tier for Impala and Spark.

Kudu’s adoption and roll-out story starts:

  • Kudu went to general availability on January 31. I gather this spawned an uptick in trial activity.
  • A subsequent release with some basic security features spawned another uptick.
  • I don’t think Cloudera will mind my saying that there are many hundreds of active Kudu clusters.
  • But Cloudera believes that, this soon after GA, very few Kudu users are in actual production.

Early Kudu interest is focused on 2-3 kinds of use case. The biggest is the kind of “data warehousing” highlighted above. Cloudera characterizes the others by the kinds of data stored, specifically the overlapping categories of time series — including financial trading — and machine-generated data. A lot of early Kudu use is with Spark, even ahead of (or in conjunction with) Impala. A small amount has no relational front-end at all.

Other notes on Kudu include:

  • Solid-state storage is recommended, with a few terabytes per node.
  • You can also use spinning disk. If you do, your write-ahead logs can still go to flash.
  • Cloudera said Kudu compression ratios can be as low as 2-5X, or as high as 10-20X. With that broad a range, I didn’t drill down into specifics of what they meant.
  • There seem to be a number of Kudu clusters with 50+ nodes each. By way of contrast, a “typical” Cloudera customer has 100s of nodes overall.
  • As you might imagine from their newness, Kudu security features — Kerberos-based — are at the database level rather than anything more granular.

And finally, the Cloudera folks woke me up to some issues around streaming data ingest. If you stream data in, there will be retries resulting in duplicate delivery. So your system needs to deal with those one way or another. Kudu’s way is:

  • Primary keys will be unique. (Note: This is not obvious in a system that isn’t an entire RDBMS in itself.)
  • You can configure the uniqueness to be guaranteed either through an upsert mechanism or simply by rejecting duplicates (a sketch follows this list).
  • Alternatively, you can write code to handle duplication errors, e.g. via Spark.
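
As a minimal sketch of the upsert route (hedged: the table and column names are hypothetical, not from Cloudera, and the DDL follows the Impala/Kudu syntax of the GA timeframe; a redelivered row with the same primary key simply overwrites itself rather than creating a duplicate):

$ impala-shell -q "CREATE TABLE events (event_id BIGINT, ts BIGINT, payload STRING, PRIMARY KEY (event_id)) PARTITION BY HASH (event_id) PARTITIONS 4 STORED AS KUDU"
$ impala-shell -q "UPSERT INTO events VALUES (42, 1497621600, 'first delivery')"
$ impala-shell -q "UPSERT INTO events VALUES (42, 1497621600, 'retried delivery, same key: the row is overwritten, not duplicated')"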
Categories: Other

DBSAT, a tool for securing your Oracle databases

Yann Neuhaus - Fri, 2017-06-16 10:22
What is DBSAT?

It is a free Oracle tool that you can download from My Oracle Support under Doc ID 2138254.1.
Its purpose is to assess the security of your Oracle databases by analyzing their configuration and the security policies in place, in order to uncover security-related risks.

How does it work?

First, you will need to collect the information from your database; then you generate a report.
The tool provides three types of report:

  • Text report
  • Spreadsheet report
  • HTML report

In a few words:

  • Quick and easy to install and use
  • Detailed reports
  • Detects real security issues
  • No additional cost if you have an Oracle support contract
  • Findings are highlighted with different colors (blue, green, yellow, red)
The steps required to put it to work
  • Install the tool (DBSAT)
  • Collect the information (DBSAT Collector)
  • Report on the risk posture (DBSAT Reports)
Installing the tool

The tool is written in Python and requires version 2.6 or higher (check your version with: python -V).

The installation directory can be wherever you like, since installing is nothing more than unzipping a file, but we advise extracting it into the oracle user’s home directory (/Home/Oracle/DBSAT).

The DBSAT Collector must be run as an OS user with read permissions on the files and directories under ORACLE_HOME in order to collect and process the data.

If you use a Database Vault environment, you will need a non-SYS user with the DV_SECANALYST role.

Privileges required for the database user running the collection:

    • CREATE SESSION
    • SELECT on SYS.REGISTRY$HISTORY
    • Role SELECT_CATALOG_ROLE
    • Role DV_SECANALYST (if Database Vault is enabled)
    • Role AUDIT_VIEWER (12c only)
    • Role CAPTURE_ADMIN (12c only)
    • SELECT on SYS.DBA_USERS_WITH_DEFPWD (11g and 12c)
    • SELECT on AUDSYS.AUD$UNIFIED (12c only)

You will find more information in the documentation at the following address:
https://docs.oracle.com/cd/E76178_01/SATUG/toc.htm

Collecting the information

Collecting the information is mandatory; it is needed to generate the reports (text, HTML or spreadsheet).
Very simple to use and secure: dbsat collect /file_name
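
As a minimal sketch of a full run (hedged: the connect string and file names are hypothetical; collect prompts for the database password and for a passphrase used to encrypt the resulting archive):

$ cd /Home/Oracle/DBSAT
$ ./dbsat collect system@orcl /tmp/orcl_sat
$ ./dbsat report /tmp/orcl_sat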

Reporting on the risk posture

The report can be generated in different ways, excluding one or more sections with the -x option.
dbsat report [-a] [-n] [-x section] pathname


The available sections

USER   : User accounts
PRIV   : Privileges and roles
AUTH   : Authorization controls
CRYPT  : Data encryption
ACCESS : Access control
AUDIT  : Auditing
CONF   : Database configuration
NET    : Network configuration
OS     : Operating system
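
For example, a hedged run that skips the operating system checks (file name as in the sketch above):

$ ./dbsat report -x OS /tmp/orcl_sat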

Once unzipped, we have our three report files: text, spreadsheet and HTML.

Viewing the report

Report preview
If you use PDBs, you will need to collect the information from each PDB individually.


The different report statuses

You can use these statuses as a guide for implementing the recommendations, and to prioritize and schedule changes according to the risk level. A severe risk might require immediate corrective action, while other risks could be addressed during planned downtime or combined with other maintenance activities.

Pass     : No errors found
Evaluate : Requires manual analysis
Risk     : Low
Risk     : Medium / Significant
Risk     : High

Conclusion

Test it without moderation; it will give you a global view of the security posture of your databases. Once the issues are identified, all that remains is to fix them.

 

The post DBSAT, a tool for securing your Oracle databases appeared first on Blog dbi services.

OUD 11.1.2.3 – How to create an OUD Start/Stop/Status script on Oracle Linux 6

Yann Neuhaus - Fri, 2017-06-16 08:39

One of the questions that pops up immediately after you have installed your OUD successfully is how to integrate it into the automatic startup routines of the OS.

My example here shows how to do it on Oracle Linux 6. On Oracle Linux 7 it looks a little different. Fortunately, Oracle delivers a script called “create-rc-script”, which can be found in your asinst home directory. It lets you specify the user name under which the OUD will run, the Java home and a few more options. The whole documentation can be found under the following link.

https://docs.oracle.com/cd/E52734_01/oud/OUDAG/appendix_cli.htm#OUDAG01144

Running “--help” gives you all the options.

$ cat /etc/oracle-release
Oracle Linux Server release 6.9

$ ./create-rc-script -V
Oracle Unified Directory 11.1.2.3.170418
Build 20170206221556Z

$ ./create-rc-script --help
Create an RC script that may be used to start, stop, and restart the Directory
Server on UNIX-based systems

Usage:  create-rc-script  {options}
        where {options} include:

-f, --outputFile {path}
    The path to the output file to create
-u, --userName {userName}
    The name of the user account under which the server should run
-j, --javaHome {path}
    The path to the Java installation that should be used to run the server
-J, --javaArgs {args}
    A set of arguments that should be passed to the JVM when running the server

General Options

-V, --version
    Display Directory Server version information
-?, -H, --help
    Display this usage information

Take care to start the “create-rc-script” script from your asinst_1 home, and not from the Oracle_OUD1 home. The reason is that “create-rc-script” sets the working directory to your current directory (“WORKING_DIR=`pwd`”), and if it is not started from the correct directory, you might end up with a non-working start/stop script.

So, to do it correctly, switch to your OUD asinst home first and run it from there. I am using only a few options here: the Java home, the user name under which the OUD will run, and the output file.

$ cd /u01/app/oracle/product/middleware/asinst_1/OUD/bin

$ pwd
/u01/app/oracle/product/middleware/asinst_1/OUD/bin

$ ./create-rc-script --userName oracle --javaHome /u01/app/oracle/product/middleware/jdk --outputFile /home/oracle/bin/oud

The output generated by the script will be the start/stop script.

$ pwd
/home/oracle/bin

$ ls -l
total 4
-rwxr-xr-x. 1 oracle oinstall 862 Jun 16 13:35 oud

$ cat oud
#!/bin/sh
#
# Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
#
#
# chkconfig: 345 90 30
# description: Oracle Unified Directory startup script
#

# Set the path to the Oracle Unified Directory instance to manage
INSTALL_ROOT="/u01/app/oracle/product/middleware/asinst_1/OUD"
export INSTALL_ROOT

# Specify the path to the Java installation to use
OPENDS_JAVA_HOME="/u01/app/oracle/product/middleware/jdk"
export OPENDS_JAVA_HOME

# Determine what action should be performed on the server
case "${1}" in
start)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/start-ds" --quiet
  exit ${?}
  ;;
stop)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/stop-ds" --quiet
  exit ${?}
  ;;
restart)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/stop-ds" --restart --quiet
  exit ${?}
  ;;
*)
  echo "Usage:  $0 { start | stop | restart }"
  exit 1
  ;;
esac

The generated start/stop script looks quite complete. The only thing missing is a “status” section, which is quite useful from my point of view. To add one, we can use the “status” utility, which is also part of the OUD installation.

$ ./status --help
This utility can be used to display basic server information

Usage:  status {options}
        where {options} include:


LDAP Connection Options

-D, --bindDN {bindDN}
    DN to use to bind to the server
    Default value: cn=Directory Manager
-j, --bindPasswordFile {bindPasswordFile}
    Bind password file
-o, --saslOption {name=value}
    SASL bind options
-X, --trustAll
    Trust all server SSL certificates
-P, --trustStorePath {trustStorePath}
    Certificate trust store path
-U, --trustStorePasswordFile {path}
    Certificate trust store PIN file
-K, --keyStorePath {keyStorePath}
    Certificate key store path
-u, --keyStorePasswordFile {keyStorePasswordFile}
    Certificate key store PIN file
-N, --certNickname {nickname}
    Nickname of certificate for SSL client authentication
--connectTimeout {timeout}
    Maximum length of time (in milliseconds) that can be taken to establish a
    connection.  Use '0' to specify no time out
    Default value: 30000

Utility Input/Output Options

-n, --no-prompt
    Use non-interactive mode.  If data in the command is missing, the user is
    not prompted and the tool will fail
-s, --script-friendly
    Use script-friendly mode
--propertiesFilePath {propertiesFilePath}
    Path to the file containing default property values used for command line
    arguments
--noPropertiesFile
    No properties file will be used to get default command line argument values
-r, --refresh {period}
    When this argument is specified, the status command will display its
    contents periodically.  Used to specify the period (in seconds) between two
    displays of the status

General Options

-V, --version
    Display Directory Server version information
-?, -H, --help
    Display this usage information

Take care: by default, the status utility is interactive and asks you for the user bind DN and the password, so the interactive version is not useful for our script.

$ ./status

>>>> Specify Oracle Unified Directory LDAP connection parameters

Administrator user bind DN [cn=Directory Manager]:

Password for user 'cn=Directory Manager':

          --- Server Status ---
Server Run Status:        Started
Open Connections:         6

          --- Server Details ---
Host Name:                dbidg01
Administrative Users:     cn=Directory Manager
Installation Path:        /u01/app/oracle/product/middleware/Oracle_OUD1
Instance Path:            /u01/app/oracle/product/middleware/asinst_1/OUD
Version:                  Oracle Unified Directory 11.1.2.3.170418
Java Version:             1.7.0_141
Administration Connector: Port 4444 (LDAPS)

          --- Connection Handlers ---
Address:Port : Protocol               : State
-------------:------------------------:---------
--           : LDIF                   : Disabled
8899         : Replication (secure)   : Enabled
0.0.0.0:161  : SNMP                   : Disabled
0.0.0.0:1389 : LDAP (allows StartTLS) : Enabled
0.0.0.0:1636 : LDAPS                  : Enabled
0.0.0.0:1689 : JMX                    : Disabled
...
...

And we need to make some adjustments, as in the following example.

./status --trustAll --no-prompt --bindDN cn="Directory Manager" --bindPasswordFile /home/oracle/.oudpwd | head -24
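
Note that “--bindPasswordFile” expects a plain file containing the Directory Manager password. A minimal sketch for creating it (hedged: the password below is only a placeholder; restrict the permissions so that only the oracle user can read it):

$ echo 'MyPlaceholderPassword' > /home/oracle/.oudpwd
$ chmod 600 /home/oracle/.oudpwd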

OK. To complete the script, we can add the status section.

$ cat oud

#!/bin/sh
#
# Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
#
#
# chkconfig: 345 90 30
# description: Oracle Unified Directory startup script
#

# Set the path to the Oracle Unified Directory instance to manage
INSTALL_ROOT="/u01/app/oracle/product/middleware/asinst_1/OUD"
export INSTALL_ROOT

# Specify the path to the Java installation to use
OPENDS_JAVA_HOME="/u01/app/oracle/product/middleware/jdk"
export OPENDS_JAVA_HOME

# Determine what action should be performed on the server
case "${1}" in
start)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/start-ds" --quiet
  exit ${?}
  ;;
stop)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/stop-ds" --quiet
  exit ${?}
  ;;
restart)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/stop-ds" --restart --quiet
  exit ${?}
  ;;
status)
  /bin/su - oracle -- "${INSTALL_ROOT}/bin/status" --trustAll --no-prompt --bindDN cn="Directory Manager" --bindPasswordFile /home/oracle/.oudpwd | head -24
  exit ${?}
  ;;
*)
  echo "Usage:  $0 { start | stop | restart | status }"
  exit 1
  ;;
esac

Last but not least, we need to move it with the root user to the /etc/init.d directory and add it via chkconfig.

# mv /home/oracle/bin/oud /etc/init.d/
# chkconfig --add oud

# chkconfig --list | grep oud
oud             0:off   1:off   2:off   3:on    4:on    5:on    6:off
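
Before rebooting, a quick check through the init framework does not hurt. A hedged sketch (on Oracle Linux 6 the service wrapper simply runs the script from /etc/init.d with the given argument):

# service oud status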

That’s all. The OUD part is done now. But what about the ODSM? We want the WebLogic domain to start up automatically as well, and for that we need another script.

$ cat /home/oracle/bin/weblogic

#!/bin/sh
#
#
# chkconfig: 345 90 30
# description: WebLogic 10.3.6 startup script
#

# Specify the path to the Java installation to use
JAVA_HOME="/u01/app/oracle/product/middleware/jdk"
export JAVA_HOME

BASE_DOMAIN="/u01/app/oracle/product/middleware/user_projects/domains/base_domain"
export BASE_DOMAIN

# Determine what action should be performed on the server
case "${1}" in
start)
  /bin/su - oracle -c "nohup ${BASE_DOMAIN}/bin/startWebLogic.sh &"
  exit ${?}
  ;;
stop)
  /bin/su - oracle -c "${BASE_DOMAIN}/bin/stopWebLogic.sh"
  exit ${?}
  ;;
restart)
  /bin/su - oracle -c "${BASE_DOMAIN}/bin/stopWebLogic.sh"
  /bin/su - oracle -c "nohup ${BASE_DOMAIN}/bin/startWebLogic.sh &"
  exit ${?}
  ;;
*)
  echo "Usage:  $0 { start | stop | restart }"
  exit 1
  ;;
esac

Now it’s time to add the weblogic script to the startup routines as well.

# mv /home/oracle/bin/weblogic /etc/init.d/
# chkconfig --add weblogic
# chkconfig --list | grep weblogic
weblogic        0:off   1:off   2:off   3:on    4:on    5:on    6:off

After everything is set up, it is time to test it. ;-)

# chkconfig --list | egrep '(weblogic|oud)'
oud             0:off   1:off   2:off   3:on    4:on    5:on    6:off
weblogic        0:off   1:off   2:off   3:on    4:on    5:on    6:off

# init 6

Now just check if everything came up correctly.

Conclusion

The OUD comes with the quite useful “create-rc-script” script. However, in case you have the ODSM and want the OUD status section as well, some adjustments have to be made.

 

The post OUD 11.1.2.3 – How to create an OUD Start/Stop/Status script on Oracle Linux 6 appeared first on Blog dbi services.

Webcast: Testing Recommendations for Oracle E-Business Suite Applications

Steven Chan - Fri, 2017-06-16 02:00

Oracle University has a large collection of free recorded webcasts that EBS system administrators might find useful. Here's another webcast on testing EBS environments:

Prashanti Madiredd, Senior Director of Quality Assurance, shares best practices from the Oracle E-Business Suite Development team on the following topics:

  • How the Oracle Development Quality Assurance team tests Oracle E-Business Suite
  • Factors for customers to consider during functional testing
  • Approaches for new-feature and regression testing
  • Reducing risk and production outages due to insufficient testing
  • Capturing and presenting metrics to showcase the ROI of the testing investment
  • Leveraging automation for testing Oracle E-Business Suite applications

This material was presented at Oracle OpenWorld 2014.

Related Articles

 

Categories: APPS Blogs

No data is logged in flashback_transaction_query view

Tom Kyte - Thu, 2017-06-15 19:46
Hi, I have been using flashback_transaction_query view since the previous version but in 12.1.0, I got the strange situation that is not explained with any reference. As far as I know, user has to enable supplemental logging to log some flashba...
Categories: DBA Blogs

login as sysdba remotely without any other prompts / or grant a user for example system shutdown and startup privilege

Tom Kyte - Thu, 2017-06-15 19:46
Hi , we are testing remote db setup and from application team we have to run scripts which were using sysdba earlier , but now we want to run those same scripts on a remote db from a application machine . So there are few places where we have to s...
Categories: DBA Blogs

Update statement with correlated subquery that intentionally passes multiple rows

Tom Kyte - Thu, 2017-06-15 19:46
I am trying to update one table using a subquery that totals several transactions from another table. I cannot figure how to link specific rows from my parent table to the rows in my subquery. I keep getting the ORA-01427: Subquery returns more than ...
Categories: DBA Blogs

HIUG Interact 2017

Jim Marion - Thu, 2017-06-15 16:11

Are you attending Interact 2017 this weekend? I will be leading a PeopleTools hands-on workshop on Sunday from 1:30 PM to 4:30 PM in Panzacola F-4. Because this is hands-on, please bring your laptop. All session activities will be completed over a remote desktop connection, so as long as you have a Remote Desktop Client and a full battery charge, you should be ready. In this session we will cover a variety of topics including classic global and component branding as well as fluid navigation and page development. I look forward to seeing you in Orlando!

12c NSSn process for Data Guard SYNC transport

Yann Neuhaus - Thu, 2017-06-15 10:15

In a previous post, https://blog.dbi-services.com/dataguard-wait-events-have-changed-in-12c/, I mentioned the new processes: NSA for ASYNC transport and NSS for SYNC transport. I’m answering a bit late to a comment about the number of processes: yes, there is one NSSn process per LOG_ARCHIVE_DEST_n destination in SYNC, and the numbers match.

Here is my configuration with two physical standby:
DGMGRL> show configuration
 
Configuration - orcl
 
Protection Mode: MaxPerformance
Members:
orcla - Primary database
orclb - Physical standby database
orclc - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 56 seconds ago)

Both are in SYNC:
DGMGRL> show database orclb logxptmode;
LogXptMode = 'sync'
DGMGRL> show database orclc logxptmode;
LogXptMode = 'sync'

I can see two NSS processes:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 4952 1 0 16:05 ? 00:00:00 ora_nss3_ORCLA
oracle 5322 1 0 16:17 ? 00:00:00 ora_nss2_ORCLA

Here are the two log archive dest:
SQL> select name,value from v$parameter where regexp_like(name,'^log_archive_dest_[23]$');
NAME VALUE
---- -----
log_archive_dest_2 service="ORCLB", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="orclb" net_timeout=30, valid_for=(online_logfile,all_roles)
log_archive_dest_3 service="ORCLC", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="orclc" net_timeout=30, valid_for=(online_logfile,all_roles)

I set the 3rd one in ASYNC:
DGMGRL> edit database orclc set property logxptmode=ASYNC;
Property "logxptmode" updated

The NSS3 has stopped:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5322 1 0 16:17 ? 00:00:00 ora_nss2_ORCLA

I set the 2nd destination to ASYNC:
DGMGRL> edit database orclb set property logxptmode=ASYNC;
Property "logxptmode" updated

The NSS2 has stopped:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"

Now starting the 3rd destination first:
DGMGRL> edit database orclc set property logxptmode=SYNC;
Property "logxptmode" updated

I can see that nss3 has started, as it is log_archive_dest_3 which is now in SYNC:
DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5368 1 0 16:20 ? 00:00:00 ora_nss3_ORCLA

Then starting the second one:
DGMGRL> edit database orclb set property logxptmode=SYNC;
Property "logxptmode" updated

Here are the two processes:

DGMGRL> host ps -edf | grep --color=auto ora_nss[0-9]
Executing operating system command(s):" ps -edf | grep --color=auto ora_nss[0-9]"
oracle 5368 1 0 16:20 ? 00:00:00 ora_nss3_ORCLA
oracle 5393 1 0 16:20 ? 00:00:00 ora_nss2_ORCLA

So if you see some SYNC Remote Write events in ASH, look at the program name to know which destination it is.
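
A minimal sketch of such a check (hedged: querying v$active_session_history requires the Diagnostics Pack license, and the event name is the 12c one mentioned above):

$ sqlplus -s / as sysdba <<'EOF'
select program, event, count(*)
from v$active_session_history
where event like '%SYNC Remote Write%'
group by program, event;
EOF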

 

The post 12c NSSn process for Data Guard SYNC transport appeared first on Blog dbi services.

A first look at EDB Postgres Enterprise Manager 7 beta – Connecting a PostgreSQL instance

Yann Neuhaus - Thu, 2017-06-15 10:14

In the last post we did a click/click/click setup of the PEM server. What we want to do now is attach a PostgreSQL instance to the PEM server so that we can monitor and administer it. For that we need to install the PEM agent on a host where a PostgreSQL instance is running (192.168.22.249 in my case, which runs PostgreSQL 10 Beta1). Let’s go…

As usual, when you want the systemd services to be generated automatically, you should run the installation as root:

[root@pemclient postgres]# ls
pem_agent-7.0.0-beta1-1-linux-x64.run
[root@pemclient postgres]# chmod +x pem_agent-7.0.0-beta1-1-linux-x64.run 
[root@pemclient postgres]# ./pem_agent-7.0.0-beta1-1-linux-x64.run 

The installation itself is not a big deal; just click through the installer screens.


Once done we have a new systemd service:

[root@pemclient postgres]# systemctl list-unit-files | grep pem
pemagent.service                              enabled 

… and the processes that make up the PEM agent:

[root@pemclient postgres]# ps -ef | grep pem
root      3454     1  0 16:40 ?        00:00:00 /u01/app/postgres/product/pem7/agent/agent/bin/pemagent -c /u01/app/postgres/product/pem7/agent/agent/etc/agent.cfg
root      3455  3454  0 16:40 ?        00:00:00 /u01/app/postgres/product/pem7/agent/agent/bin/pemworker -c /u01/app/postgres/product/pem7/agent/agent/etc/agent.cfg --pid 3454
root      3515  2741  0 16:43 pts/0    00:00:00 grep --color=auto pem

Heading back to the PEM web interface, the new agent is visible immediately.

So, let’s add the instance.

Of course, we need to allow connections to our PostgreSQL instance from the PEM server. Adding this to the pg_hba.conf and reloading the instance fixes the issue:

host    all             all             192.168.22.248/32       md5
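
A minimal sketch of the reload step (hedged: assuming the instance runs under the postgres OS user; a pg_ctl reload against the data directory works just as well):

$ psql -U postgres -c "select pg_reload_conf();"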

Once done, the instance is there.

In the next post we’ll set up some monitoring for our newly added PostgreSQL instance.

 

The post A first look at EDB Postgres Enterprise Manager 7 beta – Connecting a PostgreSQL instance appeared first on Blog dbi services.

A first look at EDB Postgres Enterprise Manager 7 beta

Yann Neuhaus - Thu, 2017-06-15 08:09

In case you missed it: EnterpriseDB has released the beta of Postgres Enterprise Manager 7. If installation is as easy as for the current version, it should just be a matter of clicking Next. Let’s see.

Because the installer will create the systemd services, installation should be done as root:

[root@edbpem tmp]$ ls -l
total 289076
-rw-r--r--. 1 root root 296009946 Jun  1 18:58 pem_server-7.0.0-beta1-1-linux-x64.run
[root@edbpem tmp]$ chmod +x pem_server-7.0.0-beta1-1-linux-x64.run 
[root@edbpem tmp]$ ./pem_server-7.0.0-beta1-1-linux-x64.run 

From now on everything is graphical and straightforward; just click through the installer screens.


What you get from a process perspective is this:

[root@edbpem tmp]$ ps -ef | grep pem
postgres 13462     1  0 19:17 ?        00:00:00 /u01/app/postgres/product/96/db_2/bin/postgres -D /u02/pgdata/pem
root     13869     1  0 19:18 ?        00:00:00 /u01/app/postgres/product/pem7/agent/bin/pemagent -c /u01/app/postgres/product/pem7/agent/etc/agent.cfg
root     13870 13869  0 19:18 ?        00:00:00 /u01/app/postgres/product/pem7/agent/bin/pemworker -c /u01/app/postgres/product/pem7/agent/etc/agent.cfg --pid 13869
postgres 13883 13462  1 19:18 ?        00:00:02 postgres: agent1 pem 127.0.0.1(53232) idle
postgres 13885 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53234) idle
postgres 13886 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53236) idle
postgres 13887 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53238) idle
postgres 13888 13462  0 19:18 ?        00:00:00 postgres: agent1 pem 127.0.0.1(53240) idle
pem      13938 13937  0 19:18 ?        00:00:00 EDBPEM                                                              -k start -f /u01/app/postgres/product/EnterpriseDB-ApacheHTTPD/apache/conf/httpd.conf
root     14301 11016  0 19:20 pts/0    00:00:00 grep --color=auto pem

Two new systemd services have been created, so PEM should start up and shut down automatically when the server reboots:

[root@edbpem tmp]# systemctl list-unit-files | egrep "pem|postgres"
pemagent.service                              enabled 
postgresql-9.6.service                        enabled 
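
A quick hedged check that the agent really is up (the same systemctl status call works for the postgresql-9.6 service):

[root@edbpem tmp]# systemctl status pemagent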

Let’s connect to PEM: https://192.168.22.248:8443/pem


If you have an EDB subscription, now is the time to enter the product key.


What you immediately notice is that the look and feel has changed to that of pgadmin4 (a fat client for PEM, as in the current version, is no longer available).


In the next post we’ll add a PostgreSQL instance and start to monitor it.

 

The post A first look at EDB Postgres Enterprise Manager 7 beta appeared first on Blog dbi services.

Unify - An Insight Into the Product

Rittman Mead Consulting - Thu, 2017-06-15 06:00

Monday, 12 Jun saw the official release of Unify, Rittman Mead's very own connector between Tableau and OBIEE. It provides a simple but powerful integration between the two applications that allows you to execute queries through OBIEE and manipulate and render the datasets using Tableau.


Why We Made It

One of the first questions, of course, is why we would want to do this in the first place. The excellent thing about OBI is that it acts as an abstraction layer on top of a database, allowing analysts to write efficient and secure reports without going into the detail of writing queries. As with any abstraction, it trades capability for simplicity. Products like Tableau and Data Visualiser seek to reverse this trade, putting the power back in the hands of the report builder. However, without quoting Spiderman, care should be taken when doing this.

The result can be that users write inefficient queries, or worse still, incorrect ones. We know there will be some out there that use self service tools as purely a visualisation engine, simply dropping pre-made datasets into it. If you are looking to produce sustainable, scalable and accessible reporting systems, you need to tackle the problem both at the data acquisition stage as well as the communication stage at the end.

If you are already meeting both requirements, perhaps by using OBI with Data Visualiser (formerly Visual Analyser) or by other means, then that’s perfectly good. However, we know from experience that many of you out there have already invested heavily in both OBI and Tableau as separate solutions. Rather than have them linger in a state of conflict, we’d rather nurse them into a state of symbiosis.

The idea behind Unify is that it bridges this gap, allowing you to use your OBIEE system as an efficient data acquisition platform and Tableau as an intuitive playground for users who want to do a bit more with their data. Unify works by using the Tableau Web Data Connector as a data source, with our customised software acting as an interface for creating OBIEE queries and then exporting them into Tableau.

How It Works

Unify uses Tableau’s latest Web Data Connector data source to allow us to dynamically query OBIEE and extract data into Tableau. Once a dataset is extracted, it can be used in Tableau as normal, taking advantage of all of Tableau’s powerful features. This native integration means you can add in OBIEE data sources just as you would any others (Excel files, SQL results etc.). Then you can join the data sources using Tableau itself, even if they don’t join up together in the background.

First you open up Tableau and add a Web Data Connector source.


Then give the link to the Unify application, e.g. http://localhost:8080/unify. This will open up Unify and prompt you to log in with your OBIEE credentials. This is important, as Unify operates through the OBIEE server layer in order to maintain all the security permissions that you’ve already defined.


Now that the application is open, you can make OBIEE queries using the interface provided. This is a bit like Answers and allows you to query from any of your available subject areas and presentation columns. The interface also allows you to use filtering, column formulae and OBIEE variables much in the same way as Answers does.

Alternatively, you can open up an existing report that you've made in OBIEE and then edit it at your leisure. Unify will display a preview of the dataset so you can tweak it until you are happy that is what you want to bring into Tableau.


Once you're happy with your dataset, click the Unify button in the top right and it will export the data into Tableau. From this point, it behaves exactly as Tableau does with any other data set. This means you can join your OBIEE dataset to external sources, or bring in queries from multiple subject areas from OBIEE and join them in Tableau. Then of course, take advantage of Tableau's powerful and interactive visualisation engine.


Unify Server

Unify comes in desktop and server flavours. The main difference between the two is that the server version allows you to upload Tableau workbooks with OBIEE data to Tableau Server and refresh them. With the desktop version, you will only be able to upload static workbooks that you’ve created; with the server version of Unify, you can tell Tableau Server to refresh data from OBIEE in accordance with a schedule. This lets you produce production-quality dashboards for your users, sourcing data from OBIEE as well as any other source you choose.

Unify Your Data

In a nutshell, Unify allows you to combine the best aspects of two very powerful BI tools and removes the need to build all of your reporting artefacts from scratch if you already have a good, working system.

I hope you've found this brief introduction to Unify informative and if you have OBIEE and would like to try it with Tableau, I encourage you to register for a free desktop trial. If you have any questions, please don't hesitate to get in touch.

Categories: BI & Warehousing

What Tools Do You Use to Patch EBS 12.2 ORACLE_HOMEs?

Steven Chan - Thu, 2017-06-15 02:00

Oracle E-Business Suite 12.2 has several technology stack components.  Each component has its own ORACLE_HOME:

  • Oracle Fusion Middleware
    • Oracle WebLogic Server (WLS) 10.3.6
    • OHS (WebTier) 11.1.1 and Oracle Common (Common Modules)
  • Application Server (OracleAS)
    • Forms and Reports 10.1.2
  • Oracle Database
    • Oracle Database Server 11.2 or 12.0

Each of these technology stack components can be patched individually.  But what do you use to patch them?  For a quick cheatsheet, see:

References

Related Articles

Categories: APPS Blogs

Numeric data not working which used to work earlier

Tom Kyte - Thu, 2017-06-15 01:46
Hi Tom, We have a package which was running fine from quite a sometime in production. All of the sudden, the report associated to this package got error out. The reason being an and condition in the procedure inside the package i.e. and ct.tran...
Categories: DBA Blogs

unable to use AUTOTRACE in SQL Developer Version 4.2.0.17.089

Tom Kyte - Thu, 2017-06-15 01:46
unable to use AUTOTRACE in SQL Developer Version 4.2.0.17.089 it works fine in sqldeveloper-4.1.0.19.07 with the below enclosed setup GRANT SELECT ON SYS.V_$MYSTAT TO RL_AUTOTRACE; GRANT SELECT ON SYS.V_$SESSION TO RL_AUTOTRACE; GRANT SELECT ON S...
Categories: DBA Blogs

Displaying PDF files stored in the Database

Tom Kyte - Thu, 2017-06-15 01:46
Tom, How can we display PDF files stored in the database, using htp.p (OAS PL/SQL cartridge )calls and dbms_lob ? We are developing a document archival and display app. Using ctxload, I can upload the files. Now, I would like to display the file...
Categories: DBA Blogs

Notes From Orlando

Floyd Teter - Wed, 2017-06-14 17:13
I thought y'all would appreciate some notes from last week's OHUG conference in Orlando, Florida.  So, in no particular order, my observations...
  • The sessions were pretty evenly divided between Oracle E-Business, PeopleSoft and HCM Cloud.  Right around 1/3 of total sessions for each track.
  • The mix of attendees, from what I could tell, was about half HCM Cloud users and half on either EBS or PeopleSoft. And of those on EBS or PeopleSoft, about half were exploring the transformation of moving to HCM Cloud.
  • Attendance at this year's conference seemed a little light, maybe down 10 or 15 percent from prior years.  I'm guessing that was caused by a combination of following so closely on the heels of Oracle HCM Cloud World and the fact that it's always tough for a user group conference to get a big draw in Orlando (I don't know why; I just know from experience that it's generally true).
  • I did not run into many decision makers this year...very few upper management executives.  But tons of influencers: people who implement and use the products.  I suspect most decision makers are going to Oracle HCM Cloud World while most of those responsible for executing those decisions attend OHUG.
  • A follow-on to the prior point: attendees were focused on the fundamentals of implementation and use ("how do I do..."), with few strategic discussions.
  • You couldn't  walk more than 10 feet without encountering a Taleo session or a hallway discussion about Taleo.  Maybe the top topic of the conference.
  • I tried several times to catch Oracle's Justin Challenger, who ran the conference Report Jam this year.  But every time I tried, he was heads down with a group of users designing reports. So I have to think that the Report Jam was a big hit.
  • Likewise, the UX Lab was abuzz with activity whenever I stopped by there.
  • When the folks in Orlando say they're going to get rain, they're not messing around.  It rained hard...and usually more than once...every day I was there.
  • There may not be anyone who understands global implementations of HCM Cloud better than Oracle's Jon McGoy.  The breadth and depth of the material he presented, plus his off-the-cuff answers to questions, was pretty amazing.
So, overall, I think the OHUG conference is in the midst of a transition.  First, it's becoming more cloud-centric; you can see it in both the session tracks and the focus of the attendees.  Second, it's become a "how to" type of conference: more emphasis on using, integrating, and extending the applications, less emphasis on strategic decisions.  Third, the type of attendee is changing: more influencers and users, fewer decision makers (hot tip: some folks think that's a good thing).

I'm already looking forward to next year's OHUG conference.  Can't wait to see how the transition continues to shake out.

Bash: The most useless command (3)

Dietrich Schroff - Wed, 2017-06-14 12:48
The blog statistics show that many people read the posts about useless commands. So here is the next candidate, suggested by an anonymous comment: sl. This is my most hated program on a shell. Why?

NAME
       sl − display animations aimed to correct users who accidentally enter sl instead of ls.

And this program is not interruptible by ctrl-c. It shows a train running from the left to the right and blocks your shell for at least 2-3 seconds (depending on the width of your shell window):

$ time sl
real 0m3.347s
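
If the train keeps catching you out, a hedged one-line remedy (assuming bash; it simply shadows sl with ls for your user) is:

$ echo 'alias sl=ls' >> ~/.bashrc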

Nice Trick to Get ADF LOV Description Text

Andrejus Baranovski - Wed, 2017-06-14 12:29
I will tell you about a nice trick to display the LOV description. Usually you would create a separate attribute in the VO for the LOV description and base the LOV on this attribute (read about it in my previous post - Defining the LOV on a Reference Attribute in Oracle ADF 11g). But there is one more way - it makes it much faster to define the LOV on the description, but you should be aware of the additional SQL statement executed to fetch the description text.

You could set a converter for the ADF UI LOV, and then the LOV component will use the description by itself, without any additional configuration.

It is important to set the correct order for the LOV display attributes. Make sure the description attribute is first in the list for the converter approach to work.


Go to the ADF UI LOV component and set the converter property. This must point to the binding object’s converter expression:
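
For reference, a minimal sketch of what the two settings amount to in the page source (hedged: the binding and attribute names are hypothetical, not taken from the sample application):

<af:inputComboboxListOfValues id="jobLov"
    value="#{bindings.JobId.inputValue}"
    model="#{bindings.JobId.listOfValuesModel}"
    converter="#{bindings.JobId.converter}"
    autoSubmit="true"/>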


What you get: the LOV field displays the description, and the converter is able to mask the ID value behind it.


It offers a nice usability option: you can start typing the description and press Tab, and the value will be auto-completed (if autoSubmit is set to true).


Behind the scenes it executes the LOV SQL to get the description text (this SQL is executed on page load too, which is not ideal when you have many LOV fields; in such situations it is better to use a separate attribute for the description in the same VO).


When the LOV value is changed and the changes are saved, ADF BC updates the ID value only (as expected).


Download sample application - ADFLovDescriptionApp.zip.
