Yann Neuhaus

dbi services technical blog

Documentum – Setup WildFly in HTTPS

Sun, 2017-05-14 04:24

In a previous blog (see this one), I mentioned some differences between JBoss 7.1.1 (coming with CS 7.2 / xPlore 1.5) and WildFly 9.0.1 (coming with CS 7.3 / xPlore 1.6). In this blog, I will talk about what is needed to set up a WildFly instance in HTTPS only or HTTPS+HTTP. This blog applies to both Content Servers and Full Text Servers. On the Full Text Server side, you might be aware of the ConfigSSL.groovy script which can be used to set up the Dsearch and IndexAgents in HTTPS (see this blog for more information). But if you are using a Multi-node Full Text setup (which most likely involves a lot of CPS (Content Processing Service) instances) or if you are just installing additional CPS (or Remote CPS), then what can you do?

 

The ConfigSSL.groovy script can only be used to configure a PrimaryDsearch or an IndexAgent in SSL; it’s not able to configure a CPS in SSL. Also, the setup of a CPS in SSL isn’t described anywhere in the official EMC (/OpenText) documentation, so that’s the main purpose of this blog: what is needed to set up a WildFly instance in HTTPS, so you can use that for your CPS as well as for your JMS (Java Method Server).

 

As a prerequisite, you need to have a Java Keystore. There is plenty of documentation around that so I won’t describe this part. In this blog, I will use a Full Text Server and I will configure a CPS in HTTPS. If you want to do that on the Content Server, just adapt the few paths accordingly. As a first step, I will set up two environment variables which will contain the passwords for the Java cacerts and my Java Keystore (JKS):

[xplore@full_text_server_01 ~]$ cd /tmp/certs/
[xplore@full_text_server_01 certs]$ read -s -p "  ----> Please enter the Java cacerts password: " jca_pw
[xplore@full_text_server_01 certs]$ read -s -p "  ----> Please enter the JKS password: " jks_pw

 

When you enter the first read command, the prompt isn’t returned. Just write/paste the jca password and press enter. Then the prompt will be returned and you can execute the second read command in the same way to write/paste the jks password. Now you can update the Java cacerts and import your JKS. The second and third commands below (the ones with the “-delete”) will remove any entries with the aliases mentioned (dbi_root_ca and dbi_int_ca). If you aren’t sure about what you are doing, don’t execute these two commands. If the alias already exists in the Java cacerts, you will see an error while executing the commands. In such cases, just use another alias (or remove the existing one using the “-delete” commands…):

[xplore@full_text_server_01 certs]$ cp $JAVA_HOME/jre/lib/security/cacerts $JAVA_HOME/jre/lib/security/cacerts_bck_$(date +%Y%m%d)
[xplore@full_text_server_01 certs]$ $JAVA_HOME/bin/keytool -delete -noprompt -alias dbi_root_ca -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass ${jca_pw} > /dev/null 2>&1
[xplore@full_text_server_01 certs]$ $JAVA_HOME/bin/keytool -delete -noprompt -alias dbi_int_ca -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass ${jca_pw} > /dev/null 2>&1
[xplore@full_text_server_01 certs]$ $JAVA_HOME/bin/keytool -import -trustcacerts -noprompt -alias dbi_root_ca -keystore $JAVA_HOME/jre/lib/security/cacerts -file "/tmp/certs/dbi_root_certificate.cer" -storepass ${jca_pw}
[xplore@full_text_server_01 certs]$ $JAVA_HOME/bin/keytool -import -trustcacerts -noprompt -alias dbi_int_ca -keystore $JAVA_HOME/jre/lib/security/cacerts -file "/tmp/certs/dbi_int_certificate.cer" -storepass ${jca_pw}

 

In the commands above, you will need to customize at least the “-file” parameter. Above, I used one Root CA and one Intermediate CA. Depending on your needs, you might need to do the same thing (if you are using a trust chain with 2 CAs) or just import the SSL Certificate directly, and so on.
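If you want to double check that the CAs were properly imported, a quick listing of the Java cacerts should show the two aliases as trustedCertEntry. A minimal sketch, assuming the same aliases as above:

[xplore@full_text_server_01 certs]$ $JAVA_HOME/bin/keytool -list -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass ${jca_pw} | grep -i "dbi_.*_ca"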

 

When this is done, you need to stop your application. In this case, I will stop my CPS (named “Node2_CPS1” here). To stay more general, I will define a new variable “server_name”. You can use this variable to keep the same commands on the different servers; just use the appropriate value: “MethodServer” (for a JMS), “PrimaryDsearch” (or the name of your Dsearch), “CPS” (or the name of your CPS) or anything else:

[xplore@full_text_server_01 certs]$ export server_name="Node2_CPS1"
[xplore@full_text_server_01 certs]$ $JBOSS_HOME/server/stop${server_name}.sh

 

In the command above, $JBOSS_HOME is the location of the JBoss/WildFly instance. For an xPlore 1.6 for example, it will be “$XPLORE_HOME/wildfly9.0.1”. For a Content Server 7.3, it will be “$DOCUMENTUM_SHARED/wildfly9.0.1”. I kept the name “JBOSS_HOME” because even if the product has been renamed, I think it will take time before everybody uses the name WildFly, just like there are still a lot of people saying “docbase” even though this has been replaced with “repository” since CS 7.0…
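In case this variable isn’t already defined in your environment, you can simply export it yourself. A minimal sketch for both cases (the installation owner shown in the prompt is just illustrative; pick the line matching your server type):

[xplore@full_text_server_01 certs]$ export JBOSS_HOME=$XPLORE_HOME/wildfly9.0.1            # Full Text Server (xPlore 1.6)
[dmadmin@content_server_01 certs]$ export JBOSS_HOME=$DOCUMENTUM_SHARED/wildfly9.0.1       # Content Server 7.3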

 

Then let’s move the JKS file to the right location:

[xplore@full_text_server_01 certs]$ mv /tmp/certs/full_text_server_01.jks $JBOSS_HOME/server/DctmServer_${server_name}/configuration/my.keystore
[xplore@full_text_server_01 certs]$ chmod 600 $JBOSS_HOME/server/DctmServer_${server_name}/configuration/my.keystore

 

Now you can configure the standalone.xml file for this application to handle the HTTPS communications because by default only HTTP is enabled:

[xplore@full_text_server_01 certs]$ cd $JBOSS_HOME/server/DctmServer_${server_name}/configuration/
[xplore@full_text_server_01 configuration]$ cp standalone.xml standalone.xml_bck_$(date +%Y%m%d)
[xplore@full_text_server_01 configuration]$ 
[xplore@full_text_server_01 configuration]$ sed -i 's/inet-address value="${jboss.bind.address.management:[^}]*}/inet-address value="${jboss.bind.address.management:127.0.0.1}/' standalone.xml
[xplore@full_text_server_01 configuration]$ sed -i '/<security-realm name="sslSecurityRealm"/,/<\/security-realm>/d' standalone.xml
[xplore@full_text_server_01 configuration]$ sed -i '/<https-listener.*name="default-https"/d' standalone.xml
[xplore@full_text_server_01 configuration]$ sed -i '/<security-realms>/a \            <security-realm name="sslSecurityRealm">\n                <server-identities>\n                    <ssl>\n                        <keystore path="'$JBOSS_HOME'/server/DctmServer_'${server_name}'/configuration/my.keystore" keystore-password="'${jks_pw}'"/>\n                    </ssl>\n                </server-identities>\n            </security-realm>' standalone.xml

 

So what are the commands above doing?

  • Line 2: Backup of the standalone.xml file
  • Line 4: Changing the default IP on which WildFly is listening for the Management Interface (by default 0.0.0.0 which means no restriction) to 127.0.0.1 so it’s only accessible locally
  • Line 5: Removing any existing Security Realm named “sslSecurityRealm”, if any (there isn’t any unless you already created one with this specific name…)
  • Line 6: Removing any HTTPS listener, if any (there isn’t any unless you already setup this application in HTTPS…)
  • Line 7: Creating a new Security Realm named “sslSecurityRealm” containing all the needed properties: keystore path and password. You can also define additional parameters like the alias, and so on. $JBOSS_HOME, ${server_name} and ${jks_pw} are placed outside the single-quoted sed expression (the quote is closed just before them and reopened just after) so the shell expands them before they are written to the XML file. A quick way to verify the result is sketched just after this list.
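To make sure the sed commands did what you expect, you can simply display the new realm from the standalone.xml file (note that the keystore password will appear in clear text, since that is how it is stored in this file):

[xplore@full_text_server_01 configuration]$ grep -A6 '<security-realm name="sslSecurityRealm"' standalone.xml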

 

Once this has been done, your security realm is defined but it’s not yet used… So here comes the choice to either completely deactivate HTTP and keep only HTTPS, or use both at the same time. From my point of view, it’s always better to keep only HTTPS, but in DEV environments for example, you might get requests from the developers to keep HTTP enabled because it’s simpler for them to develop something (web services for example) over HTTP first and then secure it. So it’s up to you!

 

1. HTTP and HTTPS

To keep both enabled, you just have to create a new listener in the standalone.xml file which will be dedicated to HTTPS. This is the command that can do it:

[xplore@full_text_server_01 configuration]$ sed -i '/<http-listener .*name="default".*/a \                <https-listener name="default-https" socket-binding="https" security-realm="sslSecurityRealm" enabled-cipher-suites="PUT_HERE_SSL_CIPHERS"/>' standalone.xml

 

In the above command, just replace “PUT_HERE_SSL_CIPHERS” with the SSL ciphers that you want your application to use. Depending on how SSL is configured in your environment, the list of needed ciphers will change, so I will let you fill in that blank. Of course, if the name of your http-listener isn’t “default”, then you will need to adapt the command above, but it is the default name for all WildFly instances, so unless you customized this already, this will do.
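Purely as an illustration (the two cipher suites below are only an example, not a recommendation; use the list required by your security policy), the resulting line in the standalone.xml file could then look like this:

[xplore@full_text_server_01 configuration]$ grep "https-listener" standalone.xml
                <https-listener name="default-https" socket-binding="https" security-realm="sslSecurityRealm" enabled-cipher-suites="TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"/>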

 

2. HTTPS Only

If you don’t want to allow HTTP communications, then you just need to remove the HTTP listener and replace it with the HTTPS listener. This is the command that can do it:

[xplore@full_text_server_01 configuration]$ sed -i 's,<http-listener .*name="default".*,<https-listener name="default-https" socket-binding="https" security-realm="sslSecurityRealm" enabled-cipher-suites="PUT_HERE_SSL_CIPHERS"/>,' standalone.xml

 

 

No matter if you choose HTTP+HTTPS or HTTPS only, you then have to start your application again using your preferred way. Here is an example command that will do it:

[xplore@full_text_server_01 configuration]$ sh -c "cd $JBOSS_HOME/server/; nohup ./start${server_name}.sh & sleep 5; mv nohup.out nohup-${server_name}.out"
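To follow the startup, you can simply tail the server.log of this instance (the default WildFly log location; adapt the path if you relocated the logs):

[xplore@full_text_server_01 configuration]$ tail -f $JBOSS_HOME/server/DctmServer_${server_name}/log/server.log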

 

When the application has been started, you can try to access the application URL; a quick command-line check is also sketched after this list. These are some examples using the most common ports:

  • JMS => https://content_server_01:9082/DmMethods/servlet/DoMethod
  • PrimaryDsearch => https://full_text_server_01:9302/dsearch
  • IndexAgent => https://full_text_server_01:9202/IndexAgent
  • CPS => https://full_text_server_01:9402/cps/ContentProcessingService?wsdl
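If you prefer the command line, a simple curl against one of these URLs validates that the HTTPS listener is up. An example for the CPS (-k skips the certificate verification; drop it if your CA chain is already trusted by the client):

[xplore@full_text_server_01 configuration]$ curl -k -I "https://full_text_server_01:9402/cps/ContentProcessingService?wsdl"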

 

 


Documentum – WildFly not able to start in a CS 7.3 / xPlore 1.6

Sun, 2017-05-14 03:15

As mentioned in previous blogs (like this one), we were doing some testing on the newly released versions of Documentum (CS 7.3, xPlore 1.6, and so on). In this blog, I will talk about something that can prevent the WildFly instances from starting properly (Red Hat renamed the JBoss Application Server to WildFly in 2013, starting with version 8.0). As this issue is linked to WildFly, you can understand that it might happen on both Content Servers 7.3 and Full Text Servers (xPlore 1.6), but in this blog, I will use a Content Server for the example.

 

So when I first installed a Content Server 7.3 (in full silent mode of course!), the Java Method Server wasn’t running properly. Actually worse than that, it wasn’t even starting completely. Here are the entries put in the JMS log file (server.log):

JAVA_OPTS already set in environment; overriding default settings with values: -XX:+UseParallelOldGC -XX:ReservedCodeCacheSize=128m -XX:MaxMetaspaceSize=256m -XX:NewRatio=4 -Xms4096m -Xmx4096m -XX:NewSize=256m -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=8 -Xss256k -Djava.awt.headless=true -Djava.io.tmpdir=$DOCUMENTUM/temp/JMS_Temp -Djboss.server.base.dir=$DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer
=========================================================================

  JBoss Bootstrap Environment

  JBOSS_HOME: $DOCUMENTUM_SHARED/wildfly9.0.1

  JAVA: $DOCUMENTUM_SHARED/java64/JAVA_LINK/bin/java

  JAVA_OPTS:  -server -XX:+UseCompressedOops  -server -XX:+UseCompressedOops -XX:+UseParallelOldGC -XX:ReservedCodeCacheSize=128m -XX:MaxMetaspaceSize=256m -XX:NewRatio=4 -Xms4096m -Xmx4096m -XX:NewSize=256m -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=8 -Xss256k -Djava.awt.headless=true -Djava.io.tmpdir=$DOCUMENTUM/temp/JMS_Temp -Djboss.server.base.dir=$DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer

=========================================================================

15:48:41,985 INFO  [org.jboss.modules] (main) JBoss Modules version 1.4.3.Final
15:48:44,301 INFO  [org.jboss.msc] (main) JBoss MSC version 1.2.6.Final
15:48:44,502 INFO  [org.jboss.as] (MSC service thread 1-8) WFLYSRV0049: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) starting
15:48:50,818 INFO  [org.jboss.as.controller.management-deprecated] (ServerService Thread Pool -- 28) WFLYCTL0028: Attribute 'enabled' in the resource at address '/subsystem=datasources/data-source=ExampleDS' is deprecated, and may be removed in future version. See the attribute description in the output of the read-resource-description operation to learn more about the deprecation.
15:48:50,819 INFO  [org.jboss.as.controller.management-deprecated] (ServerService Thread Pool -- 27) WFLYCTL0028: Attribute 'job-repository-type' in the resource at address '/subsystem=batch' is deprecated, and may be removed in future version. See the attribute description in the output of the read-resource-description operation to learn more about the deprecation.
15:48:50,867 WARN  [org.jboss.messaging] (ServerService Thread Pool -- 30) WFLYMSG0071: There is no resource matching the expiry-address jms.queue.ExpiryQueue for the address-settings #, expired messages from destinations matching this address-setting will be lost!
15:48:50,868 WARN  [org.jboss.messaging] (ServerService Thread Pool -- 30) WFLYMSG0072: There is no resource matching the dead-letter-address jms.queue.DLQ for the address-settings #, undelivered messages from destinations matching this address-setting will be lost!
15:48:51,024 INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) WFLYDS0004: Found acs.ear in deployment directory. To trigger deployment create a file called acs.ear.dodeploy
15:48:51,026 INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) WFLYDS0004: Found ServerApps.ear in deployment directory. To trigger deployment create a file called ServerApps.ear.dodeploy
15:48:51,109 INFO  [org.xnio] (MSC service thread 1-5) XNIO version 3.3.1.Final
15:48:51,113 INFO  [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0039: Creating http management service using socket-binding (management-http)
15:48:51,128 INFO  [org.xnio.nio] (MSC service thread 1-5) XNIO NIO Implementation Version 3.3.1.Final
15:48:51,164 INFO  [org.jboss.remoting] (MSC service thread 1-5) JBoss Remoting version 4.0.9.Final
15:48:51,325 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 39) WFLYCLINF0001: Activating Infinispan subsystem.
15:48:51,335 INFO  [org.wildfly.extension.io] (ServerService Thread Pool -- 38) WFLYIO001: Worker 'default' has auto-configured to 8 core threads with 64 task threads based on your 4 available processors
15:48:51,374 INFO  [org.jboss.as.naming] (ServerService Thread Pool -- 47) WFLYNAM0001: Activating Naming Subsystem
15:48:51,353 INFO  [org.jboss.as.security] (ServerService Thread Pool -- 54) WFLYSEC0002: Activating Security Subsystem
15:48:51,359 WARN  [org.jboss.as.txn] (ServerService Thread Pool -- 55) WFLYTX0013: Node identifier property is set to the default value. Please make sure it is unique.
15:48:51,387 INFO  [org.jboss.as.webservices] (ServerService Thread Pool -- 57) WFLYWS0002: Activating WebServices Extension
15:48:51,410 INFO  [org.jboss.as.jsf] (ServerService Thread Pool -- 45) WFLYJSF0007: Activated the following JSF Implementations: [main]
15:48:51,429 INFO  [org.jboss.as.security] (MSC service thread 1-3) WFLYSEC0001: Current PicketBox version=4.9.2.Final
15:48:51,499 INFO  [org.jboss.as.connector] (MSC service thread 1-7) WFLYJCA0009: Starting JCA Subsystem (IronJacamar 1.2.4.Final)
15:48:51,541 INFO  [org.jboss.as.naming] (MSC service thread 1-4) WFLYNAM0003: Starting Naming Service
15:48:51,554 INFO  [org.jboss.as.mail.extension] (MSC service thread 1-7) WFLYMAIL0001: Bound mail session 
15:48:51,563 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 56) WFLYUT0003: Undertow 1.2.9.Final starting
15:48:51,564 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-8) WFLYUT0003: Undertow 1.2.9.Final starting
15:48:51,583 INFO  [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 34) WFLYJCA0004: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
15:48:51,593 INFO  [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-4) WFLYJCA0018: Started Driver service with driver-name = h2
15:48:51,931 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 56) WFLYUT0014: Creating file handler for path $DOCUMENTUM_SHARED/wildfly9.0.1/welcome-content
15:48:51,984 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-8) WFLYUT0012: Started server default-server.
15:48:52,049 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0018: Host default-host starting
15:48:52,112 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-8) MSC000001: Failed to start service jboss.undertow.listener.default: org.jboss.msc.service.StartException in service jboss.undertow.listener.default: Could not start http listener
        at org.wildfly.extension.undertow.ListenerService.start(ListenerService.java:150)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1948)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1881)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Protocol family unavailable
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
        at org.xnio.nio.NioXnioWorker.createTcpConnectionServer(NioXnioWorker.java:182)
        at org.xnio.XnioWorker.createStreamConnectionServer(XnioWorker.java:243)
        at org.wildfly.extension.undertow.HttpListenerService.startListening(HttpListenerService.java:115)
        at org.wildfly.extension.undertow.ListenerService.start(ListenerService.java:147)
        ... 5 more

...

15:49:37,585 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
    ("core-service" => "management"),
    ("management-interface" => "native-interface")
]) - failure description: {"WFLYCTL0080: Failed services" => {"jboss.remoting.server.management" => "org.jboss.msc.service.StartException in service jboss.remoting.server.management: WFLYRMT0005: Failed to start service
    Caused by: java.net.SocketException: Protocol family unavailable"}}
15:49:37,589 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([
    ("core-service" => "management"),
    ("management-interface" => "http-interface")
]) - failure description: {
    "WFLYCTL0080: Failed services" => {"jboss.serverManagement.controller.management.http" => "org.jboss.msc.service.StartException in service jboss.serverManagement.controller.management.http: WFLYSRV0083: Failed to start the http-interface service
    Caused by: java.lang.RuntimeException: java.net.SocketException: Protocol family unavailable
    Caused by: java.net.SocketException: Protocol family unavailable"},
    "WFLYCTL0288: One or more services were unable to start due to one or more indirect dependencies not being available." => {
        "Services that were unable to start:" => ["jboss.serverManagement.controller.management.http.shutdown"],
        "Services that may be the cause:" => ["jboss.remoting.remotingConnectorInfoService.http-remoting-connector"]
    }
}
...
15:49:37,653 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 35) WFLYSRV0010: Deployed "acs.ear" (runtime-name : "acs.ear")
15:49:37,654 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 35) WFLYSRV0010: Deployed "ServerApps.ear" (runtime-name : "ServerApps.ear")
...
15:49:37,901 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0063: Http management interface is not enabled
15:49:37,902 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0054: Admin console is not enabled
15:49:37,903 ERROR [org.jboss.as] (Controller Boot Thread) WFLYSRV0026: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) started (with errors) in 57426ms - Started 225 of 424 services (16 services failed or missing dependencies, 225 services are lazy, passive or on-demand)
...
15:49:38,609 INFO  [org.jboss.as.server] (DeploymentScanner-threads - 2) WFLYSRV0009: Undeployed "acs.ear" (runtime-name: "acs.ear")
15:49:38,610 INFO  [org.jboss.as.server] (DeploymentScanner-threads - 2) WFLYSRV0009: Undeployed "ServerApps.ear" (runtime-name: "ServerApps.ear")
...
15:53:02,276 INFO  [org.jboss.as] (MSC service thread 1-4) WFLYSRV0050: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) stopped in 124ms

 

After all these errors, the logs show that the Http Management interface isn’t enabled and then WildFly just shuts itself down. So what is this “Protocol family unavailable”? This is actually linked to one of the changes included in WildFly compared to the old version (JBoss 7.1.1): WildFly supports and uses both IPv4 and IPv6 by default. Unfortunately, if your OS isn’t configured to use IPv6, this error will appear and prevent WildFly from starting.
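A quick way to check whether IPv6 is actually available on the OS is to look for an inet6 address on the network interfaces, or to look at the kernel switch when it is present (a minimal sketch; the second command returns 1 when IPv6 is disabled and the file doesn’t exist at all when the IPv6 module isn’t loaded):

ip a | grep inet6
cat /proc/sys/net/ipv6/conf/all/disable_ipv6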

 

Fortunately, it is very easy to correct this: you just have to force WildFly to use IPv4 only. For that purpose, we will add a single JVM parameter to the start script of the JMS:

sed -i 's,^JAVA_OPTS="[^"]*,& -Djava.net.preferIPv4Stack=true,' $DOCUMENTUM_SHARED/wildfly9.0.1/server/startMethodServer.sh

 

This command will simply add “ -Djava.net.preferIPv4Stack=true” just before the ending double quote of the line starting with JAVA_OPTS= in the file $DOCUMENTUM_SHARED/wildfly9.0.1/server/startMethodServer.sh. As a side note, to do the same thing on a Full Text Server, just change the path and the name of the startup script (e.g.: $XPLORE_HOME/wildfly9.0.1/server/startPrimaryDsearch.sh).
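You can quickly verify that the flag landed at the expected place before restarting anything:

grep "preferIPv4Stack" $DOCUMENTUM_SHARED/wildfly9.0.1/server/startMethodServer.sh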

 

After that, just restart the JMS and it will be able to start without issue:

JAVA_OPTS already set in environment; overriding default settings with values: -XX:+UseParallelOldGC -XX:ReservedCodeCacheSize=128m -XX:MaxMetaspaceSize=256m -XX:NewRatio=4 -Xms4096m -Xmx4096m -XX:NewSize=256m -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=8 -Xss256k -Djava.awt.headless=true -Djava.io.tmpdir=$DOCUMENTUM/temp/JMS_Temp -Djboss.server.base.dir=$DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer -Djava.net.preferIPv4Stack=true
=========================================================================

  JBoss Bootstrap Environment

  JBOSS_HOME: $DOCUMENTUM_SHARED/wildfly9.0.1

  JAVA: $DOCUMENTUM_SHARED/java64/JAVA_LINK/bin/java

  JAVA_OPTS:  -server -XX:+UseCompressedOops  -server -XX:+UseCompressedOops -XX:+UseParallelOldGC -XX:ReservedCodeCacheSize=128m -XX:MaxMetaspaceSize=256m -XX:NewRatio=4 -Xms4096m -Xmx4096m -XX:NewSize=256m -XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=8 -Xss256k -Djava.awt.headless=true -Djava.io.tmpdir=$DOCUMENTUM/temp/JMS_Temp -Djboss.server.base.dir=$DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer -Djava.net.preferIPv4Stack=true

=========================================================================

16:10:22,285 INFO  [org.jboss.modules] (main) JBoss Modules version 1.4.3.Final
16:10:22,601 INFO  [org.jboss.msc] (main) JBoss MSC version 1.2.6.Final
16:10:22,711 INFO  [org.jboss.as] (MSC service thread 1-7) WFLYSRV0049: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) starting
16:10:24,627 INFO  [org.jboss.as.controller.management-deprecated] (ServerService Thread Pool -- 28) WFLYCTL0028: Attribute 'job-repository-type' in the resource at address '/subsystem=batch' is deprecated, and may be removed in future version. See the attribute description in the output of the read-resource-description operation to learn more about the deprecation.
16:10:24,645 INFO  [org.jboss.as.controller.management-deprecated] (ServerService Thread Pool -- 27) WFLYCTL0028: Attribute 'enabled' in the resource at address '/subsystem=datasources/data-source=ExampleDS' is deprecated, and may be removed in future version. See the attribute description in the output of the read-resource-description operation to learn more about the deprecation.
16:10:24,664 INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) WFLYDS0004: Found acs.ear in deployment directory. To trigger deployment create a file called acs.ear.dodeploy
16:10:24,665 INFO  [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) WFLYDS0004: Found ServerApps.ear in deployment directory. To trigger deployment create a file called ServerApps.ear.dodeploy
16:10:24,701 WARN  [org.jboss.messaging] (ServerService Thread Pool -- 30) WFLYMSG0071: There is no resource matching the expiry-address jms.queue.ExpiryQueue for the address-settings #, expired messages from destinations matching this address-setting will be lost!
16:10:24,702 WARN  [org.jboss.messaging] (ServerService Thread Pool -- 30) WFLYMSG0072: There is no resource matching the dead-letter-address jms.queue.DLQ for the address-settings #, undelivered messages from destinations matching this address-setting will be lost!
16:10:24,728 INFO  [org.xnio] (MSC service thread 1-2) XNIO version 3.3.1.Final
16:10:24,737 INFO  [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0039: Creating http management service using socket-binding (management-http)
16:10:24,745 INFO  [org.xnio.nio] (MSC service thread 1-2) XNIO NIO Implementation Version 3.3.1.Final
16:10:24,808 INFO  [org.jboss.remoting] (MSC service thread 1-2) JBoss Remoting version 4.0.9.Final
16:10:24,831 INFO  [org.wildfly.extension.io] (ServerService Thread Pool -- 38) WFLYIO001: Worker 'default' has auto-configured to 8 core threads with 64 task threads based on your 4 available processors
16:10:24,852 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 39) WFLYCLINF0001: Activating Infinispan subsystem.
16:10:24,867 INFO  [org.jboss.as.jsf] (ServerService Thread Pool -- 45) WFLYJSF0007: Activated the following JSF Implementations: [main]
16:10:24,890 WARN  [org.jboss.as.txn] (ServerService Thread Pool -- 55) WFLYTX0013: Node identifier property is set to the default value. Please make sure it is unique.
16:10:24,899 INFO  [org.jboss.as.security] (ServerService Thread Pool -- 54) WFLYSEC0002: Activating Security Subsystem
16:10:24,901 INFO  [org.jboss.as.naming] (ServerService Thread Pool -- 47) WFLYNAM0001: Activating Naming Subsystem
16:10:24,916 INFO  [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 34) WFLYJCA0004: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
16:10:24,939 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-6) WFLYUT0003: Undertow 1.2.9.Final starting
16:10:24,939 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 56) WFLYUT0003: Undertow 1.2.9.Final starting
16:10:24,959 INFO  [org.jboss.as.security] (MSC service thread 1-2) WFLYSEC0001: Current PicketBox version=4.9.2.Final
16:10:24,967 INFO  [org.jboss.as.webservices] (ServerService Thread Pool -- 57) WFLYWS0002: Activating WebServices Extension
16:10:24,977 INFO  [org.jboss.as.connector] (MSC service thread 1-4) WFLYJCA0009: Starting JCA Subsystem (IronJacamar 1.2.4.Final)
16:10:25,034 INFO  [org.jboss.as.naming] (MSC service thread 1-7) WFLYNAM0003: Starting Naming Service
16:10:25,035 INFO  [org.jboss.as.mail.extension] (MSC service thread 1-7) WFLYMAIL0001: Bound mail session 
16:10:25,044 INFO  [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-6) WFLYJCA0018: Started Driver service with driver-name = h2
16:10:25,394 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 56) WFLYUT0014: Creating file handler for path $DOCUMENTUM_SHARED/wildfly9.0.1/welcome-content
16:10:25,414 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0012: Started server default-server.
16:10:25,508 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-6) WFLYUT0018: Host default-host starting
16:10:25,581 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-3) WFLYUT0006: Undertow HTTP listener default listening on /0.0.0.0:9080
16:10:25,933 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-6) WFLYJCA0001: Bound data source 
16:10:26,177 INFO  [org.jboss.as.server.deployment.scanner] (MSC service thread 1-6) WFLYDS0013: Started FileSystemDeploymentService for directory $DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer/deployments
16:10:26,198 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-7) WFLYSRV0027: Starting deployment of "ServerApps.ear" (runtime-name: "ServerApps.ear")
16:10:26,201 INFO  [org.jboss.as.server.deployment] (MSC service thread 1-4) WFLYSRV0027: Starting deployment of "acs.ear" (runtime-name: "acs.ear")
16:10:26,223 INFO  [org.jboss.as.remoting] (MSC service thread 1-5) WFLYRMT0001: Listening on 0.0.0.0:9084
...
16:10:42,398 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 63) WFLYUT0021: Registered web context: /DmMethods
16:10:44,519 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 64) WFLYUT0021: Registered web context: /ACS
16:10:44,563 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 35) WFLYSRV0010: Deployed "acs.ear" (runtime-name : "acs.ear")
16:10:44,564 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 35) WFLYSRV0010: Deployed "ServerApps.ear" (runtime-name : "ServerApps.ear")
16:10:44,911 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://0.0.0.0:9085/management
16:10:44,912 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://0.0.0.0:9085
16:10:44,912 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 9.0.1.Final (WildFly Core 1.0.1.Final) started in 23083ms - Started 686 of 905 services (298 services are lazy, passive or on-demand)

 

As shown above, WildFly has been started successfully. In this example, I really took a default JMS setup with just a few custom JVM parameters (Xms, Xmx, and so on) but as you can see in the logs above, there are a few issues with the default setup, like deprecated or missing parameters. So I would strongly suggest that you take a look at that and configure your JMS properly once it is up & running!

 

Earlier in this blog, I mentioned that a few things changed between JBoss 7.1.1 (CS 7.2 / xPlore 1.5) and WildFly 9.0.1 (CS 7.3 / xPlore 1.6). In a future blog, I will talk about another change, which is how to set up a WildFly instance in SSL.

 

 


Documentum – Not able to install IndexAgent with xPlore 1.6

Sun, 2017-05-14 02:41

In the last few months, we already did some tests with the Content Server 7.3, D2 4.7 and xPlore 1.6. As often, we found some funny/interesting behaviors and we faced some issues. In this blog I will talk about one issue I faced when I tried to install a new IndexAgent with xPlore 1.6. The Content Server used was a CS 7.3 (no patch available when I did this installation). This issue was present in both Silent and non-Silent installations.

 

So what was the issue? While trying to install a first IndexAgent, it looked like the installer wasn’t able to contact the Content Server. The GUI installer was stuck on the “Connection Broker Information” screen. After filling in the information and clicking on “Next”, the Full Text Installer was stuck and nothing happened anymore.

 

Basically, at this step the installer tries to connect to the docbroker using the information provided (docbroker hostname and port). Of course, the docbroker and the docbase were running and responding properly.

 

To verify what was happening, I just checked the xPlore processes:

[xplore@full_text_server_01 ~]$ ps -ef | grep xplore
xplore    2423     1  0 08:04 ?        00:00:00 /bin/sh ./startPrimaryDsearch.sh
xplore    2425  2423  0 08:04 ?        00:00:00 /bin/sh $XPLORE_HOME/wildfly9.0.1/bin/standalone.sh
xplore    2572  2425  5 08:04 ?        00:03:01 $XPLORE_HOME/java64/1.8.0_77/bin/java -D[Standalone] -server -XX:+UseCompressedOops -server -XX:+UseCompressedOops -XX:+UseCodeCacheFlushing -XX:ReservedCodeCacheSize=196m -Xms4096m -Xmx4096m -XX:MaxMetaspaceSize=512m -Xss1024k -XX:+UseCompressedOops -XX:+DisableExplicitGC -XX:-ReduceInitialCardMarks -Djava.awt.headless=true -Djboss.server.base.dir=$XPLORE_HOME/wildfly9.0.1/server/DctmServer_PrimaryDsearch -Djava.net.preferIPv4Stack=true -Dorg.jboss.boot.log.file=$XPLORE_HOME/wildfly9.0.1/server/DctmServer_PrimaryDsearch/log/server.log -Dlogging.configuration=file:$XPLORE_HOME/wildfly9.0.1/server/DctmServer_PrimaryDsearch/configuration/logging.properties -jar $XPLORE_HOME/wildfly9.0.1/jboss-modules.jar -mp $XPLORE_HOME/wildfly9.0.1/modules org.jboss.as.standalone -Djboss.home.dir=$XPLORE_HOME/wildfly9.0.1 -Djboss.server.base.dir=$XPLORE_HOME/wildfly9.0.1/server/DctmServer_PrimaryDsearch
xplore    3977  2572  0 08:06 ?        00:00:00 $XPLORE_HOME/dsearch/cps/cps_daemon/bin/CPSDaemon $XPLORE_HOME/dsearch/cps/cps_daemon/PrimaryDsearch_local_configuration.xml Daemon0 9322 NORMAL
xplore    4069  2572  0 08:06 ?        00:00:00 $XPLORE_HOME/dsearch/cps/cps_daemon/bin/CPSDaemon $XPLORE_HOME/dsearch/cps/cps_daemon/PrimaryDsearch_local_configuration.xml QDaemon0 9323 QUERY
xplore    9591  9568  0 08:45 pts/3    00:00:00 /bin/sh ./configIndexagent.sh
xplore    9592  9591  0 08:45 pts/3    00:00:08 $XPLORE_HOME/java64/1.8.0_77/jre/bin/java -Xmx262114000 -Xms262114000 com.zerog.lax.LAX /tmp/install.dir.9592/temp.lax /tmp/env.properties.9592 "-f" "config.properties"
xplore   11099  9592  0 08:51 pts/3    00:00:02 $XPLORE_HOME/java64/1.8.0_77/jre/bin/java -cp /tmp/342186.tmp/testDocbrokerUtil.jar:$XPLORE_HOME/dfc/dfc.jar -Ddfc.properties.file=/tmp/install.dir.9592/dfc.properties com.documentum.install.shared.util.TestDocbrokerUtil content_server_01 1489
xplore   11928  9515  0 08:59 pts/1    00:00:00 ps -ef
xplore   11929  9515  0 08:59 pts/1    00:00:00 grep xplore

 

As you can see above, the PrimaryDsearch is up & running so that’s good but there are two additional interesting processes:

xplore    9592  9591  0 08:45 pts/3    00:00:08 $XPLORE_HOME/java64/1.8.0_77/jre/bin/java -Xmx262114000 -Xms262114000 com.zerog.lax.LAX /tmp/install.dir.9592/temp.lax /tmp/env.properties.9592 "-f" "config.properties"
xplore   11099  9592  0 08:51 pts/3    00:00:02 $XPLORE_HOME/java64/1.8.0_77/jre/bin/java -cp /tmp/342186.tmp/testDocbrokerUtil.jar:$XPLORE_HOME/dfc/dfc.jar -Ddfc.properties.file=/tmp/install.dir.9592/dfc.properties com.documentum.install.shared.util.TestDocbrokerUtil content_server_01 1489

 

The most interesting one is the second line: this is the command that the installer starts to check that the docbroker is responding properly, and this process is stuck. To do some testing, I copied the utilities and the needed elements to a test folder so I would be able to execute this command myself whenever I wanted. Then I killed the installer since it was stuck and tried to contact the docbroker myself:

[xplore@full_text_server_01 ~]$ time $XPLORE_HOME/java64/1.8.0_77/jre/bin/java -cp /tmp/my.tmp/testDocbrokerUtil.jar:$XPLORE_HOME/dfc/dfc.jar -Ddfc.properties.file=/tmp/my.tmp/dfc.properties com.documentum.install.shared.util.TestDocbrokerUtil content_server_01 1489
^C
real    7m43.669s
user    0m1.618s
sys     0m0.153s

 

As you can see above, after 7 minutes and 43 seconds, I just killed the process (^C) because it still hadn’t returned anything. Just to be sure, I checked our monitoring tool and it was able to connect to the docbase, remotely, without issue, so the issue here was somehow linked to the Full Text Server.

 

To go deeper, the only thing I could do was to perform some thread dumps of the java process to try to see where the bottleneck was. After doing that, it actually became quite clear that the server had an issue with entropy generation, because the threads were blocked, waiting for the RSA libraries to get a random seed from the system. If you are running Documentum (or anything else) on a Virtual Machine, you will most probably want to change the default random generation because the entropy usually isn’t sufficient on VMs. We always do that for all our customers’ environments but since this was just a sandbox server to play with the newly released xPlore 1.6, this wasn’t set up at that time. With xPlore 1.5, this wouldn’t have caused any issue but it looks like xPlore 1.6 is much more demanding on random seed generation. This most probably has something to do with some security improvements done in xPlore 1.6.
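If you suspect the same issue, the entropy currently available on the OS can be checked directly (no root access needed); on a VM without a random generator daemon like rngd, this number is often quite low:

[xplore@full_text_server_01 ~]$ cat /proc/sys/kernel/random/entropy_avail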

 

So to solve this issue, you just need to increase the entropy of your server. This can be done as follows (as root):

  1. Install the rng tools package
  2. Update the options in the /etc/sysconfig/rngd file
  3. Setup the service to start automatically at OS boot

 

You can basically use the following commands for that (I’m using RHEL so it’s yum based):

[root@full_text_server_01 ~]# yum -y install rng-tools.x86_64
Loaded plugins: product-id, search-disabled-repos, security, subscription-manager
Setting up Install Process
Resolving Dependencies
--> Running transaction check
...
Transaction Test Succeeded
Running Transaction
  Installing : rng-tools-5-2.el6_7.x86_64                                                                                                                                                                                     1/1
  Verifying  : rng-tools-5-2.el6_7.x86_64                                                                                                                                                                                     1/1

Installed:
  rng-tools.x86_64 0:5-2.el6_7

Complete!
[root@full_text_server_01 ~]# rpm -qf /etc/sysconfig/rngd
rng-tools-5-2.el6_7.x86_64
[root@full_text_server_01 ~]#
[root@full_text_server_01 ~]# sed -i 's,EXTRAOPTIONS=.*,EXTRAOPTIONS=\"-r /dev/urandom -o /dev/random -t 0.1\",' /etc/sysconfig/rngd
[root@full_text_server_01 ~]# cat /etc/sysconfig/rngd
# Add extra options here
EXTRAOPTIONS="-r /dev/urandom -o /dev/random -t 0.1"
[root@full_text_server_01 ~]#
[root@full_text_server_01 ~]# chkconfig --level 345 rngd on
[root@full_text_server_01 ~]# chkconfig --list | grep rngd
rngd            0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@full_text_server_01 ~]#
[root@full_text_server_01 ~]# service rngd start
Starting rngd:                                             [  OK  ]
[root@full_text_server_01 ~]#

 

When this is done, you need to force xPlore to use this new random generation mechanism. For that, just add one environment variable to the profile of the xplore user (as the xplore user):

[xplore@full_text_server_01 ~]$ echo 'export DEVRANDOM=/dev/urandom' >> ~/.bash_profile

 

When this has been done, you can just re-execute the java command and in about one second the command should return a result like this:

[xplore@full_text_server_01 ~]$ $XPLORE_HOME/java64/1.8.0_77/jre/bin/java -cp /tmp/my.tmp/testDocbrokerUtil.jar:$XPLORE_HOME/dfc/dfc.jar -Ddfc.properties.file=/tmp/my.tmp/dfc.properties com.documentum.install.shared.util.TestDocbrokerUtil content_server_01 1489
0 [main] ERROR com.documentum.fc.common.impl.logging.LoggingConfigurator  - Problem locating log4j configuration
1 [main] WARN com.documentum.fc.common.impl.logging.LoggingConfigurator  - Using default log4j configuration
[xplore@full_text_server_01 ~]$

 

These two messages aren’t important; they are just printed because there is no log4j properties file on the command line, so the default configuration is used. The only important thing here is that the command returns something, so it is working! To confirm that, you can just run the installer once more and this time the screen shouldn’t be stuck on the Connection Broker Information step.

 

 


12cR2 Cross-container DML – insert into container()

Sat, 2017-05-13 10:07

Multitenant has been introduced in 12.1.0.1 with the goal of sharing resources while isolating data. However, having all PDBs in the same root may be convenient for manipulating data in multiple PDBs. In the first patchset, 12.1.0.2, a way to query cross-container was introduced for the CDB administrator to see data in other containers. In the second release, 12.2.0.1, this goes further with the introduction of Application Containers and cross-PDB DML. Currently, not all possibilities are documented and not all documented features are actually working. This will probably improve in the next patchset. I’ll start here with something simple: insert from the root into a table which is in a PDB.

Here is my CDB with two PDBs

22:48:13 SQL> connect sys/oracle@//localhost/CDB1A as sysdba
Connected.
 
22:48:13 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE NO
 

I create a DEMO table in CDB$ROOT and do the same in PDB1 and PDB2

22:48:13 SQL> create table DEMO (n number primary key, text varchar2(90) );
Table DEMO created.
 
22:48:14 SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
 
22:48:14 SQL> create table DEMO (n number primary key, text varchar2(90) );
Table DEMO created.
 
22:48:14 SQL> connect sys/oracle@//localhost/PDB2 as sysdba
Connected.
 
22:48:14 SQL> create table DEMO (n number primary key, text varchar2(90) );
Table DEMO created.

I connect to CDB$ROOT and set a transaction name, then check all transactions

22:48:14 SQL> connect sys/oracle@//localhost/CDB1A as sysdba
Connected.
 
22:48:14 SQL> set transaction name 'Franck';
Transaction NAME succeeded.
 
22:48:14 SQL> select con_id,addr,xidusn,ubafil,ses_addr,ptx_xid,name,used_urec from containers(v$transaction);
no rows selected
 

I’m alone here with no transactions.

CONTAINERS()

Here is the cross-container syntax: using CONTAINERS() and specifying the CON_ID column and value (CON_ID=3 for PDB1)

22:48:14 SQL> insert into containers(DEMO) (con_id,n,text) values (3,1,'Cross-container insert');
1 row inserted.
 
22:48:14 SQL> select con_id,addr,xidusn,ubafil,ses_addr,ptx_xid,name,used_urec from containers(v$transaction);
 
CON_ID ADDR XIDUSN UBAFIL SES_ADDR PTX_XID NAME USED_UREC
------ ---- ------ ------ -------- ------- ---- ---------
1 0000000067BB19E8 7 0 000000006ADD2EA8 0000000000000000 Franck 1
3 000000006642AEB8 1 62 000000006AC99610 0000000000000000 2

The interesting thing is that I have two transactions: one on my current container, and one on the container CON_ID=3 specified in my insert.

I’m doing the same for PDB2 which is CON_ID=4

22:48:14 SQL> insert into containers(DEMO) (con_id,n,text) values (4,1,'Cross-container insert');
1 row inserted.
 
22:48:15 SQL> select addr,con_id,addr,xidusn,ubafil,ses_addr,ptx_xid,name,used_urec from containers(v$transaction);
 
ADDR CON_ID ADDR XIDUSN UBAFIL SES_ADDR PTX_XID NAME USED_UREC
---- ------ ---- ------ ------ -------- ------- ---- ---------
0000000067BB19E8 1 0000000067BB19E8 7 0 000000006ADD2EA8 0000000000000000 Franck 1
000000006642AEB8 3 000000006642AEB8 1 62 000000006AC99610 0000000000000000 2
000000006644EA90 4 000000006644EA90 6 66 000000006B20F828 0000000000000000 2

Looking at the transaction sessions, the ones on the PDBs look like database link connections:

22:48:15 SQL> select taddr,con_id,program,action,module from v$session where saddr in (select ses_addr from v$transaction);
 
TADDR CON_ID PROGRAM ACTION MODULE
----- ------ ------- ------ ------
000000006644EA90 4 oracle@VM104 (TNS V1-V3) oracle@VM104 (TNS V1-V3)
000000006642AEB8 3 oracle@VM104 (TNS V1-V3) oracle@VM104 (TNS V1-V3)
0000000067BB19E8 1 java@VM104 (TNS V1-V3) java@VM104 (TNS V1-V3)

It looks like database links, and we can actually see those open links in V$DBLINK:

23:06:53 SQL> select * from v$dblink;
 
DB_LINK OWNER_ID LOGGED_ON HETEROGENEOUS PROTOCOL OPEN_CURSORS IN_TRANSACTION UPDATE_SENT COMMIT_POINT_STRENGTH CON_ID
------- -------- --------- ------------- -------- ------------ -------------- ----------- --------------------- ------
PDB1 0 YES YES UNKN 0 YES YES 1 1
PDB2 0 YES YES UNKN 0 YES YES 1 1

Commit

However, when using CONTAINERS() the session is not using the database links but something like parallel query switching to the containers. This means that it is not the same transaction and we don’t see the modifications:

22:48:15 SQL> select * from containers(DEMO);
no rows selected

Now, I commit:

22:48:15 SQL> commit;
Commit complete.

and all transactions are ended:

22:48:15 SQL> select taddr,con_id,program,action,module from v$session where saddr in (select ses_addr from v$transaction);
no rows selected

The links are still open but not in a transaction anymore:

23:10:21 SQL> select * from v$dblink;
 
DB_LINK OWNER_ID LOGGED_ON HETEROGENEOUS PROTOCOL OPEN_CURSORS IN_TRANSACTION UPDATE_SENT COMMIT_POINT_STRENGTH CON_ID
------- -------- --------- ------------- -------- ------------ -------------- ----------- --------------------- ------
PDB1 0 YES YES UNKN 0 NO NO 1 1
PDB2 0 YES YES UNKN 0 NO NO 1 1

My inserts are now visible, either from the root with CONTAINERS()

22:48:15 SQL> select * from containers(DEMO);
 
N TEXT CON_ID
- ---- ------
1 Cross-container insert 4
1 Cross-container insert 3

or from each PDB:

22:48:15 SQL> alter session set container=PDB1;
Session altered.
 
22:48:15 SQL> select * from DEMO;
 
N TEXT
- ----
1 Cross-container insert

So what?

This is a convenient way for the CDB administrator, or for the Application Root administrator, to do some DML on different containers without having to create a database link. Of course, the common user can also switch to a PDB using ‘alter session set container’, but that does not allow a transaction that spans multiple containers. You can think of it as a shortcut to avoid creating database links from the root to its containers.

 


Personal road trip – PowerShell Conference EU 2017

Fri, 2017-05-12 04:03

PSConfEU

Welcome to PowerShell Conference EU 2017

It started on April 5th at 9:00 AM: I was immersed in a wonderful and amazing Open Ceremony with light effects and epic music. Indeed, using PowerShell is always an epic moment…

Then the lights turned on and all the speakers stood up and came out from behind the scenes. As you can see, they came from all around the world to bring us the best of PowerShell!

PSConfEU_ceremony

 

This Open Ceremony was followed by the Keynote “State of the Union” presented by Jeffrey SNOVER, also known as the “father” of Windows PowerShell. He really congratulated the PowerShell Team for their great work, which now makes it possible to use our favorite shell on Linux environments. Moreover, he described this Transformational Change as a logical evolution of the tool.

PSConfEU_Keynote

Before lunch, Will SCHROEDER presented how, over the last years, Red Teams and Blue Teams have played cat and mouse regarding security. As a conclusion of this session, we can say that notable security improvements have been made over the PowerShell versions.

PSConfEU_security

 

If you had a question, this was the place to go!
PSConfEU_Welcome

In the afternoon, you had the choice between four different streams. I decided to learn more about PowerShell on Linux and to attend “Hell freezing over: PowerShell on Linux and GitHub” presented by Bartosz Bielawski.

PSConfEU_PowerShell_Linux

If you wanted to get a good overview of what Windows Server Containers are and how convenient they are, the session “Getting started with Windows Server container” presented by Flynn Bundy was great.

PSConfEU_Windows_Container

Then, the session “Look! Up in the sky! It’s a bird. It’s a plane. It’s Nanoman!” presented by Aleksandar Nikolic and Jan Egil Ring gave key pointers on how to connect to and administer a Nano Server with PowerShell.

PSConfEU_Nano_Server

 

And finally, I discovered an interesting PowerShell module called Pester which automates code verification. It can, for instance, be used to check MSSQL instance policies with PowerShell. This session, called “Green is bad Red is Good – Turning your Checklists into Pester Tests”, was presented by Rob Sewell.

PSConfEU_Pester

On the second day, Bartosz Bielawski presented another session about PowerShell on Linux called “Knock, knock, knock – Linux at your door”.

PSConfEU_Linux

An interesting session presented by Jason Yoder, called “FASTER!!!! Make Your Code Run Faster!”, gave some key clues and approaches to optimize your PowerShell code.

PSConfEU_Faster

In the afternoon, Rob Sewell rocked again with “Using the SQL Server Provider”
PSConfEU_Provider

Then I followed Mathias R Jessen’s session named “Regex 2.0: Full Coverage”.

PSConfEU_Regex

On the last day, I wanted an introduction to Visual Studio Code, so I went to David Wilson’s session “Advanced PowerShell Module Development with Visual Studio Code”.

However my favorite session was “Modern Database Administration using PowerShell and dbatools” presented by Chrissy LeMaire and Rob Sewell.

PSConfEU_dba_tools
You can find dbatools on the official website. I highly recommend taking a look; it’s worth it!

I will finish my Road Trip with Luc Dekens’ session called “Visualising performance in VMware vSphere”.

PSConfEU_vSphere

Many interesting and pertinent sessions happened at this PowerShell Conference Europe 2017. If you want to learn more about this event, take a look at the official website. The slides and demos are on GitHub.

 


APEX Connect 2017 – Day 3

Fri, 2017-05-12 02:04

For the last conference day, after the Keynote about “JavaScript: The next Language YOU should learn” by Dan McGhan, I decided to attend presentations on the following topics:
– Upgrade your APEX app with zero downtime by using EBR
– Understand and make use of CSS(3)
– Best Practices for APEX Administrators
– APEX Version Control and Team Working
– Database Cloud Services with APEX
– Date, Time, Calendar and Co with APEX
I also got the chance to have a 1:1 talk with Marc Sewtz to share some wishes about APEX and discuss a customer issue.

JavaScript programming language
JavaScript allows asynchronous programming: the main thread can make calls to asynchronous APIs which return their results via an event/callback queue.
When the web browser parses the page to be rendered, the DOM tree is built. That build process is stopped whenever JavaScript is found, until the JavaScript execution is completed. That means the JavaScript can only address the elements that were already put into the DOM tree, not the full page. That explains why people would want to put the JavaScript at the bottom of the page definition.
Fortunately, there is the jQuery DOM manipulation library which allows developers to abstract from that constraint.
What you also need to keep in mind about JavaScript is:
– functions are first class
– functions provide scope and closure
In APEX you can make use of JavaScript for AJAX calls by using apex.server.process.
JavaScript is definitely a language that APEX developers should master besides PL/SQL to embrace customer requirements.

Upgrade your APEX app with zero downtime by using EBR:
EBR (Edition Based Redefinition) is a tool in the Oracle database to support online application upgrades.
You can find details about EBR in the following Oracle documentation: Technet Edition Based Redefinition
EBR allows you to have your DB looking like 2 DBs, each identified in the normal way by a service (just like 2 PDBs in the same CDB).
Hot rollover is enabled by using a traffic director (load balancer).
This introduces affinity between the end user session and the application version.
In APEX you need to create a copy of your application with the changes for the new version. The application version switch is managed with APEX_UTIL.SET_EDITION setting the value of the current application in the APPLICATION_PREFERENCES table.

Understand and make use of CSS(3):
CSS defines the layout design. The basic structure is: selector { attribute : value }
There are numerous selectors which can be combined. Also, pseudo-classes and pseudo-elements can be used as references.
You need to be careful about the cascading order; some strong rules are set to define it. The “!important” flag, which overrules everything else, should be used as little as possible.
The definition of objects to be rendered is based on the Box Model (Margin – Border – Padding – Content).
There are 2 levels for the definitions:
– Block
– Inline (mainly for the text)
Positions also have multiple definition references:
– static
– relative
– fixed
– absolute
Media queries allow the definitions needed for responsive design.
You can verify whether the elements you want to use are supported by your web browser by checking the following web site:
http://www.caniuse.com/
I can also recommend visiting the W3Schools site: https://www.w3schools.com/css/

Best Practices for APEX Administrators:
The following best practice rules were presented, based on the presenter’s experience:
– Create a dedicated folder to store the APEX installation package (e.g. under the Oracle “product” folder)
– Create a dedicated tablespace for APEX
– Put every workspace on its own schema with a dedicated tablespace
– Build your own workspace delivery system
– Restrict access to workspace administration
– Use a version dedicated folder instead of /i/
– Rename ords.war based on application using it
– Setup automated export and backup of the workspaces and applications
– When patching also keep a copy of the full install package
– Manage downtimes during update operations (only required while the SQL is running)
– Set ACLs to give only required access

APEX Version Control and Team Working:
There are different ways to manage team work and version control for APEX development, but none are identical to what is done for other programming languages due to the way APEX works.
What is common in the exposed approaches is that the development cycle is about 3 weeks and there is always a development, an integration, a test and a production environment. Code related to the database (DML, packages, …) is stored and managed with a version control system like GitHub, and the APEX applications are exported and pushed to the version control system on a daily basis.
Some people use a CI (Continuous Integration) engine to generate a VM with the full test environment from the committed development work.
To manage the deployment of selected features, you need to use conditional builds.
There are different way to export/import APEX applications:
– ApexExport java class
– “apex export” and “apximp” in SQLcl
– “Application Archive” packaged application
– manual export/import
Oracle provides a white paper describing best practices to manage the APEX development process:
http://www.oracle.com/technetwork/developer-tools/apex/learnmore/apex-life-cycle-management-wp-3030229.pdf

Database Cloud Services with APEX:
The requirements were about the setup of a private cloud to host about 200TB of data over 300000 schemas.
In order to properly manage the infrastructure, the following tools have been created in APEX:
– DB service portal (request management, password reset, reporting, cost tracking, approvals,…)
– DB metadata manager (reporting, interfacing, measurement of space, CPU, and so on)
– DB service automation (order management, log management, messaging, maintenance management)
This allowed them to raise customer satisfaction and to enhance DBA efficiency and metadata maintenance.

Date, Time, Calendar and Co with APEX:
Dates can be represented in different ways in the Oracle DB:
– DATE
– TIMESTAMP
– TIMESTAMP WITH TIME ZONE
– TIMESTAMP WITH LOCAL TIME ZONE
If you dump those data types you will see the details of their implementation and how the Oracle DB stores them.
If you subtract one DATE from another you get a decimal number (of days), and if you add a number to a DATE you get a DATE.
There are functions to manipulate dates, like ADD_MONTHS (for DATE) and INTERVAL (for TIMESTAMP).
The EXTRACT function allows you to get specific elements of a timestamp.
APEX items are always of VARCHAR2 type, so any date manipulation will implicitly use TO_CHAR and TO_DATE conversions.
The date format is to be set in the APEX application globalization parameters. Most of those parameters map to the DB NLS values by default.
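Here is a small SQL sketch (my own illustration, not from the session) of the behaviour described above:

-- DATE minus DATE returns a number of days, DATE plus a number returns a DATE
SELECT DATE '2017-05-10' - DATE '2017-05-01' AS days_between,
       DATE '2017-05-01' + 7                 AS one_week_later
FROM dual;

-- ADD_MONTHS works on DATE, EXTRACT pulls single elements out of a timestamp
SELECT ADD_MONTHS(SYSDATE, 3)          AS in_three_months,
       EXTRACT(HOUR FROM SYSTIMESTAMP) AS current_hour
FROM dual;

-- The implicit conversions behind a VARCHAR2 APEX item, written explicitly
SELECT TO_CHAR(SYSDATE, 'DD.MM.YYYY HH24:MI')            AS item_value,
       TO_DATE('10.05.2017 14:30', 'DD.MM.YYYY HH24:MI') AS back_to_date
FROM dual;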
In APEX 5.1 there were some additions to the Calendar like:
– ability to define Dynamic Actions
– JavaScript initialization on parameters
– …
I would recommend playing with the Calendar Sample application to see all the new capabilities.

Enjoy APEX!

 

Cet article APEX Connect 2017 – Day 3 est apparu en premier sur Blog dbi services.

SQL Server 2017: Python server

Thu, 2017-05-11 08:17

After my article SQL Server 2017: first steps with Python Service, here is my first test of the Python Server feature in SQL Server 2017.

The installation is very simple: in the Feature Selection window, under Shared Features, you have a Machine Learning Server (Standalone) with R and Python.
I check Python to install the Python Server and here we go.
Python_srv01
Like for the Python services, you need to accept “Consent to install Python”
Python_srv02
Don't forget to give your server access to the Internet. If not, the installation fails and you get this message:
Python_srv03
With Internet access, the installation is successful.
Python_srv06
At the end of the installation, I open the summary file to analyze what is installed.
I find the feature parameters used for a scripted installation of the Python Server:

  • SQL_SHARED_AA for the Machine Learning Server (Standalone) – AA is for ADVANCED ANALYTICS
  • SQL_SHARED_MPY for Python

Python_srv07
At this point, I have a Python Server but, now, what is a Python Server?
On my server, I open the search and type Python as the keyword…
Python_srv08
No result found… OK, old method: go to the folder and search manually.
I go directly to the folder “C:\Program Files\Microsoft SQL Server\140” and I see a folder “PYTHON_SERVER”:
Python_srv09
I open the folder “PYTHON_SERVER” and see three executables:

  • Uninstall-Anaconda: to uninstall the python server
  • Python: Command Console to execute Python code
  • Pythonw: Command Console to execute Python code without Window

Python_srv10
To test the Python server, I run the python executable (to get a cmd window) and use the same example as in my article on the Python service.
The Python query is:

import pip
installed_packages = pip.get_installed_distributions()
installed_packages_list = sorted(["%s %s" % (i.key, i.version) for i in installed_packages])
print(installed_packages_list)
len(installed_packages_list)

Python_srv11
As you can see in the picture, the same 125 packages as for the Python Service are installed on the Python Server.
:idea: Let yourself be tempted

 

Cet article SQL Server 2017: Python server est apparu en premier sur Blog dbi services.

APEX Connect 2017 – Day 2

Wed, 2017-05-10 15:48

After the keynote about “Oracle Application Express – Ahead of it’s time. Leading the way forward” by Joel Kallmann (Director of Software Development for APEX) and some overall presentations about development with APEX 5.1, I decided to attend presentations on the following topics, oriented towards tools that make a developer's life easier:
– APEX OOS Tools and helper
– Useful APEX plugins
– LESS is more
– JavaScript Debugging and Tuning
– Introduction to Oracle JET

Oracle Application Express – Ahead of it’s time. Leading the way forward:
In 2008 APEX was already matching the definition of cloud given by the NIST (National Institute of Standards and Technology) and even matching the definition of PaaS in 2002: APEX was ahead of its time!
APEX allows you to increase productivity and consistency while reducing complexity, thanks to its declarative framework. It is therefore to be seen as a low-code development environment: https://apex.oracle.com/lowcode/
What is impressive about APEX is that a packaged app written in 2008 is still able to run on apex.oracle.com today.
It's the most used development platform within Oracle for running their internal business.
There is now an online APEX curriculum available for free on Oracle academy: APEX on Oracle academy
Oracle database free services online will be launched, replacing apex.oracle.com.

Upcoming features in APEX 5.2:
The next version of APEX may contain features like:
– Blueprint wizard: allows you to integrate features from existing packaged apps in your own application declaratively
– Improved packaged apps
– Update of JET and JQuery versions
– Enhancements in Page designer:
-* Dialogs/pop-overs
-* Client side dialogs
-* Adaptive UI based on preference options declaratively enabled/disabled
-* Removal of legacy component view
– Enhancements on JET Charts:
-* New Gantt chart
-* New pyramid report
-* New Box Plot
-* New interactive report and websheet charts
-* Removal of 32k limit of data visualization
– Improved interactive grids
-* Flexible row height
-* Group by view
-* Pivot view
-* Region button position
-* Printing
-* Subscription
-* Computation
-* Required filtering
-* Complex filtering and highlighting
-* No stretch columns
-* Copy down facility
-* API documentation
-* Migration for interactive reports
– Improved REST services
-* Declarative support of REST services
-* REST services as data source for APEX components
-* Simple access to Cloud services
-* ORDS remote SQL data access
-* SQL execution on remote DB
All very nice and promising features :-)

APEX OOS Tools and helper:
Oracle Open Source Tools and other useful Open Source projects can be found in GitHub within different repositories.
Most famous OOS tools for APEX developers are:
– oracle-db-tools
– oraclejet
– db-sample-schemas
– node-dboracle
– docker-images
Link to oracle GitHub: https://github.com/oracle
Besides that, the Open Source community provides various other tools for APEX developers:
– oxar (automated installation of a full development environment) https://github.com/OraOpenSource/OXAR
– docker images https://github.com/Dani3lSun/docker-db-apex-dev
– Logger https://github.com/OraOpenSource/Logger
– APEX Diff (comparison of application exports based on JSON using node.js and sqlcl) https://github.com/OraOpenSource/apex-diff
– APEX Client extension https://github.com/OraOpenSource/apex-frontend-boost
– PL/SQL libraries like Alexandria https://github.com/mortenbra/alexandria-plsql-utils
– APEX backup scripts https://github.com/OraOpenSource/apexbackup
There is a lot out there to make your life easier. Enjoy them!
Thanks a lot to all contributors!

Useful APEX plugins:
APEX is a very nice development framework but it sometimes needs more than what is provided by Oracle.
Fortunately, APEX allows you to write extensions to fill the gap: plug-ins.
There are many to be found on apex.world
Here is a non-exhaustive list of useful plug-ins that were presented at APEX Connect:
– select2 https://select2.github.io/
– Date range (based on JS moment library) http://apex.pretius.com/apex/f?p=105:DATERANGE:::NO:::
– Help text tooltip http://apex.pretius.com/apex/f?p=105:HELPTEXT:::NO:::
– Dropzone (for multiple file upload) https://github.com/Dani3lSun/apex-plugin-dropzone
– Excel to collections http://www.apex-plugin.com/oracle-apex-plugins/process-type-plugin/excel2collections_271.html
– Enhanced notification http://apex.pretius.com/apex/f?p=105:NOTIFICATIONS:::NO:::
– Nested reports http://apex.pretius.com/apex/f?p=105:NESTED_REPORTS:::NO:::
Thanks to all the developers who provide those plug-ins and make APEX even more enjoyable!

LESS is more:
What is LESS?
LESS is a CSS pre-processor which allows you to use variables, mixins and nested rules to facilitate the management of CSS rules.
It's already in use in APEX, so you can leverage that tool to adjust your application to your corporate identity guidelines.
The Theme Roller of APEX 5.x makes use of it, but the core.css of APEX is not modified that way.
I would suggest visiting the following website if you are interested in how LESS works:
http://lesscss.org/

JavaScript Debugging and Tuning:
APEX makes use of JQuery.
In some cases you might run into performance issues while loading or using your application pages.
Here are some tips and tricks to optimize JavaScript in your application based on experience.
– Use the right selector to reduce searches in the DOM (from the most to the least selective: id, element, class, attribute, pseudo)
– Use loops the right way (arrays can help)
– Use variables to reduce DOM access
– Use detach function to have an “offline” DOM and reduce the cost of parsing and accessing the DOM
– In some cases native JavaScript is faster than JQuery
There are tools to help you measure the performance of your page:
– jsperf.com
– ESBench.com
– Chrome Canary developer tools
When it comes to page load performance, the size of your JavaScript library file is key. It can be reduced using tools like uglifyJS.
For Debugging and Logging purpose you can make use of the client console.log or even better the APEX.debug wrapper on it. Unfortunately those logs are only visible on the client console during runtime. One option to centralize them would be to write the result of the logs into a DB table using AJAX. Also stacktrace.js is of help as it captures the user context which can then be put with the logs to better understand the issue.
Ultimately REST service could also be an alternative to send back the logs to the DB.
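As an illustration only, a centralized table for such client-side logs could look like the following minimal sketch (the table and column names are mine, not from the presentation):

-- Hypothetical table to centralize client-side JavaScript logs sent via AJAX or REST
CREATE TABLE client_js_log (
  log_id     NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  app_user   VARCHAR2(128),
  log_level  VARCHAR2(10),
  message    VARCHAR2(4000),
  stacktrace CLOB,
  logged_at  TIMESTAMP DEFAULT SYSTIMESTAMP
);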

Introduction to Oracle JET:
Oracle JET: JavaScript Extension Toolkit
It supports multilingual applications and follows W3C standards.
A JET module is always made of at least 2 parts:
– JavaScript file (view Models)
– HTML file (views)
When using JET modules you always have to take care of the required libraries/modules (dependencies).
APEX only makes use of the JET Charts for now.
I can only recommend visiting the Oracle web page on that subject:
http://www.oracle.com/webfolder/technetwork/jet/index.html/

 

Cet article APEX Connect 2017 – Day 2 est apparu en premier sur Blog dbi services.

Upgrade to Oracle 12.2 with dbupgrade utility

Wed, 2017-05-10 10:24

Oracle 12.2 is released and we will have to upgrade. In this blog we are going to talk about upgrading to Oracle 12.2 using the dbupgrade utility.

Of course the dbua tool is still available and can also be used.

Oracle Database 12.1 introduced the Parallel Upgrade Utility, catctl.pl. This utility reduces the total amount of time it takes to perform an upgrade by loading the database dictionary in parallel, and by using multiple SQL processes to upgrade the database.

In Oracle 12.1, catctl.pl had to be run using perl, like:
$ORACLE_HOME/perl/bin/perl catctl.pl -n 8 -l /home/oracle catupgrd.log
Starting with Oracle 12.2, we can use the dbupgrade utility, which starts catctl.pl for us instead of having to run it from perl.

In this article we will describe the steps for upgrade using dbupgrade.

The first step is to run the Pre-Upgrade Information Tool, preupgrade.jar, which replaced the preupgrd.sql and utluppkg.sql scripts. This tool can be run from the operating system command line. Remember that in 12.1, the Pre-Upgrade Information Tool was run within SQL*Plus as a SQL file.

The following environment variables must be set; note that ORACLE_HOME must point to the earlier release.

[oracle@serveroracle ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
[oracle@serveroracle ~]$ export ORACLE_BASE=/u01/app/oracle
[oracle@serveroracle ~]$ export ORACLE_SID=ORCL
[oracle@serveroracle ~]$ export PATH=.:$ORACLE_HOME/bin:$PATH

The preupgrade.jar file is located in the Oracle 12.2 $ORACLE_HOME/rdbms/admin directory. We can execute it using the TERMINAL or FILE option. The TERMINAL option sends the output to the terminal, and with FILE we get the output in a file.

oracle@serveroracle:~[ORCL] $ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/product/12.2.0.1/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT
Report generated by Oracle Database Pre-Upgrade Information Tool Version
12.2.0.1.0
Upgrade-To version: 12.2.0.1.0
=======================================
Status of the database prior to upgrade
=======================================
Database Name: ORCL
Container Name: ORCL
Container ID: 0
Version: 12.1.0.2.0
Compatible: 12.1.0.2.0
Blocksize: 8192
Platform: Linux x86 64-bit
Timezone File: 18
Database log mode: ARCHIVELOG
Readonly: FALSE
Edition: EE
...
...
...
Preupgrade generated files:
/u01/app/oracle/cfgtoollogs/ORCL/preupgrade/preupgrade_fixups.sql
/u01/app/oracle/cfgtoollogs/ORCL/preupgrade/postupgrade_fixups.sql
oracle@serveroracle:~[ORCL]

As mentioned, if we want the output in a file, we can use the FILE option instead of TERMINAL. The output directory is $ORACLE_BASE/cfgtoollogs/$ORACLE_SID.

[oracle@serveroracle ~]$ $ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/product/12.2.0.1/dbhome_1/rdbms/admin/preupgrade.jar FILE TEXT
Preupgrade generated files:
/u01/app/oracle/cfgtoollogs/ORCL/preupgrade/preupgrade.log
/u01/app/oracle/cfgtoollogs/ORCL/preupgrade/preupgrade_fixups.sql
/u01/app/oracle/cfgtoollogs/ORCL/preupgrade/postupgrade_fixups.sql
[oracle@serveroracle ~]$

Once the execution is done, the tool has identified the possible issues for the upgrade and has also generated 2 scripts:
  • preupgrade_fixups.sql: fixes issues that an automated script can fix safely. This script must be run before the upgrade
  • postupgrade_fixups.sql: fixes the upgrade issues that can be automatically fixed after the upgrade.
All other issues have to be fixed manually.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> @preupgrade_fixups.sql

Once done we are ready to run the dbupgrade utility.

Prepare the environment

oracle@serveroracle:/home/oracle/ [rdbm12201] which dbupgrade
/u01/app/oracle/product/12.2.0.1/dbhome_1/bin/dbupgrade
oracle@serveroracle:/home/oracle/ [rdbm12201] echo $ORACLE_HOME
/u01/app/oracle/product/12.2.0.1/dbhome_1
oracle@serveroracle:/home/oracle/ [rdbm12201] export ORACLE_SID=ORCL

Copy the parameter file to the Oracle 12.2 $ORACLE_HOME/dbs

oracle@serveroracle:/home/oracle/ [ORCL] cp /u01/app/oracle/product/12.1.0/dbhome_1/dbs/spfileORCL.ora /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/
oracle@serveroracle:/home/oracle/ [ORCL]

Start the database in the new environment in upgrade mode

oracle@serveroracle:/home/oracle/ [ORCL] which sqlplus
/u01/app/oracle/product/12.2.0.1/dbhome_1/bin/sqlplus
oracle@serveroracle:/home/oracle/ [ORCL] sqlplus
SQL*Plus: Release 12.2.0.1.0 Production on Thu May 4 16:24:22 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup upgrade
ORACLE instance started.
Total System Global Area 394264576 bytes
Fixed Size 8621136 bytes
Variable Size 293602224 bytes
Database Buffers 83886080 bytes
Redo Buffers 8155136 bytes
Database mounted.
Database opened.
SQL>

And then run the dbupgrade utility

oracle@serveroracle:/home/oracle/upgrade/ [ORCL] dbupgrade -n 2 -l /home/oracle/upgrade
Argument list for [/u01/app/oracle/product/12.2.0.1/dbhome_1/rdbms/admin/catctl.pl] Run in c = 0
Do not run in C = 0
Input Directory d = 0
Echo OFF e = 1
Simulate E = 0
Forced cleanup F = 0
Log Id i = 0
Child Process I = 0
Log Dir l = /home/oracle/upgrade
Priority List Name L = 0
Upgrade Mode active M = 0
SQL Process Count n = 2
SQL PDB Process Count N = 0
Open Mode Normal o = 0
Start Phase p = 0
End Phase P = 0
Reverse Order r = 0
AutoUpgrade Resume R = 0
Script s = 0
Serial Run S = 0
RO User Tablespaces T = 0
Display Phases y = 0
Debug catcon.pm z = 0
Debug catctl.pl Z = 0
catctl.pl VERSION: [12.2.0.1.0] STATUS: [production] BUILD: [RDBMS_12.2.0.1.0_LINUX.X64_170125] ...
...
...
LOG FILES: (/home/oracle/upgrade/catupgrd*.log)
Upgrade Summary Report Located in:
/home/oracle/upgrade/upg_summary.log
Grand Total Upgrade Time: [0d:1h:38m:21s] oracle@serveroracle:/home/oracle/upgrade/ [ORCL]

Once the upgrade has finished without errors, we can run postupgrade_fixups.sql.

SQL*Plus: Release 12.2.0.1.0 Production on Thu May 4 23:17:10 2017
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL> @postupgrade_fixups.sql

I hope this article will help.

 

Cet article Upgrade to Oracle 12.2 with dbupgrade utility est apparu en premier sur Blog dbi services.

SQL Server 2017: first steps with Python Service

Wed, 2017-05-10 02:54

Last year, I wrote blogs about SQL Server 2016 and its integration of the R language.
Just for information, you can find these blogs about SQL Server 2016 R Services here:

Now, the next version of SQL Server, officially named SQL Server 2017, also integrates the Python language.
The first step is to install the Python Services during the installation of SQL Server.
Python01
In the Instance Features > Database Engine Services, you have a new structure named “Machine Learning Services (In-Database)” with R and Python.
I check the “Python” box and click the Next button.
After the classical instance, server and Database Engine configurations, a “Consent to install Python” window needs your acceptance before going to the next installation step.
Python02
After that, the installation begins and fails… :-(
Python03
The eternal question, WHY???…
Like the majority of people, I didn't read the text in the “Consent to install Python” window.
The text explains: “By clicking Next, you will start the download and installation of these packages to your machine.”. This means that you need Internet access.
You can also go to the installation log file to see the error message:
Python10
After enabling Internet access to download all the packages necessary to use Python, the installation was successful. :-D
Python04
Out of curiosity, I had a look at the installation file and found the feature parameter for a scripted installation of Python. In addition to ADVANCEDANALYTICS, the parameter to install Python is SQL_INST_MPY.
Python05
Like for R, in the instance configuration, you need to set 'external scripts enabled' to 1.
If you haven't read my article on SQL Server 2016: R Services, here is again the method to find and configure the setting:
Script to search the configuration:

SELECT * FROM sys.configurations WHERE name='external scripts enabled'

Python06
As you can see in the result, this configuration is not dynamic and needs a restart of the engine.
After a restart, the configuration is enabled.
Python07
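If the option is not enabled yet, the change itself is a standard sp_configure call; here is a minimal sketch (to be run with sufficient privileges, followed by the restart mentioned above):

EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE WITH OVERRIDE;
-- verify the value after the restart
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'external scripts enabled';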
Time to test a query….
As in my article SQL Server 2016 – R Services Tips: How to find installed packages using T-SQL?, I will list all installed packages.
I have created the query using the pip and pandas packages:

  • pip is a package management system used to install and manage software packages
  • Pandas is a library for data manipulation and analysis

The query uses the stored procedure sp_execute_external_script and Python as parameter for the language:

exec sp_execute_external_script  @language =N'Python',
@script=N'import pip
import pandas
installed_packages = pip.get_installed_distributions()
installed_packages_list = sorted(["%s %s" % (i.key,i.version)
     for i in installed_packages])
OutputDataSet = pandas.DataFrame(installed_packages_list)'

Python08

 

You can see that you have 125 packages installed for Python with the CTP2 of SQL Server 2017.
Have fun with Python on SQL Server… 8-)

 

Cet article SQL Server 2017: first steps with Python Service est apparu en premier sur Blog dbi services.

APEX Connect 2017 – Day 1

Tue, 2017-05-09 18:26

This year again the APEX connect conference spans over three days with mixed topics around APEX, like JavaScript, CSS, SQL and much more.
After the welcome speech and the keynote about “Reconciling APEX and the Thick Database Paradigm” by Bryn Llewellyn, I decided to attend presentations on the following topics:
– Temporal Validity and Flashback Data Archive
– Universal Theme and Custom Themes in APEX 5.1
– Using REST Services in APEX applications the nice way
– Uncover the Truth (of APEX application)
– Browser Developer Tools for APEX developers
– Docker for Dummies

Thick Database Paradigm:
What was highlighted by Bryn Llewellyn is that the key to proper database development is to encapsulate your database through PL/SQL APIs.
When you work on DB development, make your environment such that it can easily be rebuilt from scratch with scripts, so you have no fear of breaking anything.
Your schema user should have only as many rights as needed, so you keep your environment safe.
If you build proper APIs, no matter what kind of client application uses your data (APEX, REST, Java web app, …), it will be able to interface with it.
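As a purely hypothetical illustration of that idea (the names below are mine, not from the keynote), such an encapsulation could look like this:

-- Clients never touch the tables directly; they only get EXECUTE on the package
CREATE OR REPLACE PACKAGE emp_api AS
  FUNCTION  get_salary(p_empno IN NUMBER) RETURN NUMBER;
  PROCEDURE raise_salary(p_empno IN NUMBER, p_pct IN NUMBER);
END emp_api;
/
GRANT EXECUTE ON emp_api TO apex_app_user;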

Temporal Validity and Flashback Data Archive:
There is an increasing demand for data history and audit.
Data history means not only keeping track of past data but also managing different versions of the same data over time (e.g. a customer delivery address). This is managed by Temporal Validity.
Oracle 12c allows you to automatically manage such time-dependent data by using “ADD PERIOD FOR” on a table.
When retrieving the data use “AS OF PERIOD FOR” in the select statement.
Details can be found on the Oracle website:
Implementing temporal validity
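A minimal sketch of what this looks like (the table and column names below are hypothetical):

-- Declare a valid-time period on existing start/end columns
ALTER TABLE customer_address
  ADD PERIOD FOR valid_time (valid_start, valid_end);

-- Only return the address versions that were valid on a given date
SELECT cust_id, address
FROM   customer_address AS OF PERIOD FOR valid_time DATE '2017-01-01';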
Audit can be managed using the well-known trigger approach, with all the issues it can generate, but also automatically by using Flashback Data Archive.
In this second case the audit data is written into a specified tablespace for which you define the data retention period. SYS_FBA tables are created automatically, and the information tracked is managed by setting the context level. This is a very powerful tool as it also takes DML changes into account.
Also very important for audit purposes: flashback data cannot be modified.
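For illustration, a minimal sketch of enabling such an archive (the tablespace, retention and table names are hypothetical):

-- Create the archive with its retention period, then assign a table to it
CREATE FLASHBACK ARCHIVE fda_audit TABLESPACE fda_ts RETENTION 5 YEAR;
ALTER TABLE scott.emp FLASHBACK ARCHIVE fda_audit;

-- Query the data as it was one hour ago
SELECT * FROM scott.emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);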
You can find further information on the following blog:
Oracle 12c Flashback Data Archive

Universal Theme and Custom Themes in APEX 5.1:
After a brief overview of Theme and Templates history in APEX, we were shown how easy (at least it seems) it is to create and manage custom Theme and Templates.
Template options introduced in APEX 5 aim to reduce the number of templates for a specific “object” type to a minimum in order to ease maintenance.
Live template options have been introduced with APEX 5.1 to have a preview of the changes at run time and facilitate their usage.
Theme subscription allows to distribute changes made to a master Theme which can now be defined at workspace level.
Theme styles allow you to have a dedicated CSS file on top of your application's standard CSS and to define user-based styles from the Theme Roller.
Note: Themes based on JQuery Mobile for mobile applications should no longer be used; rather use the responsive Universal Theme, as JQuery UI hasn't been updated for a long time and might have issues with the new JQuery core version that might be used in future versions of APEX.

Using REST Services in APEX applications the nice way:
The definition of REST is based on 3 pillars:
– Resources
– Methods (GET, PUT, POST, DELETE, PATCH)
– Representation (JSON, HTML, CSV, …)
The new REST Client Assistant packaged application in APEX 5.1 will be of great help for developers, as it generates the procedures required to parse the JSON data returned by a given REST data service URL, as well as the underlying SQL query to display the data in a report.
When the amount of data becomes too large, REST data services can return it in a paginated fashion, which needs to be supported on the client side. At this point only classic reports support that feature in APEX 5.1. Filtering on the data queried from the service also needs to be managed. The REST Data Sample application shows how to implement the different kinds of interaction with REST data services based on Oracle standards.
There will be improvements in supporting REST Data service in the upcoming version 5.2 of APEX, such as remote SQL execution.

Uncover the Truth (of APEX application):
When you have to modify an existing APEX application or take over from customer development, you need to understand the heart of the application, which can be a challenge. To do so you need to identify its structure and how the different elements are used and interact.
Various people are interested in this:
– DB developers
– APEX developers
– Cloud developers
– Project leaders
This is all about:
– Functionality (Page function, application logic, Interfaces)
– Complexity (APEX components, PL/SQL objects, JavaScript, CSS, DB objects)
– Transparency (who changed, when, Conditions, relations between pages)
There are already different tools in APEX allowing to see different aspects of those data:
– Searches
– History
– Application utilities
– Reports
– SQL Workshop
So it can be cumbersome to walk through all those.
We were presented a self-developed tool which can be seen as a kind of “dashboard” to analyze everything in one place, based on all sorts of charts reading the APEX metadata tables. I'm looking forward to seeing it released next summer.

Browser Developer Tools for APEX developers:
The IDE for APEX is the web browser, so it’s important to know about the developer tools provided in your web browser.
Each and every web browser has its own tools, some being better than others.
The most used browser has also the most complete tool set: Chrome (Firefox Developer Edition is also worth looking at)
As there is a lot of information to be displayed in the developer tools, it's highly recommended to detach the window from the browser and display it on a secondary screen when possible.
The CSS usage view shows all levels of the style sheet, helping you understand what is set where.
Specifications of the web page can be modified on the fly to understand their impact.
Debugging of JavaScript can be done with setting breakpoints.
The Application panel allows you to monitor cookies.
Device mode allows you to emulate mobile devices and even set the network speed to get a better understanding of the end-user experience.
Even remote debugging can be used on attached devices.

Docker for Dummies
I had already heard a lot about Docker but never took the time to look into it, so I took the opportunity to have it shown today.
What is Docker? A lightweight VM?
No, a container!
It allows resources to be shared and gets rid of things like the hypervisor and a full target OS (which are needed for VMs), which makes it light.
Main characteristics are:
– You can put almost anything into it
– It stays locked
– It’s efficient for transport
– It's small and lightweight
– It's scalable
Actually it can be seen more as a software delivery platform.
The basic component is an image:
– It contains File System and parameters
– It has no state
– Layers are read only
– Layers are shared
– Updates require only updated files to be downloaded
A container is a running instance of an image. It adds a R/W layer on top of the image.
Images are not cloned.
Persistence of the data used in the container is managed by mapping a local folder into the container.
While Docker is command-line based, there is a GUI available called Kitematic.
Resources:
Docker website
Kitematic website
You can find further details on following blog:
Overview and installation

 

Cet article APEX Connect 2017 – Day 1 est apparu en premier sur Blog dbi services.

Grid Infrastructure Installation on SLES 12 SP1

Tue, 2017-05-09 01:43

This week I needed to install Oracle Grid Infrastructure 12c release 1 in a SLES 12 SP1 environment. Everything worked fine until I ran the root.sh at the end of the installation. Here’s a quick description of the problem and the workaround.

The root.sh script ran into errors and the installation was completely unsuccessful:
oracle:/u00/app/grid/12.1.0.2 # /u00/app/grid/12.1.0.2/root.sh
Performing root user operation.
 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u00/app/grid/12.1.0.2
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 39: [: -eq: unary operator expected
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 100: [: too many arguments
   Copying dbhome to /usr/local/bin ...
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 100: [: too many arguments
   Copying oraenv to /usr/local/bin ...
/u00/app/grid/12.1.0.2/install/utl/rootinstall.sh: line 100: [: too many arguments
   Copying coraenv to /usr/local/bin ...
 
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u00/app/grid/12.1.0.2/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node oracle_grid successfully pinned.
2017/03/31 09:56:43 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
 
PRCR-1006 : Failed to add resource ora.ons for ons
PRCR-1115 : Failed to find entities of type resource type that match filters (TYPE_NAME ends .type) and contain attributes
CRS-0184 : Cannot communicate with the CRS daemon.
2017/03/31 09:57:04 CLSRSC-180: An error occurred while executing the command 'srvctl add ons' (error code 0)
 
2017/03/31 09:57:55 CLSRSC-115: Start of resource 'ora.evmd' failed
 
2017/03/31 09:57:55 CLSRSC-202: Failed to start EVM daemon
 
The command '/u00/app/grid/12.1.0.2/perl/bin/perl -I/u00/app/grid/12.1.0.2/perl/lib -I/u00/app/grid/12.1.0.2/crs/install /u00/app/grid/12.1.0.2/crs/install/roothas.pl ' execution failed

When we run crsctl stat res -t:

grid@oracle_grid:/u00/app/grid/product/12.1.0.2/grid/bin> ./crsctl stat res -t 
 -------------------------------------------------------------------------------- 
 Name Target State Server State details 
 -------------------------------------------------------------------------------- 
 Cluster Resources 
 -------------------------------------------------------------------------------- 
 ora.cssd 
 1 OFFLINE OFFLINE STABLE 
 ora.diskmon 
 1 OFFLINE OFFLINE STABLE 
 ora.evmd 
 1 OFFLINE OFFLINE STABLE 
 --------------------------------------------------------------------------------

After trying multiple times with other Oracle Grid Infrastructure versions from 11.2.0.4 to 12.2.0.1, I had to open a service request with Oracle, and they provided me with the following workaround:

Once root.sh has failed, we do not close the GUI installer window because we will use it to complete the installation once root.sh succeeds. First, we have to deconfigure the failed installation:

oracle_grid:/u00/app/grid/product/12.1.0.2/grid/crs/install # ./roothas.pl -verbose -deconfig -force
oracle_grid:/u00/app/grid/product/12.1.0.2/grid/crs/install # . rootcrs.sh -deconfig -force

Then we modify the /etc/ld.so.conf file by adding /lib64/noelision as the first entry. The file should look like this:

oracle@oracle_grid:/u00/app/oracle/product/12.1.0.2/dbhome_1/dbs/ [DORSTREA] cat /etc/ld.so.conf
/lib64/noelision
/usr/local/lib64
/usr/local/lib
include /etc/ld.so.conf.d/*.conf

Finally we create a symbolic link between $GI_HOME/lib/libpthread.so.0 and /lib64/noelision/libpthread-2.19.so

lrwxrwxrwx 1 root root            35 Apr 11 15:56 libpthread.so.0 -> /lib64/noelision/libpthread-2.19.so

We then just have to run root.sh again, and finally it works fine:

oracle_grid:/u00/app/grid/product/12.1.0.2/grid # . root.sh
Performing root user operation.
 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u00/app/grid/product/12.1.0.2/grid
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
 
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u00/app/grid/product/12.1.0.2/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node oracle_grid successfully pinned.
2017/04/11 15:56:37 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
 
oracle_grid    /u00/app/grid/product/12.1.0.2/grid/cdata/oracle_grid/backup_20170411_155653.olr     0    
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oracle_grid'
CRS-2673: Attempting to stop 'ora.evmd' on 'oracle_grid'
CRS-2677: Stop of 'ora.evmd' on 'oracle_grid' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'oracle_grid' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/04/11 15:57:09 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

After the root.sh is successfully completed, we continue with the Oracle Installer, and everything is correctly configured for the Oracle Grid Infrastructure.

I will keep you informed about the evolution of the bug, and I will test the Oracle Grid Infrastructure installation under SLES 12 SP2 as soon as possible…

 

 

 

Cet article Grid Infrastructure Installation on SLES 12 SP1 est apparu en premier sur Blog dbi services.

What is in a transportable tablespace dumpfile?

Mon, 2017-05-08 15:20

On the 31st of May in Düsseldorf, at DOAG Datenbank, I'll talk about transportable tablespaces and pluggable databases. Both methods transport data physically; the difference is in the transport of the metadata, which can be more flexible when transported logically, as with TTS, but faster when transported physically with PDB. I have a lot of demos to show transportable tablespaces with RMAN, and the different cloning features available in 12cR2. If I have time I'll show what is inside the dumpfile when using Data Pump to export the metadata. Here is the idea.

expdp transport_tablespaces

Here is how we export metadata with Data Pump for transportable tablespaces.


expdp system/oracle@//localhost/PDB1 directory=VAR_TMP dumpfile=expdat.dmp transport_tablespaces=USERS exclude=table_statistics,index_statistics;
 
...
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/u01/oradata/tmp/expdat.dmp
******************************************************************************
Datafiles required for transportable tablespace USERS:
/u01/oradata/CDB1/PDB1/users01.dbf

The metadata is exported into expdat.dmp and the data resides in the original datafile. The dumpfile is a binary file, but there is a way to extract the metadata as DDL using impdp.

impdp sqlfile

Here I run impdp with sqlfile to generate all the DDL into this file. Nothing is imported and the datafiles are not read, which is why I've just put a bogus value for transport_datafiles:


impdp system/oracle@//localhost/PDB1 directory=VAR_TMP transport_datafiles=blahblahblah sqlfile=sqlfile.sql ;

No error. Only the dumpfile has been read, and here is an extract of the DDL in sqlfile.sql concerning the PK_DEPT and PK_EMP indexes:


-- new object type path: TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
ALTER TABLE "SCOTT"."DEPT" ADD CONSTRAINT "PK_DEPT" PRIMARY KEY ("DEPTNO")
USING INDEX (CREATE UNIQUE INDEX "SCOTT"."PK_DEPT" ON "SCOTT"."DEPT" ("DEPTNO")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(SEG_FILE 26 SEG_BLOCK 138 OBJNO_REUSE 73197
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ) ENABLE;
ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "PK_EMP" PRIMARY KEY ("EMPNO")
USING INDEX (CREATE UNIQUE INDEX "SCOTT"."PK_EMP" ON "SCOTT"."EMP" ("EMPNO")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(SEG_FILE 26 SEG_BLOCK 154 OBJNO_REUSE 73205
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ) ENABLE;
-- new object type path: TRANSPORTABLE_EXPORT/INDEX_STATISTICS
-- new object type path: TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
ALTER TABLE "SCOTT"."EMP" ADD CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO")
REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE;
-- new object type path: TRANSPORTABLE_EXPORT/TABLE_STATISTICS
-- new object type path: TRANSPORTABLE_EXPORT/STATISTICS/MARKER
-- new object type path: TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK

This looks like the DDL used to re-create the same objects, except that we can see two storage attributes that are not documented:

  • SEG_FILE and SEG_BLOCK
  • OBJNO_REUSE
SEG_FILE and SEG_BLOCK

When you create an empty table, you just provide the tablespace name and Oracle will allocate the first extent, with the segment header. You don't choose the data placement within the tablespace. But here we are in a different case: the extents already exist in the datafiles that we transport, and the DDL must just map to them. This is why in this case the segment header file number and block number are specified. The remaining extent allocation information is stored within the datafiles (Locally Managed Tablespace); only the segment header must be known by the dictionary.

As an example, when I look at the database where the export comes from, I can see that the attributes for PK_EMP (SEG_FILE 26 SEG_BLOCK 154) are the relative file number and header block number of the PK_EMP segment:


10:49:10 SQL> select owner,segment_name,header_file,header_block,blocks,extents,tablespace_name,relative_fno from dba_segments where owner='SCOTT';
 
OWNER SEGMENT_NAME HEADER_FILE HEADER_BLOCK BLOCKS EXTENTS TABLESPACE_NAME RELATIVE_FNO
----- ------------ ----------- ------------ ------ ------- --------------- ------------
SCOTT DEPT 31 130 8 1 USERS 26
SCOTT EMP 31 146 8 1 USERS 26
SCOTT SALGRADE 31 162 8 1 USERS 26
SCOTT PK_DEPT 31 138 8 1 USERS 26
SCOTT PK_EMP 31 154 8 1 USERS 26

This file identifier is a relative file number within the tablespace, which means that there is no need to change it when a tablespace is transported.

You will see exactly the same information in the database where you import the tablespace (except for HEADER_FILE which is the absolute file number).

OBJNO_REUSE

Each segment has a DATA_OBJECT_ID, which is referenced in each block and in the ROWIDs. This must not change when we transport a tablespace because the goal is that nothing has to be modified in the datafiles. For this reason, the data object id is exported with the metadata, as we can see for this PK_EMP example (OBJNO_REUSE 73205), and set to the same value in the target dictionary. Here are the data object IDs for the objects exported here:


10:49:20 SQL> select owner,object_name,object_type,object_id,data_object_id from dba_objects where owner='SCOTT';
 
OWNER OBJECT_NAME OBJECT_TYPE OBJECT_ID DATA_OBJECT_ID
----- ----------- ----------- --------- --------------
SCOTT DEPT TABLE 73196 73196
SCOTT PK_DEPT INDEX 73197 73197
SCOTT EMP TABLE 73198 73206
SCOTT PK_EMP INDEX 73199 73205
SCOTT BONUS TABLE 73200 73200
SCOTT SALGRADE TABLE 73201 73201

The OBJECT_ID will be different in the target, assigned in the same way as when we create an object, but this one is not referenced anywhere within the datafiles.

So what?

Usually, the metadata precedes the data. With transportable tablespaces, it is the opposite: data is there and metadata is re-created to map the data. This metadata is what is stored into the dumpfile exported to transport tablespaces.
From what you have seen, you can now understand that the RELATIVE_FNO and the DATA_OBJECT_ID are not unique within a database, but only within a tablespace. You can also understand that the Transportable Tablespace import duration does not depend on the size of the data, but is proportional to the number of objects (metadata). This is where Pluggable Databases are more efficient: the metadata is transported physically and the import duration does not depend on the number of objects, especially when it does not involve an upgrade to a new version and object recompilation.

 

Cet article What is in a transportable tablespace dumpfile? est apparu en premier sur Blog dbi services.

Oracle 12cR2 : Optimizer Statistics Advisor

Mon, 2017-05-08 10:47

The Optimizer Statistics Advisor is a new Advisor in Oracle 12.2.
The goal of this Advisor is to check the way you gather statistics on your database and, depending on what is found, it will make some recommendations on how you can improve the statistics gathering strategy in order to provide more efficient statistics to the CBO.
This Advisor is also able to generate remediation scripts to apply the statistics gathering “best practices”.
adv
The recommendations are based on 23 predefined rules :

SQL> select rule_id, name, rule_type, description from v$stats_advisor_rules;


RULE_ID NAME RULE_TYPE DESCRIPTION
---------- ----------------------------------- --------- -------------------------------------------------------------------------------------
0 SYSTEM
1 UseAutoJob SYSTEM Use Auto Job for Statistics Collection
2 CompleteAutoJob SYSTEM Auto Statistics Gather Job should complete successfully
3 MaintainStatsHistory SYSTEM Maintain Statistics History
4 UseConcurrent SYSTEM Use Concurrent preference for Statistics Collection
5 UseDefaultPreference SYSTEM Use Default Preference for Stats Collection
6 TurnOnSQLPlanDirective SYSTEM SQL Plan Directives should not be disabled
7 AvoidSetProcedures OPERATION Avoid Set Statistics Procedures
8 UseDefaultParams OPERATION Use Default Parameters in Statistics Collection Procedures
9 UseGatherSchemaStats OPERATION Use gather_schema_stats procedure
10 AvoidInefficientStatsOprSeq OPERATION Avoid inefficient statistics operation sequences
11 AvoidUnnecessaryStatsCollection OBJECT Avoid unnecessary statistics collection
12 AvoidStaleStats OBJECT Avoid objects with stale or no statistics
13 GatherStatsAfterBulkDML OBJECT Do not gather statistics right before bulk DML
14 LockVolatileTable OBJECT Statistics for objects with volatile data should be locked
15 UnlockNonVolatileTable OBJECT Statistics for objects with non-volatile should not be locked
16 MaintainStatsConsistency OBJECT Statistics of dependent objects should be consistent
17 AvoidDropRecreate OBJECT Avoid drop and recreate object seqauences
18 UseIncremental OBJECT Statistics should be maintained incrementally when it is beneficial
19 NotUseIncremental OBJECT Statistics should not be maintained incrementally when it is not beneficial
20 AvoidOutOfRange OBJECT Avoid Out of Range Histogram endpoints
21 UseAutoDegree OBJECT Use Auto Degree for statistics collection
22 UseDefaultObjectPreference OBJECT Use Default Object Preference for statistics collection
23 AvoidAnalyzeTable OBJECT Avoid using analyze table commands for statistics collection


24 rows selected.


SQL>

You can have a look at this blog if you want a little bit more information about these rules.
If you want to exclude some rules or some database objects from the Advisor's recommendations, you can define multiple filters (I will do that below).

Well, let’s see how to use the Advisor. The first step is to create a task which will run it :

DECLARE
tname VARCHAR2(32767);
ret VARCHAR2(32767);
BEGIN
tname := 'stat_advisor_1';
ret := DBMS_STATS.CREATE_ADVISOR_TASK(tname);
END;
/

The task is created :

SQL> select task_name, advisor_name, created, status from dba_advisor_tasks where advisor_name = 'Statistics Advisor';


TASK_NAME ADVISOR_NAME CREATED STATUS
------------------------------ ------------------------------ ------------------- -----------
STAT_ADVISOR_1 Statistics Advisor 04.05.2017-11:19:25 INITIAL


SQL>

Now, I want to define some filters.
The first one will disable the Advisor for all objects, the 2nd will enable it only on a specific table, and the 3rd and 4th will exclude two rules:

DECLARE
filter1 CLOB;
filter2 CLOB;
filter3 CLOB;
filter4 CLOB;
BEGIN
filter1 := DBMS_STATS.CONFIGURE_ADVISOR_OBJ_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => NULL,
ownname => NULL,
tabname => NULL,
action => 'DISABLE' );


filter2 := DBMS_STATS.CONFIGURE_ADVISOR_OBJ_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => NULL,
ownname => 'JOC',
tabname => 'T2',
action => 'ENABLE' );


filter3 := DBMS_STATS.CONFIGURE_ADVISOR_RULE_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => 'AvoidDropRecreate',
action => 'DISABLE' );


filter4 := DBMS_STATS.CONFIGURE_ADVISOR_RULE_FILTER(
task_name => 'STAT_ADVISOR_1',
stats_adv_opr_type => 'EXECUTE',
rule_name => 'UseGatherSchemaStats',
action => 'DISABLE' );
END;
/

All is ready, let’s run the task…

DECLARE
tname VARCHAR2(32767);
ret VARCHAR2(32767);
BEGIN
tname := 'stat_advisor_1';
ret := DBMS_STATS.EXECUTE_ADVISOR_TASK(tname);
END;
/

…and generate the report :

SQL> select dbms_stats.report_advisor_task('stat_advisor_1',null,'text','all','all') as report from dual;
GENERAL INFORMATION
-------------------------------------------------------------------------------
Task Name : STAT_ADVISOR_1
Execution Name : EXEC_2172
Created : 05-04-17 11:34:51
Last Modified : 05-04-17 11:35:10
-------------------------------------------------------------------------------
SUMMARY
-------------------------------------------------------------------------------
For execution EXEC_2172 of task STAT_ADVISOR_1, the Statistics Advisor has no
findings.

-------------------------------------------------------------------------------
SQL>

Cool! Nothing to report regarding statistics gathering on my table JOC.T2 (see filter2 above).
But how does the Advisor react when I run it after having deleted the statistics on this table?
SQL> exec dbms_stats.delete_table_stats(ownname=>'JOC',tabname=>'T2');


PL/SQL procedure successfully completed.


SQL> DECLARE
2 tname VARCHAR2(32767);
3 ret VARCHAR2(32767);
4 BEGIN
5 tname := 'stat_advisor_1';
6 ret := DBMS_STATS.EXECUTE_ADVISOR_TASK(tname);
7 END;
8 /


PL/SQL procedure successfully completed.


SQL> select dbms_stats.report_advisor_task('stat_advisor_1',null,'text','all','all') as report from dual;
GENERAL INFORMATION
-------------------------------------------------------------------------------
Task Name : STAT_ADVISOR_1
Execution Name : EXEC_2182
Created : 05-04-17 11:34:51
Last Modified : 05-04-17 11:44:22
-------------------------------------------------------------------------------
SUMMARY
-------------------------------------------------------------------------------
For execution EXEC_2182 of task STAT_ADVISOR_1, the Statistics Advisor has 1
finding(s). The findings are related to the following rules: AVOIDSTALESTATS.
Please refer to the finding section for detailed information.

-------------------------------------------------------------------------------
FINDINGS
-------------------------------------------------------------------------------
Rule Name: AvoidStaleStats
Rule Description: Avoid objects with stale or no statistics
Finding: There are 1 object(s) with no statistics.
Schema:
JOC
Objects:
T2


Recommendation: Gather Statistics on those objects with no statistics.
Example:
-- Gathering statistics for tables with stale or no statistics in schema, SH:
exec dbms_stats.gather_schema_stats('SH', options => 'GATHER AUTO')
Rationale: Stale statistics or no statistics will result in bad plans.
-------------------------------------------------------------------------------

It seems to work well. The Advisor detected that there are no stats on the table, and a rule was triggered.
And what about the remediation scripts? First, we have to generate them:

VARIABLE script CLOB
DECLARE
tname VARCHAR2(32767);
BEGIN
tname := 'stat_advisor_1';
:script := DBMS_STATS.SCRIPT_ADVISOR_TASK(tname);
END;
/


PL/SQL procedure successfully completed.

And then display them :

set linesize 3000
set long 500000
set pagesize 0
set longchunksize 100000
set serveroutput on


DECLARE
v_len NUMBER(10);
v_offset NUMBER(10) :=1;
v_amount NUMBER(10) :=10000;
BEGIN
v_len := DBMS_LOB.getlength(:script);
WHILE (v_offset < v_len)
LOOP
DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(:script,v_amount,v_offset));
v_offset := v_offset + v_amount;
END LOOP;
END;
13 /
-- Script generated for the recommendations from execution EXEC_2182
-- in the statistics advisor task STAT_ADVISOR_1
-- Script version 12.2
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- Scripts for rule USECONCURRENT
-- Rule Description: Use Concurrent preference for Statistics Collection
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- Scripts for rule USEDEFAULTPREFERENCE
-- Rule Description: Use Default Preference for Stats Collection
-- Set global preferenes to default values.
-- Scripts for rule USEDEFAULTOBJECTPREFERENCE
-- Rule Description: Use Default Object Preference for statistics collection
-- Setting object-level preferences to default values
-- setting CASCADE to default value for object level preference
-- setting ESTIMATE_PERCENT to default value for object level preference
-- setting METHOD_OPT to default value for object level preference
-- setting GRANULARITY to default value for object level preference
-- setting NO_INVALIDATE to default value for object level preference
-- Scripts for rule USEINCREMENTAL
-- Rule Description: Statistics should be maintained incrementally when it is beneficial
-- Turn on the incremental option for those objects for which using incremental is helpful.
-- Scripts for rule UNLOCKNONVOLATILETABLE
-- Rule Description: Statistics for objects with non-volatile should not be locked
-- Unlock statistics for objects that are not volatile.
-- Scripts for rule LOCKVOLATILETABLE
-- Rule Description: Statistics for objects with volatile data should be locked
-- Lock statistics for volatile objects.
-- Scripts for rule NOTUSEINCREMENTAL
-- Rule Description: Statistics should not be maintained incrementally when it is not beneficial
-- Turn off incremental option for those objects for which using incremental is not helpful.
-- Scripts for rule USEAUTODEGREE
-- Rule Description: Use Auto Degree for statistics collection
-- Turn on auto degree for those objects for which using auto degree is helpful.
-- Scripts for rule AVOIDSTALESTATS
-- Rule Description: Avoid objects with stale or no statistics
-- Gather statistics for those objcts that are missing or have no statistics.
-- Scripts for rule MAINTAINSTATSCONSISTENCY
-- Rule Description: Statistics of dependent objects should be consistent
-- Gather statistics for those objcts that are missing or have no statistics.
declare
obj_filter_list dbms_stats.ObjectTab;
obj_filter dbms_stats.ObjectElem;
obj_cnt number := 0;
begin
obj_filter_list := dbms_stats.ObjectTab();
obj_filter.ownname := 'JOC';
obj_filter.objtype := 'TABLE';
obj_filter.objname := 'T2';
obj_filter_list.extend();
obj_cnt := obj_cnt + 1;
obj_filter_list(obj_cnt) := obj_filter;
dbms_stats.gather_database_stats(
obj_filter_list=>obj_filter_list);
end;
/

PL/SQL procedure successfully completed.
SQL>

It was a very simple demo, but as you can see above, the Advisor provides a small script to adjust what is wrong or what is missing concerning the statistics of the table.

Conclusion :
Once you have upgraded your database to Oracle 12.2, don't hesitate to set up the new Statistics Advisor. It is easy to deploy and can be fully personalized depending on what you want to check (which objects? which rules?). Moreover, it has been developed by the same team that develops and maintains the CBO. Therefore, they know which statistics the Optimizer needs!

 

Cet article Oracle 12cR2 : Optimizer Statistics Advisor est apparu en premier sur Blog dbi services.

PostgreSQL on a Pure Storage All Flash Array – Populating PostgreSQL

Mon, 2017-05-08 07:51

In the last post we explained how we did the setup for connecting a PostgreSQL server to the Pure Storage array. In this post we'll actually give the array something to work with. What we use for the tests is pgbench, which is included with PostgreSQL and can be used to load a database and then perform a TPC-B like performance test against the loaded data. Btw: pgbench is a great tool when you want to run the same tests against different configurations of PostgreSQL, or the same configuration of PostgreSQL against different physical or virtual hardware configurations.

To begin with, let's create a dedicated database to load the data into:

postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "create database purestorage" postgres
CREATE DATABASE
Time: 552.693 ms

We'll use pgbench in initialization (“-i”) mode with a scale factor (“-s”) of 10000 to populate the database. This will create the pgbench_accounts, pgbench_branches, pgbench_history and pgbench_tellers tables and will result in 1000000000 rows in total for the pgbench_accounts table.

postgres@pgpurestorage:/home/postgres/ [pgpure] pgbench -i -s 10000 purestorage
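As a side note, pgbench derives the row counts from the scale factor (1 row in pgbench_branches, 10 in pgbench_tellers and 100000 in pgbench_accounts per scale unit), so a quick sanity check after the load could look like this:

postgres@pgpurestorage:/home/postgres/ [pgpure] psql purestorage
pgpurestorage/purestorage MASTER (postgres@5432) # select count(*) from pgbench_branches;  -- expected: 10000
pgpurestorage/purestorage MASTER (postgres@5432) # select count(*) from pgbench_tellers;   -- expected: 100000
pgpurestorage/purestorage MASTER (postgres@5432) # select count(*) from pgbench_accounts;  -- expected: 1000000000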

The total run-time was around 2.5 hours (this includes the index creation and the vacuum at the end) and this is the size of the database:

pgpurestorage/postgres MASTER (postgres@5432) # \l+
                                                                      List of databases
┌─────────────┬──────────┬──────────┬─────────────┬─────────────┬───────────────────────┬─────────┬────────────┬────────────────────────
│    Name     │  Owner   │ Encoding │   Collate   │    Ctype    │   Access privileges   │  Size   │ Tablespace │                Descript
├─────────────┼──────────┼──────────┼─────────────┼─────────────┼───────────────────────┼─────────┼────────────┼────────────────────────
│ postgres    │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 7629 kB │ pg_default │ default administrative 
│ purestorage │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 146 GB  │ pg_default │                        
│ template0   │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │ =c/postgres          ↵│ 7497 kB │ pg_default │ unmodifiable empty data
│             │          │          │             │             │ postgres=CTc/postgres │         │            │                        
│ template1   │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │ =c/postgres          ↵│ 7497 kB │ pg_default │ default template for ne
│             │          │          │             │             │ postgres=CTc/postgres │         │            │                        
└─────────────┴──────────┴──────────┴─────────────┴─────────────┴───────────────────────┴─────────┴────────────┴────────────────────────
(4 rows)

The biggest table is the pgbench_accounts table:

pgpurestorage/purestorage MASTER (postgres@5432) # \d+
                           List of relations
┌────────┬──────────────────┬───────┬──────────┬─────────┬─────────────┐
│ Schema │       Name       │ Type  │  Owner   │  Size   │ Description │
├────────┼──────────────────┼───────┼──────────┼─────────┼─────────────┤
│ public │ pgbench_accounts │ table │ postgres │ 125 GB  │             │
│ public │ pgbench_branches │ table │ postgres │ 392 kB  │             │
│ public │ pgbench_history  │ table │ postgres │ 0 bytes │             │
│ public │ pgbench_tellers  │ table │ postgres │ 4360 kB │             │
└────────┴──────────────────┴───────┴──────────┴─────────┴─────────────┘
(4 rows)

… which contains 1000000000 rows:

pgpurestorage/purestorage MASTER (postgres@5432) # select count(*) from pgbench_accounts;
┌────────────┐
│   count    │
├────────────┤
│ 1000000000 │
└────────────┘
(1 row)

Time: 219538.051 ms (03:39.538)

Here are the screenshots from the Pure Storage management console for the duration of the run (which started some minutes before 20:00). Orange are write operations, blue is read:

purestorage-pg-load-1
purestorage-pg-load-2

As you can see we are far away from saturating the storage. So, let's do the same load again, but this time with two sessions into two databases in parallel:

# session one 
postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "drop database if exists purestorage" postgres
DROP DATABASE
Time: 983.297 ms
postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "create database purestorage" postgres
CREATE DATABASE
Time: 514.140 ms
postgres@pgpurestorage:/home/postgres/ [pgpure] pgbench -i -s 10000 purestorage

# session two
postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "drop database if exists purestorage2" postgres
DROP DATABASE
Time: 0.110 ms
postgres@pgpurestorage:/home/postgres/ [pgpure] psql -c "create database purestorage2" postgres
CREATE DATABASE
Time: 274.576 ms
postgres@pgpurestorage:/home/postgres/ [pgpure] pgbench -i -s 10000 purestorage2

This screenshots from the Pure Storage console:

purestorage-pg-load-2-1
purestorage-pg-load-2-2

It took more than twice as long to load the 2000000000 rows, but the limit is not the storage. The average bandwidth went up from around 65 MB/sec to around 90 MB/sec. What is interesting to see is that we only need around 45GB of real storage:

purestorage-pg-load-2-reduction

This means we get an almost 9 times compression/de-duplication ratio on the storage layer. Remember that each database has a size of 146GB ((146 * 2 = 292) / 9 ≈ 32GB, plus the WAL files):

                                                                     List of databases
┌──────────────┬──────────┬──────────┬─────────────┬─────────────┬───────────────────────┬─────────┬────────────┬──────────────────────────────────────
│     Name     │  Owner   │ Encoding │   Collate   │    Ctype    │   Access privileges   │  Size   │ Tablespace │                Description           
├──────────────┼──────────┼──────────┼─────────────┼─────────────┼───────────────────────┼─────────┼────────────┼──────────────────────────────────────
│ postgres     │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 7629 kB │ pg_default │ default administrative connection dat
│ purestorage  │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 146 GB  │ pg_default │                                      
│ purestorage2 │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │                       │ 146 GB  │ pg_default │                                      
│ template0    │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │ =c/postgres          ↵│ 7497 kB │ pg_default │ unmodifiable empty database          
│              │          │          │             │             │ postgres=CTc/postgres │         │            │                                      
│ template1    │ postgres │ UTF8     │ en_US.UTF-8 │ en_US.UTF-8 │ =c/postgres          ↵│ 7497 kB │ pg_default │ default template for new databases   
│              │          │          │             │             │ postgres=CTc/postgres │         │            │                                      
└──────────────┴──────────┴──────────┴─────────────┴─────────────┴───────────────────────┴─────────┴────────────┴──────────────────────────────────────

So what happens when the file system gets full on the host but there is still storage available in the back-end? Let's copy one of the test databases, that should almost fill the file system:

pgpurestorage/postgres MASTER (postgres@5432) # \! df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/cl_pgpurestorage-root   26G  2.2G   24G   9% /
devtmpfs                           3.9G     0  3.9G   0% /dev
tmpfs                              3.9G   16K  3.9G   1% /dev/shm
tmpfs                              3.9G   41M  3.8G   2% /run
tmpfs                              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                         1014M  183M  832M  19% /boot
tmpfs                              781M     0  781M   0% /run/user/1000
/dev/mapper/vgpure-lvpure          450G  302G  149G  68% /u02/pgdata
pgpurestorage/postgres MASTER (postgres@5432) # create database test template = purestorage;

CREATE DATABASE
Time: 3312062.975 ms (55:12.063)
pgpurestorage/postgres MASTER (postgres@5432) # 
pgpurestorage/postgres MASTER (postgres@5432) # \! df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/cl_pgpurestorage-root   26G  2.2G   24G   9% /
devtmpfs                           3.9G     0  3.9G   0% /dev
tmpfs                              3.9G   16K  3.9G   1% /dev/shm
tmpfs                              3.9G   41M  3.8G   2% /run
tmpfs                              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                         1014M  183M  832M  19% /boot
tmpfs                              781M     0  781M   0% /run/user/1000
/dev/mapper/vgpure-lvpure          450G  449G  1.7G 100% /u02/pgdata
pgpurestorage/postgres MASTER (postgres@5432) # 

Almost full, so let's create another database:

pgpurestorage/postgres MASTER (postgres@5432) # create database test2 template = purestorage;
ERROR:  could not write to file "base/16562/16534": No space left on device
Time: 34449.626 ms (00:34.450)

So, the behavior is as expected.

In the next post we'll run some standard pgbench benchmarks against the data set, although the numbers will probably not be very good as we are running in an ESX test environment and this seems to be the bottleneck in our case. We'll have to analyze the sar statistics for that, but there was no time for it yet.
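Just to give an idea of what such a run looks like: a standard TPC-B like test could be started as follows (the number of clients, threads and the duration are only placeholders here, not the values we will use):

postgres@pgpurestorage:/home/postgres/ [pgpure] pgbench -c 10 -j 4 -T 600 -P 60 purestorage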

What I can say about the storage system so far: I like the simplicity of the management console. There is not much more you can do than create hosts, volumes and the connections between them, and that's it. Reduction happens automatically and you have the option to replicate to another storage system. Snapshots are available as well, but we have not tested them yet.

 

Cet article PostgreSQL on a Pure Storage All Flash Array – Populating PostgreSQL est apparu en premier sur Blog dbi services.

Can you open PDB$SEED read write?

Sun, 2017-05-07 10:27

If you are in multitenant, you probably already felt the desire to open the PDB$SEED in READ WRITE mode.

  • Can you open PDB$SEED read write yourself? Yes and No.
  • Should you open PDB$SEED read write yourself? Yes and No.
  • How to run upgrade scripts that need to write to PDB$SEED? catcon.pl


In 12.1 you have no reason to open the seed read write yourself. In 12.2 there is one reason: when you are in LOCAL UNDO mode, you may want to customize the UNDO tablespace.

12c in shared undo

I am in 12.1 or in 12.2 in shared undo mode:
SYS@CDB$ROOT SQL> select * from database_properties where property_name like '%UNDO%';
 
no rows selected

When the CDB is opened, the PDB$SEED is opened in read only mode.
SYS@CDB$ROOT SQL> show pdbs
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB01                          READ WRITE NO

I try to open the PDB$SEED in read write mode (FORCE is a shortcut to avoid having to close it first)
SYS@CDB$ROOT SQL> alter pluggable database pdb$seed open force;
 
Error starting at line : 1 in command -
alter pluggable database pdb$seed open force
Error report -
ORA-65017: seed pluggable database may not be dropped or altered
65017. 00000 - "seed pluggable database may not be dropped or altered"
*Cause: User attempted to drop or alter the Seed pluggable database which is not allowed.
*Action: Specify a legal pluggable database name.
SYS@CDB$ROOT SQL>

Obviously, this is impossible and clearly documented. PDB$SEED is not a legal pluggable database for this operation.

Oracle Script

There is an exception to that: internal Oracle scripts need to run statements in the PDB$SEED. They run with “_oracle_script”=true, and with this setting the operation is possible:

SYS@CDB$ROOT SQL> alter session set "_oracle_script"=true;
Session altered.
 
SYS@CDB$ROOT SQL> alter pluggable database pdb$seed open read write force;
Pluggable database PDB$SEED altered.

catcon.pl

Of course, when upgrading, there are phases where you need the seed opened read-write. But you don't do that yourself. The scripts to run in each container are called through catcon.pl which, by default, opens the seed read-write and ensures that the initial open mode is restored at the end, even in case of error.

-m mode in which PDB$SEED should be opened; one of the following values
may be specified:
- UNCHANGED - leave PDB$SEED in whatever mode it is already open
- READ WRITE (default)
- READ ONLY
- UPGRADE
- DOWNGRADE

I have the following “/tmp/show_open_mode.sql” script:

column name format a10
select name,open_mode,current_timestamp-open_time from v$containers;

I call it with catcon to run in PDB$SEED:

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -c 'PDB$SEED' -n 1 -d /tmp -l /tmp -b tmp show_open_mode.sql

Here is the output in /tmp/tmp0.log

CATCON_STATEMENT
--------------------------------------
catconExec(): @/tmp/show_open_mode.sql
SQL> SQL> column name format a10
SQL> select name,open_mode,current_timestamp-open_time from v$containers;
NAME OPEN_MODE CURRENT_TIMESTAMP-OPEN_TIME
---------- ---------- ---------------------------------------------------------------------------
PDB$SEED READ WRITE +000000000 00:00:00.471398
SQL>
END_RUNNING
------------------------------------------------------------------------------------------------------------------------
==== @/tmp/show_open_mode.sql Container:PDB$SEED Id:2 17-05-07 05:02:06 Proc:0 ====
SQL>
END_RUNNING
------------------------------------------------------------------------------------------------------------------------
==== @/tmp/show_open_mode.sql Container:PDB$SEED Id:2 17-05-07 05:02:06 Proc:0 ====

The PDB$SEED was opened READ WRITE to run the statements.

We can see that in alert.log:

alter pluggable database pdb$seed close immediate instances=all
ALTER SYSTEM: Flushing buffer cache inst=0 container=2 local
Pluggable database PDB$SEED closed
Completed: alter pluggable database pdb$seed close immediate instances=all
alter pluggable database pdb$seed OPEN READ WRITE
Database Characterset for PDB$SEED is WE8MSWIN1252
Opening pdb PDB$SEED (2) with no Resource Manager plan active
Pluggable database PDB$SEED opened read write
Completed: alter pluggable database pdb$seed OPEN READ WRITE
alter pluggable database pdb$seed close immediate instances=all
ALTER SYSTEM: Flushing buffer cache inst=0 container=2 local
Pluggable database PDB$SEED closed
Completed: alter pluggable database pdb$seed close immediate instances=all
alter pluggable database pdb$seed OPEN READ ONLY instances=all
Database Characterset for PDB$SEED is WE8MSWIN1252
Opening pdb PDB$SEED (2) with no Resource Manager plan active
Pluggable database PDB$SEED opened read only
Completed: alter pluggable database pdb$seed OPEN READ ONLY instances=all

When the pre-upgrade and post-upgrade scripts are run from DBUA you can see the following in the logs:
exec_DB_script: opened Reader and Writer
exec_DB_script: executed connect / AS SYSDBA
exec_DB_script: executed alter session set "_oracle_script"=TRUE
/
exec_DB_script: executed alter pluggable database pdb$seed close immediate instances=all
/
exec_DB_script: executed alter pluggable database pdb$seed OPEN READ WRITE
/

This is displayed because DBUA runs catcon.pl in debug mode and you can do the same by adding ‘-g’ to the catcon.pl arguments.

12cR2 in local undo

In 12.2 there is a case where you can make a change to the PDB$SEED to customize the UNDO tablespace template. Here I am changing to LOCAL UNDO:


SYS@CDB$ROOT SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
 
SYS@CDB$ROOT SQL> startup upgrade;
ORACLE instance started.
 
Total System Global Area 1107296256 bytes
Fixed Size 8791864 bytes
Variable Size 939526344 bytes
Database Buffers 150994944 bytes
Redo Buffers 7983104 bytes
Database mounted.
Database opened.
 
SYS@CDB$ROOT SQL> alter database local undo on;
Database altered.
 
SYS@CDB$ROOT SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
 
SYS@CDB$ROOT SQL> select * from database_properties where property_name like '%UNDO%';
 
PROPERTY_NAME      PROPERTY_VALUE DESCRIPTION
------------------ -------------- ------------------------------
LOCAL_UNDO_ENABLED TRUE           true if local undo is enabled

PDB$SEED is read only:

SYS@CDB$ROOT SQL> show pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB01                          READ WRITE NO

and _oracle_script is not set:

SYS@CDB$ROOT SQL> show parameter script
 
NAME TYPE VALUE
---- ---- -----
 

I get no error now and can open the seed in read-write mode:

SYS@CDB$ROOT SQL> alter pluggable database PDB$SEED open force;
Pluggable database PDB$SEED altered.
 
SYS@CDB$ROOT SQL> show pdbs
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ WRITE NO
         3 PDB01                          READ WRITE NO

Customize UNDO seed

Once you open it read write, an undo tablespace is created. If you want to customize it, you can create another one and drop the previous one. This requires changing the undo_tablespace parameter:


SYS@CDB$ROOT SQL> show parameter undo
NAME              TYPE    VALUE
----------------- ------- ------
undo_tablespace   string  UNDO_1
 
SYS@CDB$ROOT SQL> create undo tablespace UNDO;
Tablespace UNDO created.
 
SYS@CDB$ROOT SQL> alter system set undo_tablespace=UNDO;
System SET altered.
 
SYS@CDB$ROOT SQL> drop tablespace UNDO_1 including contents and datafiles;
Tablespace UNDO_1 dropped.
 
SYS@CDB$ROOT SQL> shutdown immediate
Pluggable Database closed

You can leave it like this, just close and re-open read only. If you want to keep the same undo tablespace name as before, you need to play with create and drop, and change undo_tablespace again.
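Just as a sketch, that create/drop dance could look like this (run while the seed is still open read write):

SYS@CDB$ROOT SQL> create undo tablespace UNDO_1;
SYS@CDB$ROOT SQL> alter system set undo_tablespace=UNDO_1;
SYS@CDB$ROOT SQL> drop tablespace UNDO including contents and datafiles;

Then close the seed and re-open it read only as described above.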

So what?

Don't forget that you should not modify or drop PDB$SEED. If you want a customized template for your PDB creations, then you should create your own template PDB to clone from. You can clone remotely, so this is possible in single-tenant as well. Opening the PDB$SEED read write is possible only for the exceptional case of customizing the UNDO tablespace in PDB$SEED when you move to local undo mode. And even this is not required: an UNDO tablespace will be created automatically when you open a PDB that has no undo_tablespace.
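As a side note, such a template approach could look roughly like this; all the names and the database link below are made up, and depending on the version the source PDB may have to be opened read only before cloning:

SYS@CDB$ROOT SQL> create pluggable database PDBTMPL admin user pdbadm identified by "***";
SYS@CDB$ROOT SQL> -- customize PDBTMPL (tablespaces, options, users, ...), then clone it locally:
SYS@CDB$ROOT SQL> create pluggable database PDB02 from PDBTMPL;
SYS@CDB$ROOT SQL> -- or remotely, through a database link pointing to the source CDB:
SYS@CDB$ROOT SQL> create pluggable database PDB03 from PDBTMPL@cdb1_link;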
When running pre-upgrade and post-upgrade scripts, don't worry: catcon.pl is there to help run scripts in containers and handles all of that for you.

 

Cet article Can you open PDB$SEED read write? est apparu en premier sur Blog dbi services.

Testing new PostgreSQL features before alpha/beta/rc releases

Sun, 2017-05-07 04:32

A long time ago I blogged about how you can use the PostgreSQL development snapshots to test new PostgreSQL features before the alpha/beta/rc releases are officially out. Another way to do this is to use git to get the latest sources and build PostgreSQL from there. Everything that was committed will be available for testing. Btw: a great way to stay up to date is to subscribe to the mailing list just referenced. You'll get a mail for each commit that happened, and maybe one of them catches your attention?

To start you’ll obviously need git. For distributions using yum this is just a matter of:

postgres@pgbox:/home/postgres/ [pg960final] sudo yum install git

For systems using apt use:

postgres@pgbox:/home/postgres/ [pg960final] sudo apt-get install git

Depending on how you want to configure PostgreSQL you’ll need some development packages as well. For yum based systems this is a good starting point:

postgres@pgbox:/home/postgres/ [pg960final] sudo yum install -y gcc openldap-devel python-devel readline-devel redhat-lsb bison flex perl-ExtUtils-Embed zlib-devel crypto-utils openssl-devel pam-devel libxml2-devel libxslt-devel tcl tcl-devel openssh-clients bzip2 net-tools wget screen ksh unzip

For apt based systems you might want to start with this:

postgres@pgbox:/home/postgres/ [pg960final] sudo apt-get install libldap2-dev libpython-dev libreadline-dev libssl-dev bison flex libghc-zlib-dev libcrypto++-dev libxml2-dev libxslt1-dev tcl tclcl-dev bzip2 wget screen ksh libpam0g-dev libperl-dev make unzip libpam0g-dev tcl-dev python

Not all of those packages are required, they just reflect what we usually install before building PostgreSQL from source. Of course you should adjust this and remove packages that are not required for what you plan to do.

How do you then get the latest PostgreSQL sources? Quite easy, it is documented in the PostgreSQL wiki:

postgres@pgbox:/home/postgres/ [pg960final] mkdir IwantToTest
postgres@pgbox:/home/postgres/ [pg960final] cd IwantToTest/
postgres@pgbox:/home/postgres/IwantToTest/ [pg960final] git clone git://git.postgresql.org/git/postgresql.git

The result should look similar to this:

Cloning into 'postgresql'...
remote: Counting objects: 629074, done.
remote: Compressing objects: 100% (95148/95148), done.
remote: Total 629074 (delta 534080), reused 626282 (delta 531478)
Receiving objects: 100% (629074/629074), 184.31 MiB | 26.40 MiB/s, done.
Resolving deltas: 100% (534080/534080), done.

From now on you have the complete PostgreSQL sources locally available.

postgres@pgbox:/home/postgres/IwantToTest/ [pg960final] cd postgresql/; ls
aclocal.m4  config  configure  configure.in  contrib  COPYRIGHT  doc  GNUmakefile.in  HISTORY  Makefile  README  README.git  src

Ready to test? Yes, but what? One possible way to start is asking git for what was committed recently:

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] git log
commit 0de791ed760614991e7cb8a78fddd6874ea6919d
Author: Peter Eisentraut peter_e@gmx.net
Date:   Wed May 3 21:25:01 2017 -0400

    Fix cursor_to_xml in tableforest false mode
    
    It only produced <row> elements but no wrapping <table> element.
    
    By contrast, cursor_to_xmlschema produced a schema that is now correct
    but did not previously match the XML data produced by cursor_to_xml.
    
    In passing, also fix a minor misunderstanding about moving cursors in
    the tests related to this.
    
    Reported-by: filip@jirsak.org
    Based-on-patch-by: Thomas Munro thomas.munro@enterprisedb.com
...

Usually you can find a link to the discussion in the commit message, so you can read through the history of a specific commit. Another way is to read the development documentation or the upcoming release notes once they are available.
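If the full git log output is too much, git can of course narrow it down for you; a few examples (nothing PostgreSQL specific here):

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] git log --oneline --since="1 week ago"
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] git log --oneline --grep="xml"
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] git show 0de791ed760614991e7cb8a78fddd6874ea6919d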

All you need to do then is to build PostgreSQL:

postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] ./configure
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] make all
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] sudo make install
postgres@pgbox:/home/postgres/IwantToTest/postgresql/ [pg960final] cd contrib
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] make all
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] sudo make install
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] /usr/local/pgsql/bin/initdb -D /var/tmp/test
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] /usr/local/pgsql/bin/pg_ctl -D /var/tmp/test start
postgres@pgbox:/home/postgres/IwantToTest/postgresql/contrib/ [pg960final] /usr/local/pgsql/bin/psql postgres
psql (10devel)
Type "help" for help.

pgbox/postgres MASTER (postgres@5432) # 

Happy testing …

 

Cet article Testing new PostgreSQL features before alpha/beta/rc releases est apparu en premier sur Blog dbi services.

PostgreSQL on a Pure Storage All Flash Array – Preparations

Thu, 2017-05-04 09:07

Yesterday we got a Pure Storage All Flash Array for testing. As the name implies, this is all about flash storage. What makes Pure Storage different from other vendors is that you don't just buy a storage box and then pay the usual maintenance costs; instead you pay for a storage subscription which should keep your storage up to date all the time. The promise is that all the components of the array get replaced by the then-current versions over time without forcing you to re-buy. Check the link above for more details on the available subscriptions. This is the first post and it describes the setup we did for connecting a PostgreSQL VMWare based machine to the Pure Storage box. The PostgreSQL server will be running as a virtual machine in VMWare ESX and connect over iSCSI to the storage system.

As usual we used CentOS 7 for the PostgreSQL server:

[root@pgpurestorage ~]$ cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core) 
[root@pgpurestorage ~]$ uname -a
Linux pgpurestorage.it.dbi-services.com 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

We have 4 vCPUs:

[root@pgpurestorage ~]$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
Stepping:              1
CPU MHz:               2399.583
BogoMIPS:              4799.99
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              35840K
NUMA node0 CPU(s):     0-3

… and 8GB of memory:

[root@pgpurestorage ~]$ cat /proc/meminfo | head -5
MemTotal:        7994324 kB
MemFree:         7508232 kB
MemAvailable:    7528048 kB
Buffers:            1812 kB
Cached:           233648 kB

purestorage-setup-vm

Because by default you'll get the “virtual-guest” tuned profile when you install CentOS in a virtualized environment, we created our own profile and switched to it:

root@:/home/postgres/ [] tuned-adm active
Current active profile: virtual-guest
root@:/home/postgres/ [] tuned-adm profile dbi-postgres
root@:/home/postgres/ [] tuned-adm active
Current active profile: dbi-postgres
root@:/home/postgres/ [] cat /usr/lib/tuned/dbi-postgres/tuned.conf | egrep -v "^#|^$"
[main]
summary=dbi services tuned profile for PostgreSQL servers
[cpu]
governor=performance
energy_perf_bias=performance
min_perf_pct=100
[disk]
readahead=>4096
[sysctl]
kernel.sched_min_granularity_ns = 10000000
kernel.sched_wakeup_granularity_ns = 15000000
vm.overcommit_memory=2
vm.swappiness=0
vm.dirty_ratio=2
vm.dirty_background_ratio=1
vm.nr_hugepages=1024

To gather statistics we created a cronjob:

root@:/home/postgres/ []  crontab -l
* * * * * /usr/lib64/sa/sa1 -S XALL 60 1
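The binary files written by sa1 can later be read back with sar, e.g. like this (the file name depends on the day of the month, the path is the CentOS 7 default):

root@:/home/postgres/ [] sar -u -f /var/log/sa/sa04     # CPU utilization
root@:/home/postgres/ [] sar -d -p -f /var/log/sa/sa04  # per device I/O statistics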

PostgreSQL was installed from source with what was committed to the source tree as of today with the following options:

[postgres@pgpurestorage postgresql]$ PGHOME=/u01/app/postgres/product/10/db_0
[postgres@pgpurestorage postgresql]$ SEGSIZE=2
[postgres@pgpurestorage postgresql]$ BLOCKSIZE=8
[postgres@pgpurestorage postgresql]$ WALSEGSIZE=64
[postgres@pgpurestorage postgresql]$ ./configure --prefix=${PGHOME} \
>             --exec-prefix=${PGHOME} \
>             --bindir=${PGHOME}/bin \
>             --libdir=${PGHOME}/lib \
>             --sysconfdir=${PGHOME}/etc \
>             --includedir=${PGHOME}/include \
>             --datarootdir=${PGHOME}/share \
>             --datadir=${PGHOME}/share \
>             --with-pgport=5432 \
>             --with-perl \
>             --with-python \
>             --with-tcl \
>             --with-openssl \
>             --with-pam \
>             --with-ldap \
>             --with-libxml \
>             --with-libxslt \
>             --with-segsize=${SEGSIZE} \
>             --with-blocksize=${BLOCKSIZE} \
>             --with-wal-segsize=${WALSEGSIZE}  \
>             --with-extra-version=" dbi services build"

To be able to connect to the Pure Storage box you'll need the iSCSI IQN:

root@:/home/postgres/ [] cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:185a3499ac9

Knowing the IQN (InitiatorName) we can log on to the Pure Storage console to add our host, create a volume and attach the volume to the host:

Thu May 04 11:44:10 2017
Welcome pureuser. This is Purity Version 4.8.8 on FlashArray dbipure01

http://www.purestorage.com/

pureuser@dbipure01> purehost create --iqn iqn.1994-05.com.redhat:185a3499ac9 pgpurestorage
Name           WWN  IQN                               
pgpurestorage  -    iqn.1994-05.com.redhat:185a3499ac9
pureuser@dbipure01> purevol create --size 500G volpgtest
Name       Size  Source  Created                   Serial                  
volpgtest  500G  -       2017-05-04 11:46:58 CEST  BA56B4A72DE94A4400011012
pureuser@dbipure01> purehost connect --vol volpgtest pgpurestorage
Name           Vol        LUN
pgpurestorage  volpgtest  1  

The Pure Storage system has two controllers (10.10.1.93 and 10.10.1.94) so we should be able to ping them:

root@:/home/postgres/ [] ping 10.10.1.93
PING 10.10.1.93 (10.10.1.93) 56(84) bytes of data.
64 bytes from 10.10.1.93: icmp_seq=1 ttl=63 time=2.53 ms
64 bytes from 10.10.1.93: icmp_seq=2 ttl=63 time=0.816 ms
64 bytes from 10.10.1.93: icmp_seq=3 ttl=63 time=0.831 ms
...
root@:/u02/pgdata/pgpure/ [] ping 10.10.1.94
PING 10.10.1.94 (10.10.1.94) 56(84) bytes of data.
64 bytes from 10.10.1.94: icmp_seq=1 ttl=63 time=0.980 ms
64 bytes from 10.10.1.94: icmp_seq=2 ttl=63 time=0.848 ms
...

Connectivity is fine, so a discovery should work as well:

root@:/home/postgres/ [] iscsiadm -m discovery -t st -p 10.10.1.93
10.10.1.93:3260,1 iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21
10.10.1.94:3260,1 iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21
root@:/home/postgres/ [] iscsiadm -m node
10.10.1.93:3260,1 iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21
10.10.1.94:3260,1 iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21

Fine as well, so let's log in:

root@:/home/postgres/ [] iscsiadm -m node --login 
Logging in to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21, portal: 10.10.1.93,3260] (multiple)
Logging in to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21, portal: 10.10.1.94,3260] (multiple)
Login to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21, portal: 10.10.1.93,3260] successful.
Login to [iface: default, target: iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21, portal: 10.10.1.94,3260] successful.
root@:/home/postgres/ [] iscsiadm -m session -o show 
tcp: [13] 10.10.1.93:3260,1 iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21 (non-flash)
tcp: [14] 10.10.1.94:3260,1 iqn.2010-06.com.purestorage:flasharray.516cdd52f827bd21 (non-flash)

The new device is available (sdb) from now on:

root@:/home/postgres/ [] ls -la /dev/sd*
brw-rw----. 1 root disk 8,  0 May  4 13:23 /dev/sda
brw-rw----. 1 root disk 8,  1 May  4 13:23 /dev/sda1
brw-rw----. 1 root disk 8,  2 May  4 13:23 /dev/sda2
brw-rw----. 1 root disk 8, 16 May  4 13:23 /dev/sdb
brw-rw----. 1 root disk 8, 32 May  4 13:23 /dev/sdc

LVM setup:

root@:/home/postgres/ [] pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
root@:/home/postgres/ [] vgcreate vgpure /dev/sdb
  Volume group "vgpure" successfully created
root@:/home/postgres/ [] lvcreate -L 450G -n lvpure vgpure
  Logical volume "lvpure" created.
root@:/home/postgres/ [] mkdir -p /u02/pgdata
root@:/home/postgres/ [] mkfs.xfs /dev/mapper/vgpure-lvpure 
meta-data=/dev/mapper/vgpure-lvpure isize=512    agcount=4, agsize=29491200 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=117964800, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=57600, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

root@:/home/postgres/ [] echo "/dev/mapper/vgpure-lvpure  /u02/pgdata  xfs defaults,noatime 0 0" >> /etc/fstab
root@:/home/postgres/ [] mount -a
root@:/home/postgres/ [] df -h
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/cl_pgpurestorage-root   26G  2.0G   25G   8% /
devtmpfs                           3.9G     0  3.9G   0% /dev
tmpfs                              3.9G     0  3.9G   0% /dev/shm
tmpfs                              3.9G  8.5M  3.9G   1% /run
tmpfs                              3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                         1014M  183M  832M  19% /boot
tmpfs                              781M     0  781M   0% /run/user/1000
/dev/mapper/vgpure-lvpure          450G   33M  450G   1% /u02/pgdata
root@:/home/postgres/ [] chown postgres:postgres /u02/pgdata

Initialized the PostgreSQL cluster:

postgres@pgpurestorage:/home/postgres/ [pg10] initdb -D /u02/pgdata/
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /u02/pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /u02/pgdata/ -l logfile start

What we changed from the default configuration is:

postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] cat postgresql.auto.conf 
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
listen_addresses = '*'
logging_collector = 'on'
log_truncate_on_rotation = 'on'
log_filename = 'postgresql-%a.log'
log_rotation_age = '8d'
log_line_prefix = '%m - %l - %p - %h - %u@%d '
log_directory = 'pg_log'
log_min_messages = 'WARNING'
log_autovacuum_min_duration = '360s'
log_min_error_statement = 'error'
log_min_duration_statement = '5min'
log_checkpoints = 'on'
log_statement = 'ddl'
log_lock_waits = 'on'
log_temp_files = '1'
log_timezone = 'Europe/Zurich'
client_min_messages = 'WARNING'
wal_level = 'replica'
hot_standby_feedback = 'on'
max_wal_senders = '10'
cluster_name = 'pgpure'
max_replication_slots = '10'
shared_buffers=2048MB
work_mem=128MB
effective_cache_size=6144MB
maintenance_work_mem=512MB
max_wal_size=10GB

Calculating the minimum required amount of huge pages for the PostgreSQL instance:

postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] head -1 $PGDATA/postmaster.pid
3662
postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] grep ^VmPeak /proc/3662//status
VmPeak:	 2415832 kB
postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] echo "2415832/2048" | bc
1179
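The same calculation can be done in one line if you prefer (just a convenience, the arithmetic is identical):

postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] echo $(( $(grep ^VmPeak /proc/$(head -1 $PGDATA/postmaster.pid)/status | awk '{print $2}') / 2048 ))
1179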

Set it slightly higher:

postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] sudo sed -i 's/vm.nr_hugepages=1024/vm.nr_hugepages=1200/g' /usr/lib/tuned/dbi-postgres/tuned.conf
postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] sudo tuned-adm profile dbi-postgres
postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] cat /proc/meminfo | grep Huge
AnonHugePages:      6144 kB
HugePages_Total:    1200
HugePages_Free:     1200
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

To disable transparent huge pages we created a file called “disable-thp.service” (from here):

postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] cat /etc/systemd/system/disable-thp.service
# Disable transparent huge pages
# put this file under:
#   /etc/systemd/system/disable-thp.service
# Then:
#   sudo systemctl daemon-reload
#   sudo systemctl start disable-thp
#   sudo systemctl enable disable-thp
[Unit]
Description=Disable Transparent Huge Pages (THP)

[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"

[Install]
WantedBy=multi-user.target

Then reload the systemd daemon and start and enable the service:

postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] sudo systemctl daemon-reload
postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] sudo systemctl start disable-thp
postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] sudo systemctl enable disable-thp

To verify:

postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
postgres@pgpurestorage:/u02/pgdata/pgpure/ [pgpure] cat /sys/kernel/mm/transparent_hugepage/defrag 
always madvise [never]

To be sure that PostgreSQL really uses the huge pages, set huge_pages to ‘on’, as this will prevent PostgreSQL from starting when the required pages cannot be allocated:

pgpurestorage/postgres MASTER (postgres@5432) # alter system set huge_pages='on';
ALTER SYSTEM
Time: 2.417 ms

… and then restart the instance. When all is fine PostgreSQL will come up.

Finally, to close this setup post, here are some screenshots of the Pure Storage Management Web Console. The first one shows the “Storage” tab where you can see that the volume “volpgtest” is mapped to my host “pgpurestorage”.
purestorage-setup-storage-1

The name you give the server is not important. The important information is the mapping of the “Host Port” which you can see here (this is the iSCSI IQN):

purestorage-setup-storage-2

Once your server is connected you can see it in the connection map of the server in the console:
purestorage-connection-map

System health:
purestorage-system-health

Last, but not least, here is the dashboard:
purestorage-dashboard

Not much traffic right now but we’ll be changing that in the next post.

 

Cet article PostgreSQL on a Pure Storage All Flash Array – Preparations est apparu en premier sur Blog dbi services.

12c nologging and Data Guard

Wed, 2017-05-03 14:23

The title sounds weird because Data Guard synchronisation is based on the redo stream, so it makes no sense to do nologging operations on the primary. And this is the reason why we set FORCE LOGGING on a Data Guard configuration. However, to lower the downtime of a migration done with Data Pump, you may want to import with minimal logging and then re-synchronize the standby. This post is about the re-synchronisation in 12.1.

Nologging Data Pump

When you want to lower the downtime for a migration, you can disable force logging (alter database no force logging), and run impdp with the following: transform=disable_archive_logging:y
Don’t forget to re-enable force_logging at the end and to re-synchronize the standby.
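Just as an illustration, the whole sequence could look like this (the directory, dump file and schema names are made up for the example):

SQL> alter database no force logging;

$ impdp system directory=DATA_PUMP_DIR dumpfile=mig_%U.dmp logfile=imp_mig.log \
    schemas=APP transform=disable_archive_logging:y

SQL> alter database force logging;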

nonlogged (aka unrecoverable)

So, you have nonlogged blocks; we also call them unrecoverable because they cannot be recovered with the redo stream. If you are in 12.2 then everything is easy with recover database nonlogged block; I explained that in a previous post: https://blog.dbi-services.com/12cr2-recover-nonlogged-blocks-after-nologging-in-data-guard/

If you are in 12.1, it is only half easy. You can see where you have nonlogged blocks:
RMAN> select file#,reason,count(*) from v$nonlogged_block group by file#,reason;
 
     FILE# REASON    COUNT(*)
---------- ------- ----------
         5 UNKNOWN        158
         6 UNKNOWN        159
         7 UNKNOWN        336
         8 UNKNOWN         94
         9 UNKNOWN         16
        10 UNKNOWN         14

and this is the right way to query them. If you use RMAN ‘report unrecoverable’ it will not display the datafiles that had nologging operations on the primary.

In 12.1 you can RESTORE FROM SERVICE to recover from the primary rather than from a backup. It is straightforward. I’m just writing this blog post in case you see the following when you try to do this because the message can be misinterpreted:


RMAN> restore database from service 'MYDB_SITE1_dgmgrl';
 
Starting restore at 03-MAY-2017 13:22:12
using channel ORA_DISK_1
 
skipping datafile 1; already restored to SCN 3849354
skipping datafile 2; already restored to SCN 3849356
skipping datafile 3; already restored to SCN 3849358
skipping datafile 4; already restored to SCN 3849360
skipping datafile 5; already restored to SCN 3849365
skipping datafile 6; already restored to SCN 3849372
skipping datafile 7; already restored to SCN 3849382
skipping datafile 8; already restored to SCN 3849389
skipping datafile 9; already restored to SCN 3849395
skipping datafile 10; already restored to SCN 3849398
restore not done; all files read only, offline, or already restored
Finished restore at 03-MAY-2017 13:22:12

RMAN is clever enough: the data files are ok according to their headers, so it skipped the restore. But you know that they are not ok, because some blocks are marked as corrupt due to nologging operations. Then what can you do? There is a FORCE option in the restore command. But you probably don't need it. If you get the previous message, it means that the datafiles are synchronized, which means that the APPLY is running. And, anyway, in order to restore you need to stop the APPLY.


DGMGRL> edit database orclb set state=apply-off;

Of course, once you have stopped the apply, you can run your RESTORE DATABASE FORCE. But you probably don't need it: now the datafiles are stale and RMAN will not skip them even without the FORCE keyword.


RMAN> restore database from service 'MYDB_SITE1_dgmgrl';
 
Starting restore at 03-MAY-2017 13:22:37
using channel ORA_DISK_1
 
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service MYDB_SITE1_dgmgrl
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /media/raid-db/MYDB/system01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service MYDB_SITE1_dgmgrl
...
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 03-MAY-2017 13:25:30
RMAN> exit

Don’t forget to re-enable the Data Guard Apply at the end.
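Which is simply the counterpart of the command used above to stop it:

DGMGRL> edit database orclb set state=apply-on;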

So what?

When you see all datafiles skipped, that probably means that you didn't stop the APPLY. With the APPLY stopped (and you probably stop it before the import, as you plan to restore the standby later), you probably don't need the FORCE option. However, I would always recommend using FORCE in this case, because without it RMAN will skip files without looking at the nonlogged blocks. Imagine that you put a tablespace in read-only after the nonlogged import but before stopping the apply: this one would be skipped.

 

Cet article 12c nologging and Data Guard est apparu en premier sur Blog dbi services.

Oracle OGG JAGENT 12.2 installation for GoldenGate monitoring using the Cloud Control plugin

Wed, 2017-05-03 06:34

This post is mainly for self-documentation, as I have to search my old notes every time I want to do this (I am faster searching our blog :) ). If you want to monitor GoldenGate with Cloud Control using the GoldenGate plugin you'll have to install the JAGENT on the host GoldenGate is running on, and the documentation is not quite clear on what exactly you have to do (at least when you want to use the silent installation). This is what you need to do for 12.2:

Download “Oracle GoldenGate Monitor 12.2.1.2.0” from here. You need to have Java 8 installed (the Oracle version, not OpenJDK) on the server you want to install the JAGENT on. You can download it from here.

Once you have Java 8 installed create a response file like this:

oracle@oelogg1:/var/tmp/ [rdbms12102] cat oggmon.rsp 
#DO NOT CHANGE THIS.
Response File Version=1.0.0.0.0
[GENERIC]
#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=/u01/app/ogg/product/agent/12.2.1.2.0
INSTALL_TYPE=GoldenGate Monitor Agent Installation

Then you are ready to install:

oracle@oelogg1:/var/tmp/ [rdbms12102] /usr/java/jdk1.8.0_131/bin/java -jar fmw_12.2.1.2.0_ogg.jar -silent -responseFile /var/tmp/oggmon.rsp  

The result when everything went fine:

Launcher log file is /tmp/OraInstall2017-05-03_11-00-32AM/launcher2017-05-03_11-00-32AM.log.
Extracting the installer . . . . Done
Checking if CPU speed is above 300 MHz.   Actual 2793.552 MHz    Passed
Checking swap space: must be greater than 512 MB.   Actual 3967 MB    Passed
Checking if this platform requires a 64-bit JVM.   Actual 64    Passed (64-bit not required)
Checking temp space: must be greater than 300 MB.   Actual 8719 MB    Passed

Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2017-05-03_11-00-32AM
Log: /tmp/OraInstall2017-05-03_11-00-32AM/install2017-05-03_11-00-32AM.log
Copyright (c) 2016, Oracle and/or its affiliates. All rights reserved.
Reading response file..
Skipping Software Updates
Starting check : CertifiedVersions
Expected result: One of oracle-6,oracle-7,redhat-7,redhat-6,SuSE-11,SuSE-12
Actual Result: oracle-7.3
Check complete. The overall result of this check is: Passed
CertifiedVersions Check: Success.

Starting check : CheckJDKVersion
Expected result: 1.8.0_101
Actual Result: 1.8.0_131
Check complete. The overall result of this check is: Passed
CheckJDKVersion Check: Success.

Validations are enabled for this session.
Verifying data
Copying Files
Percent Complete : 10
Percent Complete : 20
Percent Complete : 30
Percent Complete : 40
Percent Complete : 50
Percent Complete : 60
Percent Complete : 70
Percent Complete : 80
Percent Complete : 90
Percent Complete : 100

The installation of Oracle Fusion Middleware 12c GoldenGate Monitor & Veridata 12.2.1.2.0 completed successfully.
Logs successfully copied to /u01/app/oraInventory/logs.

Once installed you can proceed by creating the Agent instance:

oracle@oelogg1:/u01/app/ogg/product/agent/12.2.1.2.0/oggmon/ogg_agent/ [rdbms12102] pwd
/u01/app/ogg/product/agent/12.2.1.2.0/oggmon/ogg_agent
oracle@oelogg1:/u01/app/ogg/product/agent/12.2.1.2.0/oggmon/ogg_agent/ [rdbms12102] ./createMonitorAgentInstance.sh
Please enter absolute path of Oracle GoldenGate home directory : /u01/app/ogg/product/12.2.0.1.1/
Please enter absolute path of OGG Agent instance : /u01/app/ogg/product/12.2.0.1.1/agent_inst 
Please enter unique name to replace timestamp in startMonitorAgent script (startMonitorAgentInstance_20170503113738.sh) : 
Successfully created OGG Agent instance.

Create the wallet for the Agent:

oracle@oelogg1:/u01/app/ogg/product/12.2.0.1.1/agent_inst/bin/ [rdbms12102] pwd
/u01/app/ogg/product/12.2.0.1.1/agent_inst/bin
oracle@oelogg1:/u01/app/ogg/product/12.2.0.1.1/agent_inst/bin/ [rdbms12102] export JAVA_HOME=/usr/java/jdk1.8.0_131
oracle@oelogg1:/u01/app/ogg/product/12.2.0.1.1/agent_inst/bin/ [rdbms12102] ./pw_agent_util.sh -jagentonly
Please create a password for Java Agent: 
Please confirm password for Java Agent: 
May 03, 2017 11:46:06 AM oracle.security.jps.JpsStartup start
INFO: Jps initializing.
May 03, 2017 11:46:06 AM oracle.security.jps.JpsStartup start
INFO: Jps started.
Wallet is created successfully.

Adjust the Agent configuration file for your environment:

oracle@oelogg1:/u01/app/ogg/product/12.2.0.1.1/agent_inst/cfg/ [rdbms12102] pwd
/u01/app/ogg/product/12.2.0.1.1/agent_inst/cfg
oracle@oelogg1:/u01/app/ogg/product/12.2.0.1.1/agent_inst/cfg/ [rdbms12102] cat Config.properties | egrep -v "^#|^$"
jagent.host=localhost
jagent.jmx.port=5555
interval.regular=60
interval.quick=30
monitor.host=localhost
monitor.jmx.port=5502
monitor.jmx.username=oggmsjmxusr
jagent.username=oggmajmxusr
reg.retry.interval=10
instance.query.initial.interval=5
incremental.registration.quiet.interval=5
maximum.message.retrieval=500
jagent.rmi.port=5559
agent.type.enabled=OEM
status.polling.interval=5
message.polling.interval=5
reg.retry.times=-1
jagent.backward.compatibility=false
jagent.ssl=false
jagent.keystore.file=jagentKeyStore
jagent.truststore.file=jagentKeyStore
jagent.restful.ws.timeout=15000
jagent.ggsci.timeout=30

Make sure you change “agent.type.enabled” to “OEM”.

Enable monitoring in the GLOBALS file:

oracle@oelogg1:/var/tmp/ [OGG122] ggi

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Dec 12 2015 02:56:48
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (oelogg1) 1> view params ./GLOBALS

ENABLEMONITORING

Restart ggsci:

oracle@oelogg1:/var/tmp/ [OGG122] ggi

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Dec 12 2015 02:56:48
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.



GGSCI (oelogg1) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
JAGENT      STOPPED                                           


GGSCI (oelogg1) 2> start jagent

Sending START request to MANAGER ...
GGCMD JAGENT starting


GGSCI (oelogg1) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
JAGENT      RUNNING                              
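Optionally you can verify from the shell that the ports configured in Config.properties (5555 for JMX and 5559 for RMI in the configuration above) are actually listening, for example with:

oracle@oelogg1:/var/tmp/ [OGG122] ss -lntp | grep -E ':5555|:5559'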

… and you’re done. Hope this helps.

 

Cet article Oracle OGG JAGENT 12.2 installation for GoldenGate monitoring using the Cloud Control plugin est apparu en premier sur Blog dbi services.
