Feed aggregator

DBCA problem installing on ORACLE LINUX 7.3 with ORACLE 12cR2

Tom Kyte - Fri, 2017-03-10 02:26
Hi all! Oracle 12cR2 for Linux has now been released. I want to test its new features, so I downloaded Oracle 12cR2 for Linux to install on Oracle Linux 7.3, and an error appeared during DBCA. The log is: <code>cat /u01/app/oracle/cfgtoollogs/dbca/orcl/orcl.log [ 2017-...
Categories: DBA Blogs

Archive log

Tom Kyte - Fri, 2017-03-10 02:26
Why do the current and oldest online redo log sequence numbers differ by 2?
Categories: DBA Blogs

sys_context returning session id in RAC

Tom Kyte - Fri, 2017-03-10 02:26
Hi, in a RAC Oracle environment where, let's say, multiple instances with the same session IDs are running, what is the best way of getting the session ID from gv$session without affecting performance? What would sys_context('userenv', 'sessionid') re...
Categories: DBA Blogs

How to exclude mviews that are not being updated in the script?

Tom Kyte - Fri, 2017-03-10 02:26
Hello, I have a scheduled job on Oracle GRID to send alerts when any mviews are not updated. But I have three mviews that are not being updated and I cannot change that for now. So I need to change my script to exclude these three mviews (only thos...
Categories: DBA Blogs

Updating columns in tables having millions of records

Tom Kyte - Fri, 2017-03-10 02:26
Hi, I have gone through your forums on how to update a table with millions of records. Approach 1 - create a temporary table, make the necessary changes, drop the original table, and rename the temporary table to the original table. I have not gone...
Categories: DBA Blogs

RMAN Full Hot or Cold Backup - which is faster?

Tom Kyte - Fri, 2017-03-10 02:26
Which RMAN backup is faster, full hot or cold?
Categories: DBA Blogs

Run a Spring Cloud Task from Pivotal Cloud Foundry using Cloud Foundry Tasks

Pas Apicella - Fri, 2017-03-10 02:26
Recently we announced Spring Cloud Task under the umbrella of Spring Cloud through the following blog entry.  In the post below I am going to show you how you would create a Cloud Foundry Task that can invoke this Spring Cloud Task itself.

Spring Cloud Task allows a user to develop and run short-lived microservices using Spring Cloud, and to run them locally, in the cloud, or even on Spring Cloud Data Flow. In this example we will run one in the cloud using Pivotal Cloud Foundry (the PWS instance at run.pivotal.io). For more information on this, follow the link below.

https://cloud.spring.io/spring-cloud-task/

For more information on Cloud Foundry Tasks follow the link below

https://docs.cloudfoundry.org/devguide/using-tasks.html

Steps

Note: This demo assumes you are already logged into PCF. You can confirm that using a command as follows

pasapicella@pas-macbook:~/temp$ cf target
API endpoint:   https://api.run.pivotal.io
API version:    2.75.0
User:           papicella@pivotal.io
Org:            apples-pivotal-org
Space:          development

Also ensure you are using the correct version of the CF CLI, which at the time of this blog was as follows; you will need at least that version.

pasapicella@pas-macbook:~/temp$ cf --version
cf version 6.25.0+787326d95.2017-02-28

You will also need an instance of Pivotal Cloud Foundry that supports Tasks within the Applications Manager UI, which Pivotal Web Services (PWS) does.

1. Clone the simple Spring Cloud Task as follows

$ git clone https://github.com/papicella/SpringCloudTaskTodaysDate.git

pasapicella@pas-macbook:~/temp$ git clone https://github.com/papicella/SpringCloudTaskTodaysDate.git
Cloning into 'SpringCloudTaskTodaysDate'...
remote: Counting objects: 19, done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 19 (delta 0), reused 19 (delta 0), pack-reused 0
Unpacking objects: 100% (19/19), done.

2. Change into SpringCloudTaskTodaysDate directory

3. If you look at the class "pas.au.pivotal.pa.sct.demo.SpringCloudTaskTodaysDateApplication" you will see it's just a Spring Boot application that has an annotation "@EnableTask". As long as Spring Cloud Task is on the classpath any Spring Boot application with @EnableTask will record the start and finish of the boot application.

4. Package the application using "mvn package"

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ mvn package
[INFO] Scanning for projects...
Downloading: https://repo.spring.io/snapshot/org/springframework/cloud/spring-cloud-task-dependencies/1.2.0.BUILD-SNAPSHOT/maven-metadata.xml
Downloaded: https://repo.spring.io/snapshot/org/springframework/cloud/spring-cloud-task-dependencies/1.2.0.BUILD-SNAPSHOT/maven-metadata.xml (809 B at 0.6 KB/sec)
[INFO]

..

[INFO] Building jar: /Users/pasapicella/temp/SpringCloudTaskTodaysDate/target/springcloudtasktodaysdate-0.0.1-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:1.5.2.RELEASE:repackage (default) @ springcloudtasktodaysdate ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10.621 s
[INFO] Finished at: 2017-03-10T18:51:15+11:00
[INFO] Final Memory: 29M/199M
[INFO] ------------------------------------------------------------------------

5.  Push the application as shown below

$ cf push springcloudtask-date --no-route --health-check-type none -p ./target/springcloudtasktodaysdate-0.0.1-SNAPSHOT.jar -m 512m

** Output **

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ cf push springcloudtask-date --no-route --health-check-type none -p ./target/springcloudtasktodaysdate-0.0.1-SNAPSHOT.jar -m 512m

Creating app springcloud-task-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

App springcloud-task-date is a worker, skipping route creation
Uploading springcloud-task-date...
Uploading app files from: /var/folders/c3/27vscm613fjb6g8f5jmc2x_w0000gp/T/unzipped-app069139431
Uploading 239.1K, 89 files

...

1 of 1 instances running

App started


OK

App springcloudtask-date was started using this command `CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100% -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher`

Showing health and status for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls:
last uploaded: Fri Mar 10 07:57:17 UTC 2017
stack: cflinuxfs2
buildpack: container-certificate-trust-store=2.0.0_RELEASE java-buildpack=v3.14-offline-https://github.com/cloudfoundry/java-buildpack.git#d5d58c6 java-main open-jdk-like-jre=1.8.0_121 open-jdk-like-memory-calculator=2.0.2_RELEASE spring-auto-reconfiguration=1.10...

     state      since                    cpu    memory         disk         details
#0   starting   2017-03-10 06:58:43 PM   0.0%   936K of 512M   1.3M of 1G


6. Stop the application, as we only want it to run as a CF Task when we invoke it.

$ cf stop springcloudtask-date

** Output **

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ cf stop springcloudtask-date
Stopping app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

7. In a separate terminal window, let's tail the logs from the application as follows. Don't worry that there is no output yet; the application has not yet been invoked through a task.

$ cf logs springcloudtask-date

** Output **

pasapicella@pas-macbook:~$ cf logs springcloudtask-date
Connected, tailing logs for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...


8. Now log into the PWS Apps Manager console and navigate to your application's settings page. On this page you will see the run command for the Spring Boot application.


9. To invoke the task, run a command as follows using the "invocation command" from step #8 above.

Format: cf run-task {app-name} {invocation command}

$ cf run-task springcloudtask-date 'INVOCATION COMMAND from step #8 above'

** Output **

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ cf run-task springcloudtask-date 'CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100% -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher'
Creating task for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

Task has been submitted successfully for execution.
Task name:   371bb9b1
Task id:     1

10. Return to PWS Applications Manager and click on the "Tasks" tab to verify the task was successful.


11. Return to the terminal window where we were tailing the logs to verify the task was run

pasapicella@pas-macbook:~$ cf logs springcloudtask-date
Connected, tailing logs for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...

2017-03-10T19:15:29.55+1100 [APP/TASK/371bb9b1/0]OUT Creating container
2017-03-10T19:15:29.89+1100 [APP/TASK/371bb9b1/0]OUT Successfully created container
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT   .   ____          _            __ _ _
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT   '  |____| .__|_| |_|_| |_\__, | / / / /
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT  :: Spring Boot ::        (v1.5.2.RELEASE)
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
2017-03-10T19:15:34.45+1100 [APP/TASK/371bb9b1/0]OUT  =========|_|==============|___/=/_/_/_/
2017-03-10T19:15:34.71+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:34.706  INFO 7 --- [           main] pertySourceApplicationContextInitializer : Adding 'cloud' PropertySource to ApplicationContext
2017-03-10T19:15:34.85+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:34.853  INFO 7 --- [           main] nfigurationApplicationContextInitializer : Adding cloud service auto-reconfiguration to ApplicationContext
2017-03-10T19:15:34.89+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:34.891  INFO 7 --- [           main] s.d.SpringCloudTaskTodaysDateApplication : The following profiles are active: cloud
2017-03-10T19:15:34.89+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:34.890  INFO 7 --- [           main] s.d.SpringCloudTaskTodaysDateApplication : Starting SpringCloudTaskTodaysDateApplication on b00b045e-dea4-4e66-8298-19dd71edb9c8 with PID 7 (/home/vcap/app/BOOT-INF/classes started by vcap in /home/vcap/app)
2017-03-10T19:15:35.00+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:35.009  INFO 7 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@7a07c5b4: startup date [Fri Mar 10 08:15:35 UTC 2017]; root of context hierarchy
2017-03-10T19:15:35.91+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:35.912  INFO 7 --- [           main] urceCloudServiceBeanFactoryPostProcessor : Auto-reconfiguring beans of type javax.sql.DataSource
2017-03-10T19:15:35.91+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:35.916  INFO 7 --- [           main] urceCloudServiceBeanFactoryPostProcessor : No beans of type javax.sql.DataSource found. Skipping auto-reconfiguration.
2017-03-10T19:15:36.26+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.259 DEBUG 7 --- [           main] o.s.c.t.c.SimpleTaskConfiguration        : Using org.springframework.cloud.task.configuration.DefaultTaskConfigurer TaskConfigurer
2017-03-10T19:15:36.74+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.748  INFO 7 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
2017-03-10T19:15:36.75+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.758 DEBUG 7 --- [           main] o.s.c.t.r.support.SimpleTaskRepository   : Creating: TaskExecution{executionId=0, parentExecutionId=null, exitCode=null, taskName='DateSpringCloudTask:cloud:', startTime=Fri Mar 10 08:15:36 UTC 2017, endTime=null, exitMessage='null', externalExecutionId='null', errorMessage='null', arguments=[]}
2017-03-10T19:15:36.77+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.776 DEBUG 7 --- [           main] o.s.c.t.r.support.SimpleTaskRepository   : Updating: TaskExecution with executionId=0 with the following {exitCode=0, endTime=Fri Mar 10 08:15:36 UTC 2017, exitMessage='null', errorMessage='null'}
2017-03-10T19:15:36.75+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.757  INFO 7 --- [           main] o.s.c.support.DefaultLifecycleProcessor  : Starting beans in phase 0
2017-03-10T19:15:36.77+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.775  INFO 7 --- [           main] s.d.SpringCloudTaskTodaysDateApplication : Executed at : 3/10/17 8:15 AM
2017-03-10T19:15:36.77+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.777  INFO 7 --- [           main] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext@7a07c5b4: startup date [Fri Mar 10 08:15:35 UTC 2017]; root of context hierarchy
2017-03-10T19:15:36.77+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.779  INFO 7 --- [           main] o.s.c.support.DefaultLifecycleProcessor  : Stopping beans in phase 0
2017-03-10T19:15:36.78+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.782  INFO 7 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Unregistering JMX-exposed beans on shutdown
2017-03-10T19:15:36.78+1100 [APP/TASK/371bb9b1/0]OUT 2017-03-10 08:15:36.788  INFO 7 --- [           main] s.d.SpringCloudTaskTodaysDateApplication : Started SpringCloudTaskTodaysDateApplication in 3.205 seconds (JVM running for 3.985)
2017-03-10T19:15:36.83+1100 [APP/TASK/371bb9b1/0]OUT Exit status 0
2017-03-10T19:15:36.86+1100 [APP/TASK/371bb9b1/0]OUT Destroying container
2017-03-10T19:15:37.79+1100 [APP/TASK/371bb9b1/0]OUT Successfully destroyed container

12. Finally, you can verify task runs using a command as follows

$ cf tasks springcloudtask-date

** Output **

pasapicella@pas-macbook:~/temp/SpringCloudTaskTodaysDate$ cf tasks springcloudtask-date
Getting tasks for app springcloudtask-date in org apples-pivotal-org / space development as papicella@pivotal.io...
OK

id   name       state       start time                      command
1    371bb9b1   SUCCEEDED   Fri, 10 Mar 2017 08:15:28 UTC   CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100% -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher


Categories: Fusion Middleware

Reminder: Upgrade Oracle Internet Directory 11.1.1.7 to 11.1.1.9

Steven Chan - Fri, 2017-03-10 02:05

Oracle Fusion Middleware products get new Patch Set updates.  When a new Patch Set has been released, a 12 month Grace Period for the previous Patch Set begins.  Once that Grace Period ends, no new patches for the previous Patch Set will be released.

For more details, see:

Oracle Internet Directory is part of a suite of products called "Oracle Identity and Access Management" (IAM).  OID 11.1.1.7 was released in April 2013.  OID 11.1.1.9 was released in May 2015, which means that the Grace Period for OID 11.1.1.7 ended in May 2016.

All E-Business Suite users running OID 11.1.1.7 should upgrade to OID 11.1.1.9 to remain under Error Correction Support. OID 11.1.1.x is covered by Premier Support to December 2018, and covered by Extended Support to December 2021.

Related Articles

Categories: APPS Blogs

Links for 2017-03-09 [del.icio.us]

Categories: DBA Blogs

Oracle PaaS EMEA Partner Community Forum 2017

The Oracle Partner Community Forum is waiting for you! Don't miss the chance to learn about the latest, comprehensive Oracle Cloud Platform updates with hands-on training from Oracle experts. The Forum will...

We share our skills to maximize your revenue!
Categories: DBA Blogs


12.2 Index Advanced Compression “High” Part IV (The Width of a Circle)

Richard Foote - Thu, 2017-03-09 23:18
A quick post (for me) with a long weekend beckoning… In Part I, Part II and Part III of looking at the new Index Advanced Compression level of “High”, we discussed how it can significantly decrease the size of your indexes in a manner not previously possible. This can result in significant reductions of index storage and […]
Categories: DBA Blogs

Python cx_Oracle 5.3 for Oracle Database Released

Christopher Jones - Thu, 2017-03-09 22:22

Today we are pleased to announce the release of cx_Oracle 5.3, the extremely popular Python interface for Oracle Database. Binary and source bundles can be installed from PyPI, as normal.

cx_Oracle 5.3 works with Python version 2.7, and with versions 3.4 and higher. You can use cx_Oracle with Oracle 11.2, 12.1 and 12.2 client libraries, allowing connection to multiple Oracle Database versions. Oracle's standard client-server version interoperability allows connection to both older and newer databases. For example, Oracle 11.2 client libraries can connect to Oracle Database 10.2 or later.
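For readers new to the driver, a minimal session looks something like the sketch below. The credentials, DSN, and helper name are placeholders, not from this post:

```python
# Minimal cx_Oracle sketch: connect, run a query, fetch the rows.
def fetch_banner(conn):
    # v$version is a standard Oracle view listing component versions.
    cur = conn.cursor()
    cur.execute("select banner from v$version")
    return [row[0] for row in cur]

if __name__ == "__main__":
    import cx_Oracle  # pip install cx_Oracle
    # Placeholder credentials and easy-connect DSN -- substitute your own.
    conn = cx_Oracle.connect("scott", "tiger", "dbhost/orclpdb")
    for line in fetch_banner(conn):
        print(line)
```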

This release marks the move of the source code repository and home page to GitHub. User documentation remains on readthedocs. The cx_Oracle mail list also continues to operate, but you may find it easier to transition to GitHub for questions and issues.

The New cx_Oracle 5.3

Ever since Anthony Tuininga joined Oracle, he has been hard at work adding improvements to cx_Oracle. (He's also been laying the groundwork for the next big release - more on that later in this post). I think you'll be impressed with cx_Oracle 5.3. At last year's Oracle OpenWorld, Anthony talked about some of the changes and got great positive feedback. Check out his presentation here.

This is a big update. While a few of the changes are specific to Oracle Database 12c features (see release notes), you will appreciate all the things that will make your applications better even if you have older databases. And you are upgrading to Oracle Database 12.2, right? Don't forget you can get immediate access to Oracle Database 12.2 using the Oracle Database Cloud.

Key New Features
  • Support for Python 3.6. Note Python versions earlier than 2.6 are no longer supported. (You can still get cx_Oracle 5.2 from PyPI, if you really need Python 2.6 support).

  • Support for Oracle client 12.2. Note Oracle clients earlier than 11.2 are no longer supported.

  • Direct binding to SQL objects. One great use case is binding Python objects to Oracle Spatial SDO objects. See InsertGeometry.py

  • PL/SQL records and collections can be bound. Interacting with PL/SQL stored logic has never been easier.

  • Support for scrollable cursors. Go back and forth through your query results. See ScrollableCursors.py

  • Support for Advanced Queuing. Use database notifications to build micro-service applications. See AdvancedQueuing.py

  • Fetch PL/SQL implicit results. Easily return query results from PL/SQL. See ImplicitResults.py

  • Transaction Guard support. Verify transaction status after unplanned server outages. See TransactionGuard.py

  • Support for Edition Based Redefinition. Easily switch applications to use updated PL/SQL logic. See Editioning.py

  • Support for Statement Caching. Reduce costs of frequently executed statements. Now available with all connections.

  • Support for setting application context during the creation of a connection. Make application metadata more accessible to the database, including in LOGON triggers. See AppContext.py
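To give a feel for one of the features above, here is a rough sketch of a scrollable cursor, in the spirit of the ScrollableCursors.py sample. It assumes an open cx_Oracle connection, and the table and column names are illustrative only:

```python
# Sketch: scrollable cursors let you move back and forth in a result set.
def last_then_first(conn):
    cur = conn.cursor(scrollable=True)
    cur.execute("select employee_id from employees order by employee_id")
    cur.scroll(mode="last")    # jump to the final row of the result set
    last = cur.fetchone()
    cur.scroll(mode="first")   # ...and back to the first row
    first = cur.fetchone()
    return first, last
```

Without a scrollable cursor you would have to re-execute the query or buffer all rows client-side to revisit earlier results.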

Other New Features
  • Continuous Query Notification use has been simplified. See DatabaseChangeNotification.py and QueryChangeNotification.py

  • Support for pickling/unpickling error objects

  • Support for binding native integers

  • Support setting the internal and external name for use in distributed transactions

  • Set the maximum lifetime of connections in a session pool

  • Larger row counts (greater than 2^32)

  • Increased default query array size to 100
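Two of the smaller items above can be sketched together: the larger default query array size and the session-pool lifetime cap. The maxLifetimeSession keyword name is my reading of the 5.3 release notes, and the credentials are placeholders; verify both against the documentation:

```python
# Sketch: create a session pool whose connections are recycled after an hour.
def make_pool(cx_Oracle):
    # maxLifetimeSession (seconds) is the new 5.3 lifetime cap -- keyword
    # name assumed from the release notes.
    return cx_Oracle.SessionPool("scott", "tiger", "dbhost/orclpdb",
                                 min=1, max=4, increment=1,
                                 maxLifetimeSession=3600)

if __name__ == "__main__":
    import cx_Oracle
    pool = make_pool(cx_Oracle)
    conn = pool.acquire()
    cur = conn.cursor()
    print(cur.arraysize)  # default is now 100 in 5.3
```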

Changes from cx_Oracle 5.2

In line with previous deprecations and announcements, the following items are no longer usable in cx_Oracle 5.3:

  • The variable attribute maxlength has been dropped. Use bufferSize instead.

  • The variable attribute allocelems has been dropped. Use numElements instead.

  • The types UNICODE, FIXED_UNICODE and LONG_UNICODE have been dropped. Use NCHAR, FIXED_NCHAR and LONG_NCHAR instead.

  • Callbacks on OCI functions are no longer supported.

Future Deprecation Announcements

In a future cx_Oracle 6 release, the following items will be removed:

  • The module, action and clientinfo arguments to connection creation are deprecated and will be removed in cx_Oracle 6. Use the new application context instead.

  • The cursor attribute numbersAsStrings is deprecated and will be removed in cx_Oracle 6. An output type handler should be used instead.
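The suggested replacement for the module/action/clientinfo connect arguments can be sketched as follows, along the lines of the AppContext.py sample. The namespace "CLIENTCONTEXT" and the attribute names here are illustrative assumptions:

```python
# Sketch: pass an application context at connect time instead of the
# deprecated module/action/clientinfo arguments. Each entry is a
# (namespace, name, value) tuple.
APP_CTX = [
    ("CLIENTCONTEXT", "APP_NAME", "reporting"),
    ("CLIENTCONTEXT", "APP_USER", "pas"),
]

if __name__ == "__main__":
    import cx_Oracle
    # Placeholder credentials/DSN; appcontext is the new 5.3 connect keyword.
    conn = cx_Oracle.connect("scott", "tiger", "dbhost/orclpdb",
                             appcontext=APP_CTX)
    cur = conn.cursor()
    # The values are visible server-side, e.g. inside a LOGON trigger:
    cur.execute("select sys_context('CLIENTCONTEXT', 'APP_NAME') from dual")
    print(cur.fetchone()[0])
```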

Things you could already do in cx_Oracle 5.2

All the new features in cx_Oracle 5.3 listed above are on top of a great history of database support. The high level features that you've already been using in the previous release are:

  • SQL and PL/SQL Execution

  • Extensive data type support

  • Fetching of large result sets

  • REF CURSORs

  • Binding PL/SQL Arrays

  • Large Objects: CLOBs and BLOBs

  • Transaction Management

  • Session Pooling

  • Database Resident Connection Pooling (DRCP)

  • Privileged Connections and Database startup/shutdown

  • External Authentication

  • Continuous Query Notification

  • Row Prefetching

  • Client Result Caching

  • End-to-end tracing

  • Oracle Database High Availability Features

A future cx_Oracle 6

I said you'd be impressed with cx_Oracle 5.3, but Anthony has already looked ahead to cx_Oracle 6, which aims to be even better. Over the past year, Anthony has been working hard on ODPI-C, an open source library of C code that simplifies and standardizes the use of common Oracle Call Interface (OCI) features, such as those used by cx_Oracle. ODPI-C was recently released on GitHub. As you can see from the cx_Oracle source code, the master branch of cx_Oracle code has been updated to use ODPI-C. If you want to test the current state of this future release, download a bundle from GitHub and build it. The release notes (so far) for user visible changes are here.

Summary

The move of cx_Oracle to under the umbrella of Oracle has given Anthony more time to focus on cx_Oracle and on making database access better for all developers.

Overall cx_Oracle is the most comprehensive and popular scripting language driver for Oracle Database. It takes advantage of the great features built into the Oracle client libraries and into Oracle Database. And, of course, Python cx_Oracle applications can take advantage of Oracle Net features such as network data encryption.

cx_Oracle 5.3 is a big release with support for some great Oracle Database features. Take a look.

cx_Oracle Resources

Home Page

Documentation

Mail List | GitHub Issues

Source Code

A nice Descending Index Range Scan

Jeff Kemp - Thu, 2017-03-09 20:06

I’ve been aware of some of the ways that Oracle database optimises index accesses for queries, but I’m also aware that you have to test each critical query to ensure that the expected optimisations are taking effect.

I had this simple query, the requirement of which is to get the "previous status" for a record from a journal table. Since the journal table records all inserts, updates and deletes, and this query is called immediately after an update, to get the previous status we need to query the journal for the record immediately prior to the most recent one. Since the "version_id" column is incremented on each update, we can use it as the sort order.


select status_code
from (select rownum rn, status_code
      from   xxtim_requests$jn jn
      where  jn.trq_id = :trq_id
      order by version_id desc)
where rn = 2;

The xxtim_requests$jn table has an ordinary index on (trq_id, version_id). This query is embedded in some PL/SQL with an INTO clause – so it will only fetch one record (plus a 2nd fetch to detect TOO_MANY_ROWS which we know won’t happen).

The table is relatively small (in dev it only has 6K records, and production data volumes are expected to grow very slowly) but regardless, I was pleased to find that (at least, in Oracle 12.1) it uses a nice optimisation so that it not only uses the index, it is choosing to use a Descending scan on it – which means it avoids a SORT operation, and should very quickly return the 2nd record that we desire.

(execution plan screenshot: index_scan_range_descending.PNG)

It looks quite similar in effect to the “COUNT STOPKEY” optimisation you can see on “ROWNUM=1” queries. If this was a much larger table and this query needed to be faster or was being run more frequently, I’d probably consider appending status_code to the index in order to avoid the table access. In this case, however, I don’t think it’s necessary.
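For reference, the covering-index idea mentioned above might look like the sketch below. The table and column names come from the query earlier in the post; the index name itself is made up for illustration.

```sql
-- Hypothetical covering index: adding status_code as the trailing
-- column lets the query be answered from the index alone, avoiding
-- the TABLE ACCESS step. The index name is illustrative only.
create index xxtim_requests$jn_cov_ix
  on xxtim_requests$jn (trq_id, version_id, status_code);
```

As the author notes, with only a few thousand rows the extra index maintenance probably isn't worth it here.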


Filed under: SQL Tagged: oracle12c, SQL

Quiz Night

Jonathan Lewis - Thu, 2017-03-09 16:34

The following is a straight, continuous, untouched cut-n-paste from an SQL*Plus session on 12.1.0.2. How come the update doesn't execute in parallel, given that parallel DML has been enabled and the tablescan to identify rows to be updated does execute in parallel?


SQL> desc t1
 Name                                                                            Null?    Type
 ------------------------------------------------------------------------------- -------- ------------------------------------------------------
 OWNER                                                                           NOT NULL VARCHAR2(128)
 OBJECT_NAME                                                                     NOT NULL VARCHAR2(128)
 SUBOBJECT_NAME                                                                           VARCHAR2(128)
 OBJECT_ID                                                                       NOT NULL NUMBER
 DATA_OBJECT_ID                                                                           NUMBER
 OBJECT_TYPE                                                                              VARCHAR2(23)
 CREATED                                                                         NOT NULL DATE
 LAST_DDL_TIME                                                                   NOT NULL DATE
 TIMESTAMP                                                                                VARCHAR2(19)
 STATUS                                                                                   VARCHAR2(7)
 TEMPORARY                                                                                VARCHAR2(1)
 GENERATED                                                                                VARCHAR2(1)
 SECONDARY                                                                                VARCHAR2(1)
 NAMESPACE                                                                       NOT NULL NUMBER
 EDITION_NAME                                                                             VARCHAR2(128)
 SHARING                                                                                  VARCHAR2(13)
 EDITIONABLE                                                                              VARCHAR2(1)
 ORACLE_MAINTAINED                                                                        VARCHAR2(1)

SQL> select * from t1 minus select * from all_objects;

OWNER           OBJECT_NAME          SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE             CREATED   LAST_DDL_
--------------- -------------------- ---------------------- ---------- -------------- ----------------------- --------- ---------
TIMESTAMP           STATUS  T G S       NAMESPACE EDITION_NAME         SHARING       E O
------------------- ------- - - - --------------- -------------------- ------------- - -
TEST_USER       T1                                              159331         159331 TABLE                   09-MAR-17 09-MAR-17
2017-03-09:22:16:36 VALID   N N N               1                      NONE            N


1 row selected.

SQL> alter session force parallel dml;

Session altered.

SQL> set serveroutput off
SQL> update t1 set object_name = lower(object_name) where data_object_id is null;

78324 rows updated.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  b16abyv8p2790, child number 0
-------------------------------------
update t1 set object_name = lower(object_name) where data_object_id is
null

Plan hash value: 121765358

---------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT      |          |       |       |    26 (100)|          |        |      |            |
|   1 |  UPDATE               | T1       |       |       |            |          |        |      |            |
|   2 |   PX COORDINATOR      |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)| :TQ10000 | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   4 |     PX BLOCK ITERATOR |          | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|*  5 |      TABLE ACCESS FULL| T1       | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access(:Z>=:Z AND :Z<=:Z)
       filter("DATA_OBJECT_ID" IS NULL)

Note
-----
   - Degree of Parallelism is 8 because of table property
   - PDML disabled because single fragment or non partitioned table used


29 rows selected.

SQL> select * from v$pq_tqstat;

DFO_NUMBER      TQ_ID SERVER_TYPE       NUM_ROWS      BYTES  OPEN_TIME AVG_LATENCY      WAITS   TIMEOUTS PROCESS         INSTANCE     CON_ID
---------- ---------- --------------- ---------- ---------- ---------- ----------- ---------- ---------- --------------- -------- ----------
         1          0 Producer              8997     363737 ##########           0         14          0 P004                   1          0
                                            9721     409075 ##########           0         12          0 P007                   1          0
                                            9774     408591 ##########           0         12          0 P005                   1          0
                                            9844     396816 ##########           0         12          0 P003                   1          0
                                            9965     403926 ##########           0         13          0 P006                   1          0
                                            9727     388829 ##########           0         12          0 P002                   1          0
                                            9951     399162 ##########           0         14          0 P001                   1          0
                                           10345     408987 ##########           0         13          0 P000                   1          0
                      Consumer             78324    3179123 ##########           0          0          0 QC                     1          0



9 rows selected.

If you want to see the fully parallel plan, it would look like this (after running the query above against v$pq_tqstat I executed one statement that I’m not showing before carrying on with the statements below):


SQL> update t1 set object_name = lower(object_name) where data_object_id is null;

78324 rows updated.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  b16abyv8p2790, child number 0
-------------------------------------
update t1 set object_name = lower(object_name) where data_object_id is
null

Plan hash value: 3991856572

---------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT      |          |       |       |    26 (100)|          |        |      |            |
|   1 |  PX COORDINATOR       |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM) | :TQ10000 | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    UPDATE             | T1       |       |       |            |          |  Q1,00 | PCWP |            |
|   4 |     PX BLOCK ITERATOR |          | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|*  5 |      TABLE ACCESS FULL| T1       | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access(:Z>=:Z AND :Z<=:Z)
       filter("DATA_OBJECT_ID" IS NULL)

Note
-----
   - Degree of Parallelism is 8 because of table property


28 rows selected.

SQL> select object_name, object_type from user_objects;

OBJECT_NAME          OBJECT_TYPE
-------------------- -----------------------
T1                   TABLE

1 row selected.

Answer coming some time tomorrow.


Oracle 12cR2, RAC, Cloud, ACFS, PDB thin clones and asmadmin

Yann Neuhaus - Thu, 2017-03-09 15:21

In the Oracle Public Cloud, fast provisioning gets all its meaning when creating a RAC database service: in one hour you can get an operational highly available multitenant database. You can even create it in Data Guard for Disaster Recovery. Now, Oracle is pushing ACFS to store the datafiles rather than direct ASM. Especially in multitenant because a great feature is thin cloning: CREATE PLUGGABLE DATABASE AS SNAPSHOT COPY. However, I encountered an error when I tried it for the first time.

TDE keystore

SQL> create pluggable database pdb2 from pdb1 snapshot copy;
create pluggable database pdb2 from pdb1 snapshot copy
*
ERROR at line 1:
ORA-28357: password required to open the wallet

Oh yes, in the cloud all tablespaces are encrypted. In 12.2 we can put the keystore password in the command:

ORA-17517

SQL> create pluggable database pdb2 from pdb1 snapshot copy keystore identified by "Ach1z0#d" ;
 
create pluggable database pdb2 from pdb1 snapshot copy keystore identified by "Ach1z0#d"
*
ERROR at line 1:
ORA-65169: error encountered while attempting to copy file
/u02/app/oracle/oradata/CDB1/49FC0C40FCF62C05E053BAF6C40A9DD6/datafile/o1_mf_users_dcr220sd_.dbf
ORA-17517: Database cloning using storage snapshot failed on file
8:/u02/app/oracle/oradata/CDB1/49FC0C40FCF62C05E053BAF6C40A9DD6/datafile/o1_mf_users_dcr220sd_.dbf

Here we are. The call to the storage snapshot feature has failed. Usually errors coming from OS calls are accompanied by additional information, but not here.

alert.log and trace

In alert.log, the error is displayed with reference to some other trace files:

2017-03-05 16:24:38.935000 +00:00
create pluggable database pdb2 from pdb1 snapshot copy keystore identified by *
AUDSYS.AUD$UNIFIED (SQL_TEXT) - CLOB populated
WARNING: Detected that PDB needs to import keys from source. PDB can only open in restricted mode until import.
2017-03-05 16:24:40.447000 +00:00
Errors in file /u01/app/oracle/diag/rdbms/cdb1/cdb11/trace/cdb11_p000_8910.trc:
Errors in file /u01/app/oracle/diag/rdbms/cdb1/cdb11/trace/cdb11_p002_8918.trc:
Errors in file /u01/app/oracle/diag/rdbms/cdb1/cdb11/trace/cdb11_p001_8914.trc:
Errors in file /u01/app/oracle/diag/rdbms/cdb1/cdb11/trace/cdb11_p003_8922.trc:
**************************************************************
Undo Create of Pluggable Database PDB2 with pdb id - 4.
**************************************************************
ORA-65169 signalled during: create pluggable database pdb2 from pdb1 snapshot copy keystore identified by * ...

And those trace files have the following information:
ksfdsscre_clone: create snapshot failed error(-1) errmsg(OS dependent failure) voltag(49FF372094256196E053BAF6C40AEB9D) parent_voltag() mntbuf(/u02)

This is not very helpful by itself. We see the snapshot name (voltag) and the parent name (parent_voltag). You may know that error (-1) is EPERM which is ‘operation not permitted’. What I did to be sure was to try to create the snapshot myself:

[oracle@rac1 cdb11]$ acfsutil snap create -w 49FFA9651D1D58D5E053BAF6C40AF81C /u02
acfsutil snap create: CLSU-00100: operating system function: ioctl failed with error data: 1
acfsutil snap create: CLSU-00101: operating system error message: Operation not permitted
acfsutil snap create: CLSU-00103: error location: OI_0
acfsutil snap create: ACFS-03046: unable to perform snapshot operation on /u02

EPERM

This is more clear and I also strace’d it to see where the error comes from:

open("/u02", O_RDONLY) = 41
ioctl(41, RTC_UIE_ON, 0x7fff17ae17a0) = 0
ioctl(41, 0xffffffffc1287021, 0x7fff17ae0e90) = -1 EPERM (Operation not permitted)

I’m running that with the oracle user, as the instance does when creating a PDB:
uid=1001(oracle) gid=1001(oinstall) groups=1001(oinstall),1002(dba),1003(racoper),1004(asmdba)

grid

When connecting as grid, I am able to create the snapshot:

[grid@rac1 ~]$ acfsutil snap create -w 49FFA9651D1D58D5E053BAF6C40AF81C /u02
acfsutil snap create: Snapshot operation is complete.
[grid@rac1 ~]$ acfsutil snap delete 49FFA9651D1D58D5E053BAF6C40AF81C /u02
acfsutil snap delete: Snapshot operation is complete.

Grid has the following permissions:
uid=1000(grid) gid=1001(oinstall) groups=1001(oinstall),1003(racoper),1004(asmdba),1005(asmoper),1006(asmadmin)

asmadmin

This is what /etc/group looks like:

opc:x:54323:
oinstall:x:1001:
dba:x:1002:oracle
racoper:x:1003:oracle,grid
asmdba:x:1004:oracle,grid
asmoper:x:1005:grid
asmadmin:x:1006:grid

This is what the Oracle Public Cloud defines at RAC DBaaS service creation, and asmadmin is not mentioned in the documentation.

So, to solve (or work around) the issue, I added oracle to the asmadmin group:

asmadmin:x:1006:grid,oracle

and now I'm able to create a snapshot when logged in as oracle:

[oracle@rac1 ~]$ acfsutil snap create -w 49FFA9651D1D58D5E053BAF6C40AF81C /u02
acfsutil snap create: Snapshot operation is complete.
[oracle@rac1 ~]$ acfsutil snap delete 49FFA9651D1D58D5E053BAF6C40AF81C /u02
acfsutil snap delete: Snapshot operation is complete.
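For the record, the same group change can be made with usermod rather than editing /etc/group by hand. A sketch, to be run as root, with the group name taken from the /etc/group listing above; the oracle user must start a new session (or the instance be restarted, as discussed below) for the new group to take effect:

```shell
# Add the oracle user to the supplementary group asmadmin
# without touching its existing group memberships (-a keeps them)
usermod -a -G asmadmin oracle

# Verify the new group membership
id oracle
```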

restart

I thought that restarting the instance would be sufficient, but it is not. I had to restart the cluster. And this is also something easy in the Oracle Public Cloud:

(screenshot: CaptureRestartOPCRAC)

A simple click restarts the first node and then, once it is up again, restarts the second node. Rolling reboot ensures that the service is always up.

Thin clone

Here it is. The instance is now able to create a snapshot.

Connected to:
Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production
 
SQL> set echo on
SQL> create pluggable database pdb2 from pdb1 snapshot copy keystore identified by "Ach1z0#d" ;
Pluggable database created.
 
Elapsed: 00:00:30.36

So what?

In my opinion, configurations that store CDB datafiles on ACFS should give the user running the database the rights to create snapshots. The cloud interface is very simple, but the technology behind it is complex. The consequence of this gap is that using the cloud is easy when everything goes as expected, but any exception brings us into troubleshooting.

 

The article Oracle 12cR2, RAC, Cloud, ACFS, PDB thin clones and asmadmin first appeared on the dbi services blog.

Deploying an ElasticSearch cluster at Oracle Cloud

Marcelo Ochoa - Thu, 2017-03-09 15:21
Continuing from my previous post about how to deploy a Docker Swarm cluster at Oracle Cloud, the idea now is to deploy, as an example, an Elasticsearch cluster with these characteristics:

  • One node working as master
  • Two nodes working as data nodes
  • One node working as ingest node

A complete set of the scripts used in this post is on GitHub; click here for more details.
Swarm nodes preparation

First we will tag our cluster of five nodes with some labels to control our ES cluster allocation.
To do that, use docker-machine to submit commands to the Swarm cluster (remember that oc4 and oc5 are the Swarm-master-capable nodes):
$ eval $(docker-machine env oc5)
[oc5] $ docker node update --label-add type=es_master oc5
[oc5] $ docker node update --label-add type=es_master oc4
[oc5] $ docker node update --label-add type=es_data oc3
[oc5] $ docker node update --label-add type=es_data oc2
[oc5] $ docker node update --label-add type=es_ingest oc1
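Before going further it is worth checking that the labels landed where intended. A quick sketch, using the node names from above:

```shell
# Print the labels attached to each Swarm node; run from a session
# pointed at a manager node (e.g. after eval $(docker-machine env oc5))
for n in oc1 oc2 oc3 oc4 oc5; do
  docker node inspect --format "$n -> {{ .Spec.Labels }}" "$n"
done
```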
In my tests the official Elasticsearch 5.0.0 image did not work well at recognizing the primary host name on Swarm nodes, so I decided to use a custom ES image. We can build this image on each node with:
$ eval $(docker-machine env oc5)
[oc5]$ git clone https://github.com/marcelo-ochoa/docker.git
[oc5]$ cd docker/es
[oc5]$ docker build -t "elasticsearch/swarm:5.0.0" .
Step 1/6 : FROM elasticsearch:5.0.0
5.0.0: Pulling from library/elasticsearch
386a066cd84a: Pull complete
.....
Step 6/6 : CMD elasticsearch
 ---> Running in a6dd5fb17cf8
 ---> 2f1c7bfe6c67
Removing intermediate container a6dd5fb17cf8
Successfully built 2f1c7bfe6c67
[oc5]$ eval $(docker-machine env oc4)
[oc4]$ docker build -t "elasticsearch/swarm:5.0.0" .
.....
[oc1]$ docker build -t "elasticsearch/swarm:5.0.0" .
Once we have the modified image built on each node of the Swarm cluster, we can start deploying Swarm services. Before that, we build a dedicated private interconnect network for our ES cluster, created as attachable so that other containers (the visualizer and Cerebro, used later) can join it:
[oc1]$ eval $(docker-machine env oc5)
[oc5]$ docker network create -d overlay --attachable --subnet=192.168.0.0/24 es_cluster
Once we have the network, a Swarm visualizer can be started on the oc5 or oc4 master nodes:
[oc5]$ eval $(docker-machine env oc4)
[oc4]$ docker run -d \
--name viz \
-p 8080:8080 \
--net es_cluster \
-v /var/run/docker.sock:/var/run/docker.sock manomarks/visualizer:latest
To avoid opening another port on the oc4 node, we can simply connect to the Swarm visualizer using an SSH tunnel, for example:
ssh -i /home/mochoa/Documents/Scotas/ubnt -L8080:localhost:8080 ubuntu@oc4
then access http://localhost:8080/; here is a sample output:

OK, we are ready to start deploying our ES cluster. First, the ES master:
[oc4]$ eval $(docker-machine env oc5)
[oc5]$ docker service create --network es_cluster --name es_master \
         --constraint 'node.labels.type == es_master' --replicas=1 \
         --publish 9200:9200/tcp --env ES_JAVA_OPTS="-Xms1g -Xmx1g" \
         elasticsearch/swarm:5.0.0 -E cluster.name="ESCookBook" \
         -E node.master=true -E node.data=false \
         -E discovery.zen.ping.unicast.hosts=es_master
[oc5]$ docker service ls
ID            NAME       MODE        REPLICAS  IMAGE
o4x15kklu520  es_master  replicated  1/1       elasticsearch/swarm:5.0.0
[oc5]$ docker service ps es_master
ID            NAME         IMAGE                      NODE  DESIRED STATE  CURRENT STATE           ERROR  PORTS
n4a8tibaqxpw  es_master.1  elasticsearch/swarm:5.0.0  oc5   Running        Running 17 seconds ago

Because es_master was started on the oc5 node, we can see its log output with:
[oc5]$ docker ps
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS                NAMES
ad8a2ce40f68        elasticsearch/swarm:5.0.0   "/docker-entrypoin..."   2 minutes ago       Up 2 minutes        9200/tcp, 9300/tcp   es_master.1.n4a8tibaqxpwwlcrjt33ywqry
[oc5]$ docker logs ad8a2ce40f68
[2017-03-09T19:23:15,427][INFO ][o.e.n.Node               ] [ad8a2ce40f68] initializing ...
.....
[2017-03-09T19:23:21,812][INFO ][o.e.n.Node               ] [ad8a2ce40f68] started
[2017-03-09T19:23:21,900][INFO ][o.e.g.GatewayService     ] [ad8a2ce40f68] recovered [0] indices into cluster_state
now adding two data nodes and one ingest node:
[oc5]$ docker service create --network es_cluster --name es_data \
         --constraint 'node.labels.type == es_data' --replicas=2 \
         --env ES_JAVA_OPTS="-Xms1g -Xmx1g" \
         elasticsearch/swarm:5.0.0 -E cluster.name="ESCookBook" \
         -E node.master=false -E node.data=true \
         -E discovery.zen.ping.unicast.hosts=es_master
[oc5]$ docker service create --network es_cluster --name es_ingest \
         --constraint 'node.labels.type == es_ingest' --replicas=1 \
         --env ES_JAVA_OPTS="-Xms1g -Xmx1g" \
         elasticsearch/swarm:5.0.0 -E cluster.name="ESCookBook" \
         -E node.master=false -E node.data=false -E node.ingest=true \
         -E discovery.zen.ping.unicast.hosts=es_master
[oc5]$ docker service ls
ID            NAME       MODE        REPLICAS  IMAGE
o4x15kklu520  es_master  replicated  1/1       elasticsearch/swarm:5.0.0
ue10tw8c64be  es_ingest  replicated  1/1       elasticsearch/swarm:5.0.0
w9cpobrxvay1  es_data    replicated  2/2       elasticsearch/swarm:5.0.0
[oc5]$ docker service ps es_data
ID            NAME       IMAGE                      NODE  DESIRED STATE  CURRENT STATE           ERROR  PORTS
q18eq708g692  es_data.1  elasticsearch/swarm:5.0.0  oc2   Running        Running 25 seconds ago        
x1ijojty4nrp  es_data.2  elasticsearch/swarm:5.0.0  oc3   Running        Running 25 seconds ago
[oc5]$ docker service ps es_ingest
ID            NAME         IMAGE                      NODE  DESIRED STATE  CURRENT STATE           ERROR  PORTS
vcskf9lkd4xr  es_ingest.1  elasticsearch/swarm:5.0.0  oc1   Running        Running 19 seconds ago
        
Visually it looks like this:

Connect Cerebro to the ES cluster using:
[oc5]$ eval $(docker-machine env oc4)
[oc4]$ docker run -d \
  -p 9000:9000 \
  --net es_cluster \
  --env JAVA_OPTS="-Djava.net.preferIPv4Stack=true" \
  --name cerebro yannart/cerebro:latest
Again, access to the Cerebro console at http://localhost:9000/ is through an SSH tunnel; here is the output:
If you take a look above, we are connecting to ES on host 192.168.0.3 even though the master node has the IP 192.168.0.4. This is because the routing mesh of the Swarm cluster publishes our port on a specific address and then routes transparently to the target IP.
So let's play with scaling our cluster up and down:
[oc4]$ eval $(docker-machine env oc5)
[oc5]$ docker service scale es_data=4
es_data scaled to 4
[oc5]$ docker service scale es_ingest=2
es_ingest scaled to 2
[oc5]$ docker service ls
ID            NAME       MODE        REPLICAS  IMAGE
o4x15kklu520  es_master  replicated  1/1       elasticsearch/swarm:5.0.0
ue10tw8c64be  es_ingest  replicated  2/2       elasticsearch/swarm:5.0.0
w9cpobrxvay1  es_data    replicated  4/4       elasticsearch/swarm:5.0.0
visualize it:

That's great: our ES cluster now has 7 nodes :)
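At this point a quick sanity check of the cluster state can be done against the health endpoint, reachable through the published port 9200 on any Swarm node thanks to the routing mesh. A sketch, using localhost as in the SSH tunnels above:

```shell
# Ask Elasticsearch for its cluster health; status should be "green"
# and number_of_nodes should match the services deployed above.
curl -s 'http://localhost:9200/_cluster/health?pretty'
```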
Final conclusion: the purpose of this post was to show that Docker Swarm works perfectly at Oracle Cloud. You can easily manage the cluster remotely using docker-machine, and you can deploy a large Elasticsearch cluster across a bunch of Compute services. Don't worry about hardware crashes or about scaling your cluster up or down: if you have enough nodes, your ES indices will first change to yellow state, and once the ES recovery process starts the cluster will move your shards onto the remaining capacity.
I'll show scaling up and down in another post, stay tuned.
 

TekTalk Webinar: Defining Your Upgrade and Cloud Strategies: A New Path for Oracle WebCenter

WebCenter Team - Thu, 2017-03-09 13:48

TekTalk Webinar: Defining Your Upgrade and Cloud Strategies: A New Path for Oracle WebCenter
March 29th, 2017 | 1 PM EST

Too many companies still rely on legacy systems or other outdated platforms that are not capable of supporting the demands of modern business.

Many companies are looking to embrace Cloud architectures to support traditional on premise applications. Devising strategies to upgrade existing applications and/or moving workloads to the Cloud can be daunting.

TekStream brings tribal knowledge of BEA WebLogic Portal, BEA AquaLogic User Interaction (ALUI), WebCenter Interaction (WCI), Plumtree, Stellent, Universal Content Management (UCM), Optika, Imaging and Process Management (IPM), and FatWire into one centrally manageable platform, Oracle WebCenter.

With many Oracle customers seeking upgrade strategies to Oracle WebCenter 12c, or devising new cloud architecture to reduce their infrastructure costs, TekStream provides you with options to ensure your success. As part of this TekTalk, we will address:
  1. How do I know if I need to upgrade?
  2. What are the upgrade options for 11g, 10g, and older to 12c?
  3. What are the options for on-prem upgrade?
  4. What are the options for upgrade and move from on-prem to Cloud?
  5. What are the options for moving from on-prem to Cloud and what tool sets are required?

Passing Values Between Pages in Oracle Application Builder Cloud Service

Shay Shmeltzer - Thu, 2017-03-09 12:50

A common use case for applications that have multiple pages is passing values between pages. For example you might want to pick up a specific record or value in one page and then use that as a parameter for a query in another page.

In the February release of Oracle Application Builder Cloud Service, as part of the extension hook points that we provide, we added support for shared resources. These are JavaScript libraries you can add to your application - and that can be used across your app.

In the demo below I show you how to use the built-in sample template for a shared resource to define a variable, and then how that variable is exposed in various places in the product through the expression builder, allowing you to set its value in one page and use that value in another one.
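Conceptually, a shared resource of this kind boils down to a small JavaScript module holding state that every page can reach: one page sets a value before navigating, the target page reads it. The sketch below illustrates the idea only; the names and the actual template that ABCS generates will differ.

```javascript
// Hypothetical shared-resource sketch: a module with private state
// plus set/get accessors, shared by all pages that load it.
var SharedParams = (function () {
  var values = {};   // values live for the life of the app session
  return {
    set: function (name, value) { values[name] = value; },
    get: function (name) { return values[name]; }
  };
})();
```

For example, a "details" button handler could call SharedParams.set('selectedId', row.id), and the query on the second page could read SharedParams.get('selectedId') through the expression builder.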

Check it out:

Categories: Development
