Feed aggregator

12c Sequence - nopartition/partition

Tom Kyte - Sat, 2017-05-27 17:46
What is the meaning of a new parameter when creating a sequence? From dbms_metadata.get_ddl: CREATE SEQUENCE "XY"."MY_SEQUENCE" MINVALUE 1000 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 NOCACHE NOORDER NOCYCLE NOPARTITI...
Categories: DBA Blogs

Row locks from select for update clause

Tom Kyte - Sat, 2017-05-27 17:46
Hi Tom, From my knowledge of Oracle, I understand that the SELECT FOR UPDATE clause acquires row locks. In that case, in a given table, when my query updates one set of data, the same query should be able to update a different set of data by using select f...
Categories: DBA Blogs

Building my own Container Cloud Services on top of Oracle Cloud IaaS

Marcelo Ochoa - Sat, 2017-05-27 17:37
Two weeks ago I presented at the ArOUG Cloud Day in Buenos Aires an introduction to Oracle Container Cloud Services (OCCS) and Oracle IaaS Compute management using the console and the API.
In that presentation I showed how to implement your own Container Cloud Services (CCS) directly on top of IaaS compute.
Let's compare OCCS and my own CCS; here is how they look:
OCCS welcome page / My CCS welcome page (screenshots). They look similar :). My own CCS is implemented using the Portainer.io project.
Looking deeper at both implementations, I can summarize the pros and cons as follows:
OCCS:
  • Pros
    • Easy to manage
    • Pre-configured templates
    • Fast jump-start, 1-click deploy
    • Support for using official repository of Oracle images
  • Cons:
    • Host OS not configurable (e.g. disk, semaphores, etc.)
    • Old Docker version 1.12
    • No Swarm support
    • Basic orchestration features

My CCS:
  • Pros
    • Latest Docker/Swarm version 17.04.0-ce
    • Full Swarm support, scaling, upgrade, load-balancing
    • One-click deploy
    • Public templates (Portainer.io and LinuxServer.io)
    • Graphical scaling Swarm services
    • Console  Log/Stats for services
    • Full custom software/hardware selection
  • Cons
    • Oracle official repositories only available from command line
    • Registries requiring login not supported in the graphical interface, only from the command line
    • Somewhat complex deployment (scripts)
In my opinion the main problem with OCCS is that you can't touch low-level details of the implementation. For example, you can't change the host OS parameter vm.max_map_count=262144 in /etc/sysctl.conf, so you will never get a cluster of Elasticsearch 5.x up and running. On top of that, the supported Docker version (1.12) is quite old compared with the latest features included in 17.04.0-ce.
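On my own CCS, where the host OS is accessible, this is a one-line fix. For example (a sketch; run it on each node that will host Elasticsearch containers):

# raise vm.max_map_count as required by Elasticsearch 5.x and make it persistent
sudo sysctl -w vm.max_map_count=262144
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf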
On the other side, my CCS's best feature is that it supports Swarm, plus command-line operation if you connect to the host OS using ssh. With Swarm support you have a serious implementation for a Docker data center solution, especially compared with other orchestration solutions like Kubernetes; see this great post about Docker Swarm exceeding Kubernetes performance at scale.
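For example, once connected to the Swarm master over ssh, day-to-day operation is just a few commands (a sketch; the service name below is hypothetical):

docker node ls                   # list the nodes that joined the swarm
docker service ls                # list deployed services and their replica counts
docker service scale es-data=5   # scale a hypothetical service up or down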
If you want to test my CCS yourself, I extended the scripts from my previous post Managing Oracle Cloud IaaS - The API way to include a Portainer console up and running after deployment of the ES cluster and the Swarm Visualizer.
The scripts are available at my GitHub repository; basically, the Portainer console is deployed at the end of the deploy-cluster.sh script using this command line:
docker run -d --name portainer -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer -H unix:///var/run/docker.sock
If you grant access to port 9000 on the Swarm master host (node5 in the examples), you will get access to the Portainer console as shown in the screenshot above. Note that Portainer has access to the Unix socket /var/run/docker.sock, which lets it perform Swarm commands graphically.
You can also directly manage other nodes of the cluster in a native Docker way by adding endpoints that use the TLS certificates generated during the deploy-machines.sh step.
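Before adding an endpoint in the Portainer UI, you can sanity-check it from the command line. A minimal sketch (the host name node1 and the certificate paths are illustrative; the certificates are the ones docker-machine generated during deploy-machines.sh):

# prints engine info if the TLS-protected endpoint is reachable
docker --tlsverify \
  --tlscacert=$HOME/.docker/machine/machines/node1/ca.pem \
  --tlscert=$HOME/.docker/machine/machines/node1/cert.pem \
  --tlskey=$HOME/.docker/machine/machines/node1/key.pem \
  -H tcp://node1:2376 info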
Finally, to see Portainer in action, I recorded this simple walk-through, shown below together with the OCCS presentation.
Swarm nodes / Swarm services scale up/down operation (screen captures).




Doing RDBMS hot full backup using RMan when running on Docker

Marcelo Ochoa - Sat, 2017-05-27 07:54
I think many databases are now going into production using Docker, especially with the official support provided by Oracle that allows pulling Docker images from the official Docker Store.
If you are using a custom image built with Oracle's official scripts, you can do a hot full backup using RMAN as described in this post.
We will test using a container started as:
[mochoa@localhost ols-official]$ docker run -d --privileged=true --name test --hostname test --shm-size=1g -p 1521:1521 -p 5500:5500 -e ORACLE_SID=TEST -e ORACLE_PDB=PDB1 -v /etc/timezone:/etc/timezone:ro -e TZ="America/Argentina/Buenos_Aires" -v /home/data/db/test:/opt/oracle/oradata oracle/database:12.1.0.2-ee
Note that datafiles and other RDBMS persistent data are stored in the /home/data/db/test host directory.
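A quick way to know when the instance is ready (a sketch; the readiness banner below is what the official build scripts print, adjust it if your image differs):

# follow the container log until the instance reports it is ready
docker logs -f test | grep -m1 "DATABASE IS READY TO USE!"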
To connect as SYSDBA to the running container above, do:
[mochoa@localhost ols-official]$ docker exec -ti test bash
[oracle@test ~]$ sqlplus "/ as sysdba"

First, check whether your database is running in archive log mode:
SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;
LOG_MODE
------------
NOARCHIVELOG
If not, enable it using the steps described on this page: log in using bash and perform the following steps connected as SYSDBA:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;
LOG_MODE
------------
ARCHIVELOG
You can force the RDBMS to archive a log file to see which directory is used for redo log backups, for example:
SQL> ALTER SYSTEM SWITCH LOGFILE;
The destination is usually defined by the parameter log_archive_dest, but if it is empty the Docker image writes the archived logs to the container directory:
/opt/oracle/oradata/fast_recovery_area/${ORACLE_SID}/archivelog/yyyy_mm_dd/
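To confirm where the archived logs are going, a quick check from SQL*Plus (a minimal sketch) is:

[oracle@test ~]$ sqlplus -S "/ as sysdba" <<EOF
archive log list
show parameter log_archive_dest
EOF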
Once your database is up and running in archive log mode, a full daily backup using the RMAN utility can be configured in the host's /etc/cron.daily/ directory as:
[mochoa@localhost ols-official]$ cat /etc/cron.daily/backup-db.sh
#!/bin/bash
docker exec test /opt/oracle/product/12.1.0.2/dbhome_1/bin/rman target=/ cmdfile='/opt/oracle/oradata/backup_full_compressed.sh' log='/opt/oracle/oradata/backup_archive.log'
where backup_full_compressed.sh is an RMAN script such as:
delete force noprompt obsolete;
run {
   configure controlfile autobackup on;
   configure default device type to disk;
   configure device type disk parallelism 1;
   configure controlfile autobackup format for device type disk clear;
   allocate channel c1 device type disk;
   backup format '/opt/oracle/oradata/backup/prod/%d_%D_%M_%Y_%U' as
   compressed backupset database;
}
delete force noprompt obsolete;
During the full backup you can see RMAN's output in the host file:
/home/data/db/test/backup_archive.log
or in the container file:
/opt/oracle/oradata/backup_archive.log
It looks like this:
Recovery Manager: Release 12.1.0.2.0 - Production on Sat May 27 09:18:54 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: TEST (DBID=2242310144)
RMAN> delete force noprompt obsolete;
2> run
3> {
4> configure controlfile autobackup on;
5> configure default device type to disk;
6> configure device type disk parallelism 1;
7> configure controlfile autobackup format for device type disk clear;
8> allocate channel c1 device type disk;
9> backup format '/opt/oracle/oradata/backup/prod/%d_%D_%M_%Y_%U' as compressed backupset database;
10> }
11> delete force noprompt obsolete;
12>
using target database control file instead of recovery catalog
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=365 device type=DISK
....
Deleting the following obsolete backups and copies:
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           5      26-MAY-17
  Backup Piece       5      26-MAY-17          /opt/oracle/oradata/backup/prod/TEST_26_05_2017_05s57dn5_1_1
Backup Set           6      26-MAY-17
  Backup Piece       6      26-MAY-17          /opt/oracle/oradata/backup/prod/TEST_26_05_2017_06s57dti_1_1
Backup Set           7      26-MAY-17
  Backup Piece       7      26-MAY-17          /opt/oracle/oradata/backup/prod/TEST_26_05_2017_07s57e1g_1_1
Archive Log          3      27-MAY-17          /opt/oracle/oradata/fast_recovery_area/TEST/archivelog/2017_05_27/o1_mf_1_48_dllv53d2_.arc
Backup Set           8      26-MAY-17
  Backup Piece       8      26-MAY-17          /opt/oracle/oradata/fast_recovery_area/TEST/autobackup/2017_05_26/o1_mf_s_945010852_dljvbpfq_.bkp
deleted backup piece
backup piece handle=/opt/oracle/oradata/backup/prod/TEST_26_05_2017_05s57dn5_1_1 RECID=5 STAMP=945010405
deleted backup piece
backup piece handle=/opt/oracle/oradata/backup/prod/TEST_26_05_2017_06s57dti_1_1 RECID=6 STAMP=945010610
deleted backup piece
backup piece handle=/opt/oracle/oradata/backup/prod/TEST_26_05_2017_07s57e1g_1_1 RECID=7 STAMP=945010736
deleted archived log
archived log file name=/opt/oracle/oradata/fast_recovery_area/TEST/archivelog/2017_05_27/o1_mf_1_48_dllv53d2_.arc RECID=3 STAMP=945076211
deleted backup piece
backup piece handle=/opt/oracle/oradata/fast_recovery_area/TEST/autobackup/2017_05_26/o1_mf_s_945010852_dljvbpfq_.bkp RECID=8 STAMP=945010854
Deleted 5 objects

Recovery Manager complete.
And that's all: when the host cron.daily script finishes, your full backup is at /home/data/db/test/backup/prod. Note that 3.7 GB of datafiles produced only 695 MB of backup files, and the backup took just a few minutes.
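If you also want to verify that the backup pieces are restorable, a minimal check (a sketch reusing the same container and RMAN defaults) can be run the same way:

# list known backups and validate that the database can be restored from them
docker exec -i test /opt/oracle/product/12.1.0.2/dbhome_1/bin/rman target=/ <<EOF
list backup summary;
restore database validate;
EOF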


Carbonated Java & JavaScript Stored Procedures

Kuassi Mensah - Fri, 2017-05-26 17:43
Carbonated Java Stored Procedures
For accessing JSON collections and documents without any knowledge of SQL, Oracle furnishes the SODA for Java API. It allows convenient access and navigation using dot notation.

How to use SODA for Java in Java Stored Procedures? I have posted the steps, the code samples and scripts on GitHub.
Carbonated JavaScript Stored Procedures
Nashorn allows interoperability between Java and JavaScript. By leveraging that interoperability, I've been able to reuse SODA for Java with JavaScript stored procedures.

How to use SODA for Java in JavaScript Stored Procedures? I have posted the steps, the code samples and scripts on GitHub.

Enjoy!

12cR2 needs to connect with password for Cross-PDB DML

Yann Neuhaus - Fri, 2017-05-26 14:13

In a previous post, I explained that Cross-PDB DML, executing an update/delete/insert with the CONTAINERS() clause, seems to be implemented with implicit database links. Connecting through a database link requires a password and this blog post is about an error you may encounter: ORA-01017: invalid username/password; logon denied

This blog post also describes a consequence of this implementation: a big inconsistency in the CONTAINERS() function, because the implementation is completely different for queries (SELECT) than for INSERT/DELETE/UPDATE, and you may end up writing to and reading from different schemas.

We do not need an Application Container for Cross-PDB DML, and we don't even need metadata-linked tables; just tables with the same columns. Here I have a DEMO table which is just a copy of DUAL, created in CDB$ROOT and in PDB1 (CON_ID=3), owned by SYS.
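For reference, the assumed setup is as simple as this (a sketch of the test objects):

-- in CDB$ROOT, connected as SYS
create table DEMO as select * from dual;
-- switch to PDB1 (CON_ID=3) and create the same table there
alter session set container=PDB1;
create table DEMO as select * from dual;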

Implicit database link

I’m connecting to CDB$ROOT with user, password and service name:

SQL> connect sys/oracle@//localhost/CDB1A as sysdba
Connected.

I insert a row into the DEMO table in the PDB1, which is CON_ID=3:

SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
1 row created.

This works in 12.2, is documented, and is an alternative to switching to the container.

But now, let's try to do the same when connecting with '/ as sysdba':

SQL> connect / as sysdba
Connected.
SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
 
insert into containers(DEMO) (con_id,dummy) values (3,'Y')
*
ERROR at line 1:
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from PDB1

The first message mentions invalid user/password, and the second one mentions a database link having the same name as the container.
As I described in the previous post, CONTAINERS() opens an implicit database link when modifying another container. But a database link requires a connection, and no user/password has been provided here. It seems that it tries to connect with the same user and password that were provided to connect to the root.

Then I provide the user/password, but with a local connection (no service name):


SQL> connect sys/oracle as sysdba
Connected.
SQL> insert into containers(DEMO) (con_id,dummy) values (3,'Y');
insert into containers(DEMO) (con_id,dummy) values (3,'Y')
*
ERROR at line 1:
ORA-01017: invalid username/password; logon denied

There is no mention of a database link here, but it is still impossible to connect. So it seems the session needs our connection string to find out how to connect to the PDB.

Explicit database link

There is an alternative. You can create the database link explicitly, and it will then be used by CONTAINERS(), since it holds all the required information: password and service. But the risk is that you define this database link to connect to another user.

Here I also have a DEMO table created in the SCOTT schema:

SQL> create database link PDB1 connect to scott identified by tiger using '//localhost/PDB1';
Database link created.
 
SQL> select * from DEMO@PDB1;
 
D
-
X

From the root I insert with CONTAINERS() without mentioning the schema:

SQL> insert into containers(DEMO) (con_id,dummy) values (3,'S');
1 row created.

I get no errors here (I'm still connected / as sysdba) because I have a database link with the same name as the one it tries to use implicitly. So it works without any error or warning. But my database link does not connect to the same schema (SYS); it connects to SCOTT. And because a DEMO table with the same columns exists there, the row was actually inserted into the SCOTT schema:

SQL> select * from DEMO@PDB1;
 
D
-
X
S

The big problem here is that a SELECT through the same CONTAINERS() function uses a different mechanism: not the database link, but a session switch to the other container, in the same schema. So the row inserted through INSERT INTO CONTAINERS() is not displayed by SELECT FROM CONTAINERS():
SQL> select * from containers(DEMO);
 
D CON_ID
- ----------
X 1
X 3
Y 3

So what?

I don't know whether the first problem (invalid user/password) will be qualified as a bug, but I hope the second one will be. Cross-PDB DML will be an important component of Application Containers, and having a completely different implementation for SELECT and for INSERT/UPDATE/DELETE may be a source of problems. In my opinion, both should use a container switch within the same session, but that means a transaction would have to be able to write to multiple containers, which is not possible currently.

 

The article 12cR2 needs to connect with password for Cross-PDB DML appeared first on Blog dbi services.

Automating Password Rotation for Oracle Databases

Pythian Group - Fri, 2017-05-26 14:03

Password rotation is not the most exciting task in the world, and that's exactly why it's a perfect candidate for automation. Automating routine tasks like this is good for everyone – DBAs can work on something more exciting, companies save costs as less time is spent on changing the passwords, and there's no room for human error, either. At Pythian, we typically use Ansible for task automation, and I like it mainly because of its non-intrusive configuration (no agents need to be installed on the target servers) and its scalability (tasks are executed in parallel on the target servers). This post will briefly describe how I automated password rotation for Oracle database users using Ansible.

Overview

This blog post is not an intro to what Ansible is and how to use it; rather, it's an example of how a simple task can be automated using Ansible in a way that's scalable, flexible and easily reusable, and that also provides the ability for other tasks to pick up the new passwords from a secure password store.

  • Scalability – I'd like to take advantage of Ansible's ability to execute tasks on multiple servers at the same time. For example, in a large environment of tens or hundreds of machines, a solution that executes password-change tasks serially would not be suitable. Below is an example of such a "serial" task (not a real one, just an illustration): it hardcodes a few attributes (the environment file, the username and the hostname), and a separate task would be required for every user/database whose password you'd want to change:
    - hosts: ora-serv01
      remote_user: oracle
      tasks:
      - name: change password for SYS
        shell: | 
          . TEST1.env && \
          sqlplus / as sysdba @change_password.sql SYS \
          \"{{lookup('password','/dev/null length=8')}}\"
    
  • Flexible – I want to be able to adjust, in a simple way, the list of users whose passwords are changed and the list of servers/databases they are changed on, without modifying the main task list.
  • Reusable – this comes together with flexibility. The idea is that the playbook would be so generic that it wouldn't require any changes when it's implemented in a completely separate environment (i.e. for another Pythian client).
  • Secure password store – the new passwords are to be generated by the automated password rotation tool, and a method of storing passwords securely is required so that the new passwords can be picked up by the DBAs, application owners, or the next automated task that reconfigures the application.
The implementation

Prerequisites

I chose to do the implementation using Ansible 2.3, because it introduces the passwordstore lookup, which enables interaction with the pass utility (read more about it at Passwordstore.org). pass is very cool. It stores passwords in gpg-encrypted files, and it can also be configured to automatically push the changes to a git repository, which relieves us of the headache of password distribution. The passwords can then be retrieved from git on the servers that need access to the new values.
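Setting up such a store is a one-time step. A minimal sketch (the GPG key ID and the git remote URL are placeholders you would replace with your own):

# initialize the password store with your GPG key and track it in git
pass init "DBA-Team-GPG-Key-ID"
pass git init
pass git remote add origin git@gitserver:secrets/passwordstore.git
# from now on every insert/generate creates an encrypted file plus a git commit
pass git push -u origin master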

Ansible 2.3 runs on Python 2.6; unfortunately, the passwordstore lookup requires Python 2.7, which can be an issue if the Ansible control host runs Oracle Linux 6 or RHEL 6, as they don't provide Python 2.7 in the official yum repositories. Still, there are ways of getting it done, and I'll write another blog post about it.
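One possible route on OL6/RHEL 6 (just a sketch, assuming the python27 software collection is available in a yum channel you have enabled; not necessarily the approach I'll describe in that future post):

# install the Python 2.7 software collection and start a shell that uses it
sudo yum install -y scl-utils python27
scl enable python27 bash
python --version    # should now report Python 2.7.x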

So, what we’ll need is:

  • Ansible 2.3
  • jmespath plugin on Ansible control host (pip install jmespath)
  • jinja2 plugin on Ansible control host (I had to update it using pip install -U jinja2 in few cases)
  • Python 2.7 (or Python 3.5)
  • pass utility
The Playbook

This is the whole list of files that are included in the playbook:

./chpwd.yml
./inventory/hosts
./inventory/orcl1-vagrant-private_key
./inventory/orcl2-vagrant-private_key
./roles/db_users/files/change_password.sql
./roles/db_users/files/exists_user.sql
./roles/db_users/defaults/main.yml
./roles/db_users/tasks/main.yml

Let’s take a quick look at all of them:

  • ./chpwd.yml – this is the playbook, and (in this case) it's extremely simple, as I want to run the password change against all defined hosts:
    $ cat ./chpwd.yml
    ---
    
      - name: password change automation
        hosts: all
        roles:
          - db_users
    
  • ./inventory/hosts, ./inventory/orcl1-vagrant-private_key, ./inventory/orcl2-vagrant-private_key – these files define the hosts and the connectivity. In this case we have 2 hosts – orcl1 and orcl2 – and we'll connect as the vagrant user using the private keys.
    $ cat ./inventory/hosts
    [orahosts]
    orcl1 ansible_host=127.0.0.1 ansible_port=2201 ansible_ssh_private_key_file=inventory/orcl1-vagrant-private_key ansible_user=vagrant
    orcl2 ansible_host=127.0.0.1 ansible_port=2202 ansible_ssh_private_key_file=inventory/orcl2-vagrant-private_key ansible_user=vagrant
  • ./roles/db_users/files/change_password.sql – a SQL script that I'll execute on the database to change the passwords. It takes 2 parameters: the username and the password:
    $ cat ./roles/db_users/files/change_password.sql
    set ver off pages 0
    alter user &1 identified by "&2";
    exit;
  • ./roles/db_users/files/exists_user.sql – a SQL script that allows verifying the existence of a user. It takes 1 argument – the username. It outputs "User exists." when the user is there, and "User {username} does not exist." when it's not.
    $ cat ./roles/db_users/files/exists_user.sql
    set ver off pages 0
    select 'User exists.' from all_users where username=upper('&1')
    union all
    select 'User '||upper('&1')||' does not exist.' from (select upper('&1') from dual minus select username from all_users);
    exit;
  • ./roles/db_users/defaults/main.yml – the defaults file for the db_users role. I use this file to define, for each host and database, the users whose passwords need to be changed:
    $ cat ./roles/db_users/defaults/main.yml
    ---
    
      db_users:
        - name: TEST1
          host: orcl1
          env: ". ~/.bash_profile && . ~/TEST1.env > /dev/null"
          pwdstore: "orcl1/TEST1/"
          os_user: oracle
          become_os_user: yes
          users:
            - dbsnmp
            - system
        - name: TEST2
          host: orcl2
          env: ". ~/.bash_profile && . ~/TEST2.env > /dev/null"
          pwdstore: "orcl2/TEST2/"
          os_user: oracle
          become_os_user: yes
          users:
            - sys
            - system
            - ctxsys
        - name: TEST3
          host: orcl2
          env: ". ~/.bash_profile && . ~/TEST3.env > /dev/null"
          pwdstore: "orcl2/TEST3/"
          os_user: oracle
          become_os_user: yes
          users:
            - dbsnmp

    In this data structure we define everything that needs to be known to connect to the database and change the passwords. Each entry in the list contains the following data:

    • name – just a descriptive name of the entry in this list, normally it would be the name of the database that’s described below.
    • host – the host on which the database resides. It should match one of the hosts defined in ./inventory/hosts.
    • env – how to set the correct environment to be able to connect to the DB (currently it requires sysdba connectivity).
    • pwdstore – the path to the folder in the passwordstore where the new passwords will be stored.
    • os_user and become_os_user – these are used in case sudo to another user on the target host is required. In a typical configuration, I connect to the target host using a dedicated user for Ansible, and then sudo to the DB owner. If Ansible connects to the DB owner directly, then become_os_user should be set to "no".
    • users – this is the list of all users for which the passwords need to be changed.

    As you can see, this structure greatly enhances flexibility and reusability, because adding new databases, hosts or users to the list is done by a simple change to the "db_users:" structure in this defaults file. In this example, the dbsnmp and system passwords are rotated for TEST1@orcl1; the sys, system and ctxsys passwords for TEST2@orcl2; and dbsnmp for TEST3@orcl2.

  • ./roles/db_users/tasks/main.yml – this is the task file of the db_users role: the soul of the playbook and the main part that changes the passwords, based on the contents of the defaults file described above. Instead of pasting the whole file at once, I'll break it up task by task and provide some comments about what's being done.
    • populate host_db_users – this task simply filters the whole db_users data structure that's defined in the defaults file, and creates a host_db_users fact with only the DBs that belong to the host the task is currently running on. It would also be possible to filter the list with the Ansible "when" conditional, but in that case a lot of "skipped" entries are displayed when the task is executed, so I prefer filtering the list before it's even passed to the Ansible task.
      ---
      
        - name: populate host_db_users
          set_fact: host_db_users="{{ db_users | selectattr('host','equalto',ansible_hostname) | list }}"
      
    • create directory for target on db hosts – for each unique combination of os_user and become_os_user on the target host, an "ansible" directory is created. A json_query is used here to filter just the os_user and become_os_user attributes that are needed. It would also work with with_items: "{{ host_db_users }}", but in that case the outputs become cluttered, as all the attributes are displayed during execution.
        - name: create directory for target on db hosts
          file:
            path: "ansible"
            state: directory
          become_user: "{{ item.os_user }}"
          become: "{{ item.become_os_user }}"
          with_items: "{{ host_db_users | json_query('[*].{os_user: os_user, become_os_user: become_os_user }') | unique | list }}"
      
    • copy sql scripts to db_hosts – the missing scripts are copied from the Ansible control host to the target "ansible" directories. "with_nested" is the way to create a nested loop in Ansible.
        - name: copy sql scripts to db_hosts
          copy:
            src="{{ item[1] }}"
            dest=ansible/
            mode=0644
          become_user: "{{ item[0].os_user }}"
          become: "{{ item[0].become_os_user }}"
          with_nested:
            - "{{ host_db_users | json_query('[*].{os_user: os_user, become_os_user: become_os_user }') | unique | list }}"
            - ['files/change_password.sql','files/exists_user.sql']
      
    • verify user existence – I'm using the shell module to execute the sql script after setting the environment. The outputs are collected in the "exists_output" variable. This task will never fail and will never show as "changed", because failed_when and changed_when are set to "false".
        - name: verify user existence
          shell: |
             {{ item[0].env }} && \
             sqlplus -S / as sysdba \
             @ansible/exists_user.sql {{ item[1] }}
          register: exists_output
          become_user: "{{ item[0].os_user }}"
          become: "{{ item[0].become_os_user }}"
          with_subelements:
            - "{{ host_db_users |json_query('[*].{env: env, os_user: os_user, users: users, become_os_user: become_os_user }') }}"
            - users
          failed_when: false
          changed_when: false
      
    • User existence results – this task fails when any of the users doesn't exist, and displays which user it was. It is done in a separate task to produce cleaner output. If you don't want the play to fail when some users are missing (i.e. you want to continue changing passwords for the existing users), this task can simply be commented out, or the "failed_when: false" line can be uncommented.
        - name: User existence results
          fail: msg="{{ item }}"
          with_items: "{{ exists_output.results|rejectattr('stdout','equalto','User exists.')|map(attribute='stdout')|list }}"
          #failed_when: false
      
    • generate and change the user passwords – finally, this is the task that actually changes the passwords. A successful password change is detected by checking the output from the SQL script, which should produce "User altered." The rather complex use of lookups is there for a reason: the passwordstore lookup can also generate passwords, but it's not possible to define the character classes the new password should contain, whereas the "password" lookup allows defining these. Additionally, the 1st character is generated from "ascii_letters" only, as there are usually some applications that "don't like" passwords starting with digits (this is why generating the 1st character of the password is separated from the remaining 11 characters). And lastly, the "passwordstore" lookup is used with the "userpass=" parameter to pass the generated password to the passwordstore and store it there (it also keeps the previous passwords). This part could use some improvement, as in some cases different rules for the generated password complexity may be required. The password change outputs are recorded in "change_output", which is checked in the last task.
        - name: generate and change the user passwords
          shell: |
             {{ item[0].env }} && \
             sqlplus -S / as sysdba \
             @ansible/change_password.sql \
             {{ item[1] }} \"{{ lookup('passwordstore',item[0].pwdstore + item[1] + ' create=true overwrite=true userpass=' +
                                       lookup('password','/dev/null chars=ascii_letters length=1') +
                                       lookup('password','/dev/null chars=ascii_letters,digits,hexdigits length=11')) }}\"
          register: change_output
          become_user: "{{ item[0].os_user }}"
          become: "{{ item[0].become_os_user }}"
          with_subelements:
            - "{{ host_db_users |json_query('[*].{env: env, os_user: os_user, users: users, pwdstore: pwdstore, become_os_user: become_os_user}') }}"
            - users
          failed_when: false
          changed_when: "'User altered.' in change_output.stdout"
      
    • Password change errors – The “change_output” data are verified here, and failed password changes are reported.
         # fail if the password change failed.
        - name: Password change errors
          fail: msg="{{ item }}"
          with_items: "{{ change_output.results|rejectattr('stdout','equalto','\nUser altered.')|map(attribute='stdout')|list }}"
      
It really works!

Now, when you know how it’s built – it’s time to show how it works!
Please pay attention to the following:

  • The password store is empty at first
  • The whole password change playbook completes in 12 seconds
  • The tasks on both hosts are executed in parallel (see the order of execution feedback for each task)
  • The passwordstore contains the password entries after the playbook completes, and they can be retrieved by using the pass command
$ pass
Password Store

$ time ansible-playbook -i inventory/hosts chpwd.yml

PLAY [password change automation] *******************************************************

TASK [Gathering Facts] *****************************************************************
ok: [orcl1]
ok: [orcl2]

TASK [db_users : populate host_db_users] ***********************************************
ok: [orcl1]
ok: [orcl2]

TASK [db_users : create directory for target on db hosts] ******************************
changed: [orcl1] => (item={'become_os_user': True, 'os_user': u'oracle'})
changed: [orcl2] => (item={'become_os_user': True, 'os_user': u'oracle'})

TASK [db_users : copy sql scripts to db_hosts] *****************************************
changed: [orcl1] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/change_password.sql'])
changed: [orcl2] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/change_password.sql'])
changed: [orcl1] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/exists_user.sql'])
changed: [orcl2] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/exists_user.sql'])

TASK [db_users : verify user existance] ************************************************
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'sys'))
ok: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'dbsnmp'))
ok: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'system'))
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'system'))
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'ctxsys'))
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST3.env > /dev/null'}, u'dbsnmp'))

TASK [db_users : User existance results] ***********************************************

TASK [db_users : generate and change the user passwords] *******************************
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST2/', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'sys'))
changed: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl1/TEST1/', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'dbsnmp'))
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST2/', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'system'))
changed: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl1/TEST1/', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'system'))
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST2/', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'ctxsys'))
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST3/', 'env': u'. ~/.bash_profile && . ~/TEST3.env > /dev/null'}, u'dbsnmp'))

TASK [db_users : Password change errors] ***********************************************

PLAY RECAP *****************************************************************************
orcl1                      : ok=6    changed=3    unreachable=0    failed=0
orcl2                      : ok=6    changed=3    unreachable=0    failed=0

real    0m12.418s
user    0m8.590s
sys     0m3.900s

$ pass
Password Store
|-- orcl1
|   |-- TEST1
|       |-- dbsnmp
|       |-- system
|-- orcl2
    |-- TEST2
    |   |-- ctxsys
    |   |-- sys
    |   |-- system
    |-- TEST3
        |-- dbsnmp

$ pass orcl1/TEST1/system
HDecEbjc6xoO
lookup_pass: First generated by ansible on 26/05/2017 14:28:50
Conclusions

For the past 2 months I've been learning Ansible and trying it on various DBA tasks. It hasn't always been a smooth ride, as I had to learn quite a lot: I hadn't been exposed much to beasts like jinja2, json_query, YAML, Python (very handy for troubleshooting) and Ansible itself before. I feel that my former PL/SQL coder's experience had created some expectations of Ansible that turned out not to be true. The biggest challenges for me were getting used to the linear execution of the playbook (whereas with PL/SQL I can call packages, functions, etc. to process data "outside" the main linear code path), and the lack of execution feedback: one has to learn to create Ansible tasks so that they either succeed or fail (no middle states like "this is a special case – process it differently"), and the amount of visual output is close to none. That does make sense to some degree; it's "automation" after all, right? Nobody should be watching :)
A separate struggle for me was working with the complex data structure that I created for storing the host/database/user information. It's a mix of YAML "dictionary" and "list", and it turned out to be difficult to process it the way I wanted – this is why I used json_query at times (although not in a very complex way in this case). There are probably simpler ways that I didn't know of (or didn't manage to find), and I'd be glad if you'd let me know of possible improvements, or even other approaches to such tasks that you have worked on and implemented.
Despite all the complaining above, I think it's really worth investing time in automating tasks like this: it really works, and once done it doesn't require much attention. Happy automating!

Categories: DBA Blogs

Introducing UEK4 and DTrace on Oracle Linux for SPARC

Wim Coekaerts - Fri, 2017-05-26 13:18

About 2 months ago we released the first version of Oracle Linux 6, Update 7 for SPARC. That was the same version of Oracle Linux used in Exadata SL6. OL6 installed on T4, T5 and T7 systems but it did not yet support the S7 processors/systems. It contained support for the various M7 processor features (DAX, ADI, crypto,...), gcc optimizations to support better code generation for SPARC, important optimizations in functions like memcpy() etc.

We also introduced support for Linux as the control domain (guest domain worked before). So this was the first time one could use Linux as the control domain with a vdiskserver, vswitch and virtual console driver. For this release we based the kernel on UEK2 (2.6.39).

The development team has been hard at work doing a number of things:

- continue to work with upstream Linux and gcc/glibc/binutils development to submit all the code changes for inclusion. Many SPARC features have already been committed upstream, and many are pending or work in progress.

- part of the work is to forward-port, so to speak, a lot of the UEK2/SPARC/Exadata features into UEK4, alongside upstream/mainline development.

- performance work, both in kernel and userspace (glibc, gcc in particular)

Today we released an updated version of the ISO image that contains UEK4 QU4 (4.1.12-94.3.2). The main reason for updating the ISO is to introduce support for the S7 processor and S7-based servers. It contains a ton of improvements over UEK2, and we also added support for DTrace.

You can download the latest version of the ISO here: http://www.oracle.com/technetwork/server-storage/linux/downloads/oracle-linux-sparc-3665558.html

The DTrace utilities can be downloaded here: http://www.oracle.com/technetwork/server-storage/linux/downloads/linux-dtrace-2800968.html
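As a quick smoke test after installing the DTrace utilities, a classic one-liner (a sketch; run it as root, and note that provider availability depends on the dtrace kernel modules being loaded) counts system calls per process:

# aggregate system calls by executable name until Ctrl-C is pressed
dtrace -n 'syscall:::entry { @calls[execname] = count(); }'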

As we add more features we will update the kernel, and we will also publish a new version of the Software Collections for Oracle Linux for SPARC with newer versions of gcc (6.x etc.), so more is coming!

We are working on things like gccgo, valgrind, node... and the yum repo on http://yum.oracle.com/ contains about 5000 RPMs.

Download it, play with it, have fun.

 

ora-01453 set transaction must be first statement of transaction when using 2 dblinks between 3 databases

Tom Kyte - Fri, 2017-05-26 05:06
Hi Tom, I have a stored procedure that returns ref cursor, the procedure runs in DB.1 (10g), reading from a remote view on DB.2 (11g), and that view is selecting union of 2 tables from DB.2 and the other table from remote DB.3 (11g). Th...
Categories: DBA Blogs

Getting ORA-29284 file read error

Tom Kyte - Fri, 2017-05-26 05:06
Hello Experts, I have below pl/sql function with that I am trying to upload the data from .xls file to pl/sql table but getting ora-29284 file read error. PL/SQL Function: CREATE OR REPLACE FUNCTION LOAD_data ( p_table in varchar2, p_dir i...
Categories: DBA Blogs

Finding records that were NOT inserted from a list of accounts

Tom Kyte - Fri, 2017-05-26 05:06
I have a list of accounts that users want me to check if they got updated in my table in a refresh or not. Is there a way to check for those records in a list that were NOT inserted after a certain date? I just need the ones not inserted: SELEC...
Categories: DBA Blogs

Extract tablename from Explain Plan Output

Tom Kyte - Fri, 2017-05-26 05:06
Hello, Our DBA team has extracted the entire PLAN output and store them in a CLOB column. This PLAN is directly associated to a particular SQL_ID. My requirement is for each SQL_ID, I want to get the list of tables (preferably comma separated) th...
Categories: DBA Blogs

How to trace sql execution with "no rows selected" result

Tom Kyte - Fri, 2017-05-26 05:06
Hi, a customer of mine needs to retrieve all queries executed and find which ones return an empty result set. That means I have to catch each query, with its bind variable values and its results. The only solution I'm thinking of is to se...
Categories: DBA Blogs

2 Different Oracle DB Version on same server

Tom Kyte - Fri, 2017-05-26 05:06
We have all the applications on Oracle 11g. One of the dev teams wants to move an application to Oracle 12c. Can we have both versions, 11g and 12c, on the same machine? Is it recommended? What are the pros and cons involved in this, pre-requisites and post-...
Categories: DBA Blogs

calling pl/sql in unix

Tom Kyte - Fri, 2017-05-26 05:06
Dear Team, Is it possible to have a script in sql and called in unix. update table < set col1= val1, col2=val2 where <condition> insert into table2 <> Declare <variable1>, <variable2> cursor <> Begin -- logic --- ...
Categories: DBA Blogs

Oracle Security Training

Pete Finnigan - Fri, 2017-05-26 05:06
Yesterday I made a short video to talk about my two-day class "How to Perform a Security Audit of an Oracle Database" and added the video to YouTube. This class is going to be delivered at a....[Read More]

Posted by Pete On 26/05/17 At 09:39 AM

Categories: Security Blogs

Interview with PeopleSoft Administrator Podcast: Cost-Based Optimizer Statistics in PeopleSoft

David Kurtz - Fri, 2017-05-26 04:56
I recently recorded another interview with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about management of Cost-Based Optimizer Statistics in PeopleSoft systems.
(19 May 2017) #81 - Database Statistics
You can listen to the podcast on psadmin.io, subscribe with your favourite podcast player, or find it in iTunes.

Another Personal Milestone at Alliance 2017

Steven Chan - Fri, 2017-05-26 02:00

I was deeply moved when I received a Lifetime Service Award from the Oracle Applications User Group (OAUG) in 2011. I still think I was awfully young for that award at the time, but I was honored and consider it an important milestone.

I reached another milestone at the recent Higher Education User Group (HEUG) Alliance conference in Las Vegas. I was in a meeting with the HEUG's Product Advisory Group when I noticed something unusual hanging on someone's nametag:

To this day, I don't think that my friends or family really understand what I do for a living.  But when my name shows up unexpectedly on a button, that's powerful positive feedback from the people who really count.

I'd like to thank the Higher Education User Group and the Oracle Applications User Group for their ongoing support for our EBS user community. I know that I speak for the entire E-Business Suite team when I say that your energy and enthusiasm inspires us to keep building the best products for you.

And, if you're an EBS user who isn't an HEUG or OAUG member, I strongly encourage you to sign up.  The cost is trivial but the benefits substantial.  In addition to running some of the most-useful independent conferences for EBS, these groups advocate tirelessly on behalf of the EBS user community. They have influenced our products and policy in countless ways and they deserve your support.

Categories: APPS Blogs

High and Maximum Availability Architectures

Anthony Shorten - Thu, 2017-05-25 17:51

One of the most common questions I get from partners is what are the best practices that Oracle recommends for implementing high availability and also business continuity. Oracle has a set of flexible architectures and capabilities to support a wide range of high availability and business continuity solutions available in the marketplace.

The Oracle Utilities Application Framework supports Oracle WebLogic, Oracle Database and related products, with features inherited from the architecture or native facilities that allow these capabilities to be implemented. In summary, the Oracle Utilities Application Framework supports the following:

  • Oracle WebLogic clustering and high availability architectures are supported natively, including support for load balancing facilities, whether hardware or software based. This support extends to the individual channels supported by the Framework and to individual J2EE resources such as JMS, data sources, MDBs, etc.
  • Oracle Coherence high availability clustering is available natively for the batch architecture. We now also support using Oracle WebLogic to cluster and manage our batch architecture (though it is exclusively used in our Oracle Cloud implementations at the moment).
  • The high availability and business continuity features of the Oracle Database are also supported. For example, it is possible to implement Oracle Notification Service support within the architecture to enable Fast Connection Failover, etc.

Oracle publishes a set of guidelines for Oracle WebLogic, Oracle Coherence and Oracle Database that can be used with Oracle Utilities Application Framework to implement high availability and business continuity solutions. Refer to the following references for this information:

review at amazon: Apache SOLR for newbies

Dietrich Schroff - Thu, 2017-05-25 14:39
Last weekend this book fell into my hands:

https://www.amazon.de/Apache-Solr-Newbies-Paul-Lawson/dp/1540604187/ref=sr_1_3?ie=UTF8&qid=1495703687&sr=8-3&keywords=apache+solr

Paul Lawson wrote a splendid introduction to Apache SOLR. At the beginning he describes the motivation behind the SOLR project:
 "Outside the firewall  [searching] is used to make money, and inside to save money."
Within 130 pages this book covers everything from installation, the administration GUI, and examples to get started (importing documents, first searches, etc.), to a quick overview of the fundamental configuration files.
If you are interested in Apache SOLR and do not know where to start, this book is the answer.
But one thing is missing: the book only covers searching XML documents. On page 42 (!) it is pointed out that you have to configure SOLR Cell. I think this was a good decision, because deep dives should come after an introduction.

If you are interested, take a look at my review at amazon.de (like all my reviews, it is written in German ;-)


Subscribe to Oracle FAQ aggregator