Feed aggregator

CREATE_CHUNKS_BY_SQL

Tom Kyte - Sat, 2017-05-27 17:46
How do I frame the select query that will divide the result set into the required chunks using CREATE_CHUNKS_BY_SQL with by_rowid => true? For example: I have a table containing 100 records and my select query fetches 50 records after filtration; I want ...
Categories: DBA Blogs

Undo Data files resize

Tom Kyte - Sat, 2017-05-27 17:46
Hi, I have one undo tablespace with 4 undo datafiles in my production database (Oracle 11g) and a standby DB. Each datafile's size is 30GB but only 1GB is used, so can I resize (shrink) all datafiles without downtime? Please reply me with desc...
Categories: DBA Blogs

How to fetch results even when one of the DB LINK Fails

Tom Kyte - Sat, 2017-05-27 17:46
Hi Tom, hope you are doing good. I have a requirement to get values from a view that was built from 4 other views combined by UNION. Each of the 4 views refers to and gets data over a different DB link. So when the main view is executed it still needs to fe...
Categories: DBA Blogs

unique vs. non unique index

Tom Kyte - Sat, 2017-05-27 17:46
Hi, I searched in the archives for my question but I was unable to find an answer and I do apologize for asking what seems like such an easy question. I have always assumed that unique indexes were faster than non-unique indexes but I have never bee...
Categories: DBA Blogs

12c Sequence - nopartition/partition

Tom Kyte - Sat, 2017-05-27 17:46
What is the meaning of a new parameter when creating a sequence? From dbms_metadata.get_ddl: CREATE SEQUENCE "XY"."MY_SEQUENCE" MINVALUE 1000 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 NOCACHE NOORDER NOCYCLE NOPARTITI...
Categories: DBA Blogs

Rows locks from select for update clause

Tom Kyte - Sat, 2017-05-27 17:46
Hi Tom, from my knowledge of Oracle, I understand that the SELECT FOR UPDATE clause acquires row locks. In that case, if in a given table my query updates one set of data, the same query should be able to update a different set of data by using select f...
Categories: DBA Blogs

Building my own Container Cloud Services on top of Oracle Cloud IaaS

Marcelo Ochoa - Sat, 2017-05-27 17:37
Two weeks ago, at the ArOUG Cloud Day in Buenos Aires, I presented an introduction to Oracle Container Cloud Services (OCCS) and Oracle IaaS Compute management using the console and the API.
In that presentation I showed how to implement your own Container Cloud Services (CCS) directly on top of IaaS compute.
Let's compare OCCS with my own CCS; here's how they look:
[Screenshots: OCCS welcome page / My CCS welcome page]
They look similar :). My own CCS is implemented using the Portainer.io project.
Going deeper into both implementations, I can summarize the pros and cons as:
OCCS:
  • Pros
    • Easy manage
    • Pre-configured templates
    • Fast jump-start, 1-click deploy
    • Support for using official repository of Oracle images
  • Cons:
    • Host OS not configurable (e.g. disk, semaphores, etc.)
    • Old Docker version (1.12)
    • No Swarm support
    • Basic orchestration features

My CCS:
  • Pros
    • Latest Docker/Swarm version 17.04.0-ce
    • Full Swarm support, scaling, upgrade, load-balancing
    • One-click deploy
    • Public templates (Portainer.io and LinuxServer.io)
    • Graphical scaling Swarm services
    • Console log/stats for services
    • Full custom software/hardware selection
  • Cons
    • Oracle official repositories only available from command line
    • Registry with login not supported in graphical interface, only command line
    • A little complex deployment (scripts)
In my opinion, the main problem with OCCS is that you can't touch low-level details of the implementation. For example, you can't change the host OS setting vm.max_map_count=262144 in /etc/sysctl.conf, so you will never get a cluster of Elasticsearch 5.x up and running. On top of that, the supported Docker version (1.12) is quite old compared with the latest features included in 17.04.0-ce.
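For reference, this is all it takes on a host you control (a sketch using standard Linux sysctl tooling; 262144 is the documented minimum for Elasticsearch 5.x):
# Raise the mmap count limit on the Docker host (requires root).
sudo sysctl -w vm.max_map_count=262144
# Persist the setting across reboots.
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf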
On the other side, my CCS's best feature is that it supports Swarm, plus command-line operation if you connect to the host OS using ssh. With Swarm support you have a serious implementation for a Docker datacenter solution, especially compared with other orchestration solutions like Kubernetes; see this great post about how Docker Swarm exceeds Kubernetes performance at scale.
If you want to test my CCS yourself, I extended the scripts from my previous post, Managing Oracle Cloud IaaS - The API way, to include a Portainer console up and running after deployment of the ES cluster and the Swarm Visualizer.
The scripts are available at my GitHub repository; basically, Portainer is deployed at the end of the deploy-cluster.sh script using this command line:
docker run -d --name portainer -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer -H unix:///var/run/docker.sock
If you grant access to port 9000 on the Swarm master host (node5 in the examples), you will get access to the Portainer console as shown in the screenshot above. Note that Portainer has access to the Unix socket /var/run/docker.sock to perform Swarm commands graphically.
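As a side note, on a Swarm cluster the same console could also be deployed as a Swarm service pinned to a manager node; a hypothetical equivalent (flags as in Docker 17.04):
docker service create --name portainer --publish 9000:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer -H unix:///var/run/docker.sock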
You can also directly manage other nodes of the cluster in a native Docker way by adding endpoints using the TLS certificates generated during the deploy-machines.sh step.
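As a sketch of what that native access looks like (the node name and certificate paths are illustrative, assuming the TLS material from deploy-machines.sh was copied to $HOME/.docker/node1):
# Query a remote Docker engine directly, authenticating with TLS client certificates.
docker --tlsverify \
  --tlscacert=$HOME/.docker/node1/ca.pem \
  --tlscert=$HOME/.docker/node1/cert.pem \
  --tlskey=$HOME/.docker/node1/key.pem \
  -H tcp://node1:2376 info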
Finally, to see Portainer in action, I recorded this simple walkthrough, shown below along with the OCCS presentation.
[Screenshots: Swarm nodes / Swarm services scale up/down operation]




Doing RDBMS hot full backup using RMan when running on Docker

Marcelo Ochoa - Sat, 2017-05-27 07:54
I think many databases are going into production now using Docker, especially with the official support provided by Oracle, which allows pulling Docker images from the official Docker Store.
If you are using a custom image built with Oracle's official scripts, you can do a hot full backup using RMAN as described in this post.
We will test using a container started as:
[mochoa@localhost ols-official]$ docker run -d --privileged=true --name test --hostname test --shm-size=1g -p 1521:1521 -p 5500:5500 -e ORACLE_SID=TEST -e ORACLE_PDB=PDB1 -v /etc/timezone:/etc/timezone:ro -e TZ="America/Argentina/Buenos_Aires" -v /home/data/db/test:/opt/oracle/oradata oracle/database:12.1.0.2-ee
Note that datafiles and other RDBMS persistent data are stored at /home/data/db/test host directory.
To connect as SYSDBA to the above running container, do:
[mochoa@localhost ols-official]$ docker exec -ti test bash
[oracle@test ~]$ sqlplus "/ as sysdba"
First, check if your database is running in archive log mode:
SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;
LOG_MODE
------------
NOARCHIVELOG
If not, enable it using the steps described in this page: log in using bash and perform the following steps connected as SYSDBA:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;
LOG_MODE
------------
ARCHIVELOG
You can force the RDBMS to generate a log file to see which directory is used for redo log archiving, for example:
SQL> ALTER SYSTEM SWITCH LOGFILE;
The destination is usually defined by the parameter log_archive_dest, but if it is empty the Docker image writes archived logs to the container directory:
/opt/oracle/oradata/fast_recovery_area/${ORACLE_SID}/archivelog/yyyy_mm_dd/
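To double-check where archived logs are going, you can query the settings from SQL*Plus; for example (a sketch using the same full-path style as the rman call below, with paths as in the 12.1.0.2 EE image):
docker exec -i test /opt/oracle/product/12.1.0.2/dbhome_1/bin/sqlplus -S / as sysdba <<EOF
show parameter log_archive_dest
archive log list
EOF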
Once you have your database up and running in archive log mode, a full daily backup using the RMAN utility can be configured in your host's /etc/cron.daily/ directory as:
[mochoa@localhost ols-official]$ cat /etc/cron.daily/backup-db.sh
#!/bin/bash
docker exec test /opt/oracle/product/12.1.0.2/dbhome_1/bin/rman target=/ cmdfile='/opt/oracle/oradata/backup_full_compressed.sh' log='/opt/oracle/oradata/backup_archive.log'
where backup_full_compressed.sh is an RMAN script such as:
delete force noprompt obsolete;
run {
   configure controlfile autobackup on;
   configure default device type to disk;
   configure device type disk parallelism 1;
   configure controlfile autobackup format for device type disk clear;
   allocate channel c1 device type disk;
   backup format '/opt/oracle/oradata/backup/prod/%d_%D_%M_%Y_%U' as
   compressed backupset database;
}
delete force noprompt obsolete;
During the full backup you can see RMAN's output in the host file:
/home/data/db/test/backup_archive.log
or the container file:
/opt/oracle/oradata/backup_archive.log
It looks like:
Recovery Manager: Release 12.1.0.2.0 - Production on Sat May 27 09:18:54 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: TEST (DBID=2242310144)
RMAN> delete force noprompt obsolete;
2> run
3> {
4> configure controlfile autobackup on;
5> configure default device type to disk;
6> configure device type disk parallelism 1;
7> configure controlfile autobackup format for device type disk clear;
8> allocate channel c1 device type disk;
9> backup format '/opt/oracle/oradata/backup/prod/%d_%D_%M_%Y_%U' as compressed backupset database;
10> }
11> delete force noprompt obsolete;
12>
using target database control file instead of recovery catalog
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=365 device type=DISK
....
Deleting the following obsolete backups and copies:
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           5      26-MAY-17
  Backup Piece       5      26-MAY-17          /opt/oracle/oradata/backup/prod/TEST_26_05_2017_05s57dn5_1_1
Backup Set           6      26-MAY-17
  Backup Piece       6      26-MAY-17          /opt/oracle/oradata/backup/prod/TEST_26_05_2017_06s57dti_1_1
Backup Set           7      26-MAY-17
  Backup Piece       7      26-MAY-17          /opt/oracle/oradata/backup/prod/TEST_26_05_2017_07s57e1g_1_1
Archive Log          3      27-MAY-17          /opt/oracle/oradata/fast_recovery_area/TEST/archivelog/2017_05_27/o1_mf_1_48_dllv53d2_.arc
Backup Set           8      26-MAY-17
  Backup Piece       8      26-MAY-17          /opt/oracle/oradata/fast_recovery_area/TEST/autobackup/2017_05_26/o1_mf_s_945010852_dljvbpfq_.bkp
deleted backup piece
backup piece handle=/opt/oracle/oradata/backup/prod/TEST_26_05_2017_05s57dn5_1_1 RECID=5 STAMP=945010405
deleted backup piece
backup piece handle=/opt/oracle/oradata/backup/prod/TEST_26_05_2017_06s57dti_1_1 RECID=6 STAMP=945010610
deleted backup piece
backup piece handle=/opt/oracle/oradata/backup/prod/TEST_26_05_2017_07s57e1g_1_1 RECID=7 STAMP=945010736
deleted archived log
archived log file name=/opt/oracle/oradata/fast_recovery_area/TEST/archivelog/2017_05_27/o1_mf_1_48_dllv53d2_.arc RECID=3 STAMP=945076211
deleted backup piece
backup piece handle=/opt/oracle/oradata/fast_recovery_area/TEST/autobackup/2017_05_26/o1_mf_s_945010852_dljvbpfq_.bkp RECID=8 STAMP=945010854
Deleted 5 objects

Recovery Manager complete.
And that's all; when your host's cron.daily script finishes, your full backup is at /home/data/db/test/backup/prod. Note that 3.7GB of datafiles produced only 695MB of backup files, and it took just a few minutes.
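If you want to be sure those backups are actually usable, RMAN can check them without touching the database; a minimal sketch along the same lines as the cron job above:
# Validate that the backup pieces needed for a restore are present and readable.
docker exec -i test /opt/oracle/product/12.1.0.2/dbhome_1/bin/rman target=/ <<EOF
RESTORE DATABASE VALIDATE;
EOF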


Carbonated Java & JavaScript Stored Procedures

Kuassi Mensah - Fri, 2017-05-26 17:43
Carbonated Java Stored Procedures

For accessing JSON collections and documents without any knowledge of SQL, Oracle furnishes the SODA for Java API. It allows convenient access and navigation using the dot notation.

How to use SODA for Java in Java Stored Procedures? I have posted the steps, the code samples and scripts on GitHub.
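For context, publishing such a class generally means loading it (and the SODA jar) into the database schema with loadjava and creating a call spec on top; a sketch with illustrative names, not the exact commands from the repo:
# Load the SODA jar and the compiled class into the schema, resolving references.
loadjava -user scott/tiger -resolve -verbose orajsoda.jar MySodaProc.class
The GitHub repo linked above has the complete, tested steps.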
Carbonated JavaScript Stored Procedures

Nashorn allows interoperability between Java and JavaScript. By leveraging such interoperability, I've been able to reuse SODA for Java with JavaScript Stored Procedures.

How to use SODA for Java in JavaScript Stored Procedures? I have posted the steps, the code samples and scripts on GitHub.

Enjoy!

Automating Password Rotation for Oracle Databases

Pythian Group - Fri, 2017-05-26 14:03

Password rotation is not the most exciting task in the world, and that's exactly why it's a perfect candidate for automation. Automating routine tasks like this is good for everyone – DBAs can work on something that's more exciting, companies save costs as less time is spent on changing the passwords, and there's no place for human error, either. At Pythian, we typically use Ansible for task automation, and I like it mainly because of its non-intrusive configuration (no agents need to be installed on the target servers) and its scalability (tasks are executed in parallel on the target servers). This post will briefly describe how I automated password rotation for Oracle database users using Ansible.

Overview

This blog post is not an intro to what Ansible is and how to use it; rather, it's an example of how a simple task can be automated using Ansible in a way that's scalable, flexible and easily reusable, and that also lets other tasks pick up the new passwords from a secure password store.

  • Scalability – I'd like to take advantage of Ansible's ability to execute tasks on multiple servers at the same time. For example, in a large environment of tens or hundreds of machines, a solution that executes password change tasks serially would not be suitable. Below is an example of such a “serial” task (not a real configuration, just an illustration): it “hardcodes” a few attributes (the environment file, the username and the hostname), so a separate task would be required for every user/database whose password you'd want to change:
    - hosts: ora-serv01
      remote_user: oracle
      tasks:
      - name: change password for SYS
        shell: | 
          . TEST1.env && \
          sqlplus / as sysdba @change_password.sql SYS \
          \"{{lookup('password','/dev/null length=8')}}\"
    
  • Flexible – I want to be able to adjust the list of users for which the passwords are changed, and the list of servers/databases that the user passwords are changed for in a simple way, that doesn’t include changing the main task list.
  • Reusable – this comes together with flexibility. The idea is that the playbook would be so generic, that it wouldn’t require any changes when it’s implemented in a completely separate environment (i.e. for another client of Pythian)
  • Secure password store – the new passwords are to be generated by the automated password rotation tool, and a method of storing passwords securely is required so that the new passwords can be picked up by the DBAs, application owners, or the next automated task that reconfigures the application.
The implementation

Prerequisites

I chose to do the implementation using Ansible 2.3, because it introduces the passwordstore lookup, which enables interaction with the pass utility (read more about it at Passwordstore.org). pass is very cool. It stores passwords in gpg-encrypted files, and it can also be configured to automatically push changes to a git repository, which relieves us of the headache of password distribution. The passwords can then be retrieved from git on the servers that need access to them.
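For illustration, setting up such a store could look like this (a sketch; the GPG key id and git URL are made up):
# Initialize the store, encrypting all entries to the given GPG key.
pass init "DBA-Team-Key"
# Track changes in git and attach a shared remote, so every password change is committed automatically.
pass git init
pass git remote add origin git@gitserver:dba/password-store.git
# Retrieve an entry on any host that has the repository and the GPG key.
pass show orcl1/TEST1/system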

Ansible 2.3 runs on Python 2.6, but unfortunately the passwordstore lookup requires Python 2.7, which can be an issue if the control host for Ansible runs Oracle Linux 6 or RHEL 6, as they don't provide Python 2.7 in the official yum repositories. Still, there are ways of getting it done, and I'll write another blog post about it.

So, what we’ll need is:

  • Ansible 2.3
  • jmespath plugin on Ansible control host (pip install jmespath)
  • jinja2 plugin on Ansible control host (I had to update it using pip install -U jinja2 in a few cases)
  • Python 2.7 (or Python 3.5)
  • pass utility
The Playbook

This is the whole list of files that are included in the playbook:

./chpwd.yml
./inventory/hosts
./inventory/orcl1-vagrant-private_key
./inventory/orcl2-vagrant-private_key
./roles/db_users/files/change_password.sql
./roles/db_users/files/exists_user.sql
./roles/db_users/defaults/main.yml
./roles/db_users/tasks/main.yml

Let’s take a quick look at all of them:

  • ./chpwd.yml – is the playbook and (in this case) it’s extremely simple as I want to run the password change against all defined hosts:
    $ cat ./chpwd.yml
    ---
    
      - name: password change automation
        hosts: all
        roles:
          - db_users
    
  • ./inventory/hosts, ./inventory/orcl1-vagrant-private_key, ./inventory/orcl2-vagrant-private_key – these files define the hosts and the connectivity. In this case we have 2 hosts – orcl1 and orcl2 – and we'll connect as the vagrant user using the private keys (a quick connectivity check is shown after the listing):
    $ cat ./inventory/hosts
    [orahosts]
    orcl1 ansible_host=127.0.0.1 ansible_port=2201 ansible_ssh_private_key_file=inventory/orcl1-vagrant-private_key ansible_user=vagrant
    orcl2 ansible_host=127.0.0.1 ansible_port=2202 ansible_ssh_private_key_file=inventory/orcl2-vagrant-private_key ansible_user=vagrant
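    A quick way to verify this connectivity before running the playbook (assuming the same inventory file) is Ansible's ping module:
    $ ansible -i inventory/hosts all -m ping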
  • ./roles/db_users/files/change_password.sql – A SQL script that I'll execute on the database to change the passwords. It takes 2 parameters: the username and the password:
    $ cat ./roles/db_users/files/change_password.sql
    set ver off pages 0
    alter user &1 identified by "&2";
    exit;
  • ./roles/db_users/files/exists_user.sql – A SQL script that verifies the existence of the users. It takes 1 argument – the username. It outputs “User exists.” when the user is there, and “User {username} does not exist.” when it's not.
    $ cat ./roles/db_users/files/exists_user.sql
    set ver off pages 0
    select 'User exists.' from all_users where username=upper('&1')
    union all
    select 'User '||upper('&1')||' does not exist.' from (select upper('&1') from dual minus select username from all_users);
    exit;
  • ./roles/db_users/defaults/main.yml – is the default file for the db_users role. I use this file to define the users for each host and database for which the passwords need to be changed:
    $ cat ./roles/db_users/defaults/main.yml
    ---
    
      db_users:
        - name: TEST1
          host: orcl1
          env: ". ~/.bash_profile && . ~/TEST1.env > /dev/null"
          pwdstore: "orcl1/TEST1/"
          os_user: oracle
          become_os_user: yes
          users:
            - dbsnmp
            - system
        - name: TEST2
          host: orcl2
          env: ". ~/.bash_profile && . ~/TEST2.env > /dev/null"
          pwdstore: "orcl2/TEST2/"
          os_user: oracle
          become_os_user: yes
          users:
            - sys
            - system
            - ctxsys
        - name: TEST3
          host: orcl2
          env: ". ~/.bash_profile && . ~/TEST3.env > /dev/null"
          pwdstore: "orcl2/TEST3/"
          os_user: oracle
          become_os_user: yes
          users:
            - dbsnmp

    In this data structure, we define everything that needs to be known to connect to the database and change the passwords. Each entry in the list contains the following data:

    • name – just a descriptive name of the entry in this list, normally it would be the name of the database that’s described below.
    • host – the host on which the database resides. It should match one of the hosts defined in ./inventory/hosts.
    • env – how to set the correct environment to be able to connect to the DB (currently it requires sysdba connectivity).
    • pwdstore – the path to the folder in the passwordstore where the new passwords will be stored.
    • os_user and become_os_user – these are used in case sudo to another user on the target host is required. In a typical configuration, I connect to the target host using a dedicated user for Ansible, and then sudo to the DB owner. If Ansible connects to the DB owner directly, then become_os_user should be set to “no”.
    • users – this is the list of all users for which the passwords need to be changed.

    As you can see, this structure greatly enhances flexibility and reusability, because adding new databases, hosts or users is a simple change to the “db_users:” structure in this defaults file. In this example, the dbsnmp and system passwords are rotated for TEST1@orcl1; sys, system and ctxsys for TEST2@orcl2; and dbsnmp for TEST3@orcl2.

  • ./roles/db_users/tasks/main.yml – this is the task file of the db_users role. The soul of the playbook and the main part that does the password change, depending on the contents of the defaults file described above. Instead of pasting the whole file at once, I'll break it up task by task and provide some comments about what's being done.
    • populate host_db_users – This task simply filters the whole db_users data structure that's defined in the defaults file, and creates a host_db_users fact with only the DBs that belong to the host the task is currently run on. Filtering the list with Ansible's “when” conditional would also be possible; however, in that case a lot of “skipped” entries are displayed when the task is executed, so I prefer filtering the list before it's even passed to the Ansible task.
      ---
      
        - name: populate host_db_users
          set_fact: host_db_users="{{ db_users | selectattr('host','equalto',ansible_hostname) | list }}"
      
    • create directory for target on db hosts – for each unique combination of os_user and become_os_user on the target host, an “ansible” directory is created. A json_query is used here to filter just the os_user and become_os_user attributes that are needed. It would also work with with_items: "{{ host_db_users }}", but in that case the outputs become cluttered as all the attributes are displayed during the execution.
        - name: create directory for target on db hosts
          file:
            path: "ansible"
            state: directory
          become_user: "{{ item.os_user }}"
          become: "{{ item.become_os_user }}"
          with_items: "{{ host_db_users | json_query('[*].{os_user: os_user, become_os_user: become_os_user }') | unique | list }}"
      
    • copy sql scripts to db_hosts – the missing scripts are copied from the Ansible control host to the target “ansible” directories. “with_nested” is the method used to create a nested loop in Ansible.
        - name: copy sql scripts to db_hosts
          copy:
            src="{{ item[1] }}"
            dest=ansible/
            mode=0644
          become_user: "{{ item[0].os_user }}"
          become: "{{ item[0].become_os_user }}"
          with_nested:
            - "{{ host_db_users | json_query('[*].{os_user: os_user, become_os_user: become_os_user }') | unique | list }}"
            - ['files/change_password.sql','files/exists_user.sql']
      
    • verify user existence – I'm using the shell module to execute the sql script after setting the environment. The outputs are collected in the “exists_output” variable. This task will not fail and will not show as “changed” because failed_when and changed_when are set to “false”.
        - name: verify user existence
          shell: |
             {{ item[0].env }} && \
             sqlplus -S / as sysdba \
             @ansible/exists_user.sql {{ item[1] }}
          register: exists_output
          become_user: "{{ item[0].os_user }}"
          become: "{{ item[0].become_os_user }}"
          with_subelements:
            - "{{ host_db_users |json_query('[*].{env: env, os_user: os_user, users: users, become_os_user: become_os_user }') }}"
            - users
          failed_when: false
          changed_when: false
      
    • User existence results – this task will fail when any of the users didn't exist, and will display which user it was. This is done in a separate task to produce cleaner output. If you don't want the playbook to fail when some users don't exist (i.e. continue changing passwords for the existing users), this task can simply be commented out, or the “failed_when: false” can be uncommented.
        - name: User existence results
          fail: msg="{{ item }}"
          with_items: "{{ exists_output.results|rejectattr('stdout','equalto','User exists.')|map(attribute='stdout')|list }}"
          #failed_when: false
      
    • generate and change the user passwords – finally, this is the task that actually changes the passwords. A successful password change is detected by checking the output from the sql script, which should produce “User altered.” The rather complex use of lookups is there for a reason: the passwordstore lookup can also generate passwords, but it's not possible to define the character classes that the new password should contain; the “password” lookup, however, allows defining these. Additionally, the 1st character is generated containing only “ascii_letters”, as there are usually some applications that “don't like” passwords that start with numbers (this is why generating the 1st letter of the password is separated from the remaining 11 characters). And lastly, the “passwordstore” lookup is used with the “userpass=” parameter to pass and store the generated password into the passwordstore (which also keeps the previous passwords). This part could use some improvement, as in some cases different rules for the generated password complexity may be required. The password change outputs are recorded in “change_output”, which is checked in the last task.
        - name: generate and change the user passwords
          shell: |
             {{ item[0].env }} && \
             sqlplus -S / as sysdba \
             @ansible/change_password.sql \
             {{ item[1] }} \"{{ lookup('passwordstore',item[0].pwdstore + item[1] + ' create=true overwrite=true userpass=' +
                                       lookup('password','/dev/null chars=ascii_letters length=1') +
                                       lookup('password','/dev/null chars=ascii_letters,digits,hexdigits length=11')) }}\"
          register: change_output
          become_user: "{{ item[0].os_user }}"
          become: "{{ item[0].become_os_user }}"
          with_subelements:
            - "{{ host_db_users |json_query('[*].{env: env, os_user: os_user, users: users, pwdstore: pwdstore, become_os_user: become_os_user}') }}"
            - users
          failed_when: false
          changed_when: "'User altered.' in change_output.stdout"
      
    • Password change errors – The “change_output” data are verified here, and failed password changes are reported.
         # fail if the password change failed.
        - name: Password change errors
          fail: msg="{{ item }}"
          with_items: "{{ change_output.results|rejectattr('stdout','equalto','\nUser altered.')|map(attribute='stdout')|list }}"
      
It really works!

Now, when you know how it’s built – it’s time to show how it works!
Please pay attention to the following:

  • The password store is empty at first
  • The whole password change playbook completes in 12 seconds
  • The tasks on both hosts are executed in parallel (see the order of execution feedback for each task)
  • The passwordstore contains the password entries after the playbook completes, and they can be retrieved by using the pass command
$ pass
Password Store

$ time ansible-playbook -i inventory/hosts chpwd.yml

PLAY [password change automation] *******************************************************

TASK [Gathering Facts] *****************************************************************
ok: [orcl1]
ok: [orcl2]

TASK [db_users : populate host_db_users] ***********************************************
ok: [orcl1]
ok: [orcl2]

TASK [db_users : create directory for target on db hosts] ******************************
changed: [orcl1] => (item={'become_os_user': True, 'os_user': u'oracle'})
changed: [orcl2] => (item={'become_os_user': True, 'os_user': u'oracle'})

TASK [db_users : copy sql scripts to db_hosts] *****************************************
changed: [orcl1] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/change_password.sql'])
changed: [orcl2] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/change_password.sql'])
changed: [orcl1] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/exists_user.sql'])
changed: [orcl2] => (item=[{'become_os_user': True, 'os_user': u'oracle'}, u'files/exists_user.sql'])

TASK [db_users : verify user existence] ************************************************
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'sys'))
ok: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'dbsnmp'))
ok: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'system'))
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'system'))
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'ctxsys'))
ok: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'env': u'. ~/.bash_profile && . ~/TEST3.env > /dev/null'}, u'dbsnmp'))

TASK [db_users : User existence results] ***********************************************

TASK [db_users : generate and change the user passwords] *******************************
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST2/', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'sys'))
changed: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl1/TEST1/', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'dbsnmp'))
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST2/', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'system'))
changed: [orcl1] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl1/TEST1/', 'env': u'. ~/.bash_profile && . ~/TEST1.env > /dev/null'}, u'system'))
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST2/', 'env': u'. ~/.bash_profile && . ~/TEST2.env > /dev/null'}, u'ctxsys'))
changed: [orcl2] => (item=({'become_os_user': True, 'os_user': u'oracle', 'pwdstore': u'orcl2/TEST3/', 'env': u'. ~/.bash_profile && . ~/TEST3.env > /dev/null'}, u'dbsnmp'))

TASK [db_users : Password change errors] ***********************************************

PLAY RECAP *****************************************************************************
orcl1                      : ok=6    changed=3    unreachable=0    failed=0
orcl2                      : ok=6    changed=3    unreachable=0    failed=0

real    0m12.418s
user    0m8.590s
sys     0m3.900s

$ pass
Password Store
|-- orcl1
|   |-- TEST1
|       |-- dbsnmp
|       |-- system
|-- orcl2
    |-- TEST2
    |   |-- ctxsys
    |   |-- sys
    |   |-- system
    |-- TEST3
        |-- dbsnmp

$ pass orcl1/TEST1/system
HDecEbjc6xoO
lookup_pass: First generated by ansible on 26/05/2017 14:28:50
Conclusions

For the past 2 months I've been learning Ansible and trying it for various DBA tasks. It hasn't always been a smooth ride, as I had to learn quite a lot: I hadn't been exposed much to beasts like jinja2, json_query, YAML, python (very handy for troubleshooting) and Ansible itself before. I feel that my former PL/SQL coder's experience had created some expectations of Ansible that turned out not to be true. The biggest challenges for me were getting used to the linear execution of the playbook (while with PL/SQL I can call packages, functions, etc. to process the data “outside” the main linear code line), and the lack of execution feedback, because one has to learn to create Ansible tasks in a way that they either succeed or fail (no middle states like ‘this is a special case – process it differently’). Also, the amount of visual output is close to none – which does make sense to some degree; it's “automation” after all, right? Nobody should be watching :)
A separate struggle for me was working with the complex data structure that I created for storing the host/database/user information. It's a mix of a yaml “dictionary” and a “list”, and it turned out to be difficult to process in the way I wanted – this is why I used json_query at times (although not in a very complex way in this case). There are probably simpler ways I didn't know of (or didn't manage to find), and I'd be glad if you'd let me know of possible improvements, or even other approaches to such tasks that you have worked on and implemented.
Despite all the complaining above, I think it's really worth investing time in automating tasks like this: it really works, and once done it doesn't require much attention. Happy automating!

Categories: DBA Blogs

Introducing UEK4 and DTrace on Oracle Linux for SPARC

Wim Coekaerts - Fri, 2017-05-26 13:18

About 2 months ago we released the first version of Oracle Linux 6, Update 7 for SPARC. That was the same version of Oracle Linux used in Exadata SL6. OL6 installed on T4, T5 and T7 systems but it did not yet support the S7 processors/systems. It contained support for the various M7 processor features (DAX, ADI, crypto,...), gcc optimizations to support better code generation for SPARC, important optimizations in functions like memcpy() etc.

We also introduced support for Linux as the control domain (guest domains worked before). So this was the first time one could use Linux as the control domain, with a vdiskserver, vswitch and virtual console driver. For this release we based the kernel on UEK2 (2.6.39).

The development team has been hard at work doing a number of things:

- continue to work with upstream Linux and gcc/glibc/binutils development to submit all the code changes for inclusion. Many SPARC features have already been committed upstream and many are pending/work in progress.

- part of the work is to forward-port, so to speak, a lot of the uek2/sparc/exadata features into uek4, alongside upstream/mainline development.

- performance work, both in kernel and userspace (glibc, gcc in particular)

Today, we released an updated version of the ISO image that contains UEK4 QU4 (4.1.12-94.3.2). The main reason for updating the ISO is to introduce support for the S7 processor and S7-based servers. It contains a ton of improvements over UEK2, and we also added support for DTrace.

You can download the latest version of the ISO here: http://www.oracle.com/technetwork/server-storage/linux/downloads/oracle-linux-sparc-3665558.html

The DTrace utilities can be downloaded here: http://www.oracle.com/technetwork/server-storage/linux/downloads/linux-dtrace-2800968.html
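A quick way to check that DTrace is working is the classic syscall-counting one-liner (assuming the dtrace utilities and the matching kernel modules are installed; run as root and press Ctrl-C to print the aggregation):
# Count system calls by executable name, system-wide.
dtrace -n 'syscall:::entry { @[execname] = count(); }'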

As we add more features we will update the kernel, and we will also publish a new version of the software collections for Oracle Linux for SPARC with newer versions of gcc (6.x, etc.), so more is coming!

We are working on things like gccgo, valgrind, node... and the yum repo on http://yum.oracle.com/ contains about 5000 RPMs.

Download it, play with it, have fun.

 

ora-01453 set transaction must be first statement of transaction when using 2 dblinks between 3 databases

Tom Kyte - Fri, 2017-05-26 05:06
Hi Tom, I have a stored procedure that returns ref cursor, the procedure runs in DB.1 (10g), reading from a remote view on DB.2 (11g), and that view is selecting union of 2 tables from DB.2 and the other table from remote DB.3 (11g). Th...
Categories: DBA Blogs

Getting ORA-29284 file read error

Tom Kyte - Fri, 2017-05-26 05:06
Hello Experts, I have the below PL/SQL function with which I am trying to upload data from an .xls file to a PL/SQL table, but I am getting an ORA-29284 file read error. PL/SQL Function: CREATE OR REPLACE FUNCTION LOAD_data ( p_table in varchar2, p_dir i...
Categories: DBA Blogs

Finding records that were NOT inserted from a list of accounts

Tom Kyte - Fri, 2017-05-26 05:06
I have a list of accounts that users want me to check if they got updated in my table in a refresh or not. Is there a way to check for those records in a list that were NOT inserted after a certain date? I just need the ones not inserted: SELEC...
Categories: DBA Blogs

Extract tablename from Explain Plan Output

Tom Kyte - Fri, 2017-05-26 05:06
Hello, Our DBA team has extracted the entire PLAN output and store them in a CLOB column. This PLAN is directly associated to a particular SQL_ID. My requirement is for each SQL_ID, I want to get the list of tables (preferably comma separated) th...
Categories: DBA Blogs

How to trace sql execution with "no rows selected" result

Tom Kyte - Fri, 2017-05-26 05:06
Hi, a customer of mine needs to retrieve all queries executed and find which ones return an empty result set. That means I have to catch each query, with its bind variable values and its results. The only solution I'm thinking of is to se...
Categories: DBA Blogs

2 Different Oracle DB Version on same server

Tom Kyte - Fri, 2017-05-26 05:06
We have all the applications on Oracle 11g. One of the dev teams wants to move an application to Oracle 12c. Can we have both versions, 11g and 12c, on one machine? Is it recommended? What are the pros and cons involved in this? Pre-requisites and post-...
Categories: DBA Blogs

calling pl/sql in unix

Tom Kyte - Fri, 2017-05-26 05:06
Dear Team, is it possible to have a script in SQL and call it in Unix? update table < set col1= val1, col2=val2 where <condition> insert into table2 <> Declare <variable1>, <variable2> cursor <> Begin -- logic --- ...
Categories: DBA Blogs

Oracle Security Training

Pete Finnigan - Fri, 2017-05-26 05:06
Yesterday I made a short video to talk about my two-day class "How to Perform a Security Audit of an Oracle Database" and added the video to YouTube. This class is going to be delivered at a....[Read More]

Posted by Pete On 26/05/17 At 09:39 AM

Categories: Security Blogs

Interview with PeopleSoft Administrator Podcast: Cost-Based Optimizer Statistics in PeopleSoft

David Kurtz - Fri, 2017-05-26 04:56
I recently recorded another interview with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about management of Cost-Based Optimizer Statistics in PeopleSoft systems.
(19 May 2017) #81 - Database Statistics
You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.
