
DBA Blogs

Please look at latest Oct 2014 Oracle patching

Grumpy old DBA - Wed, 2014-10-15 11:23
This one looks like the real thing ... getting advice to "not skip" the patching process for a whole bunch of things included here.

I'm just saying ...
Categories: DBA Blogs

12c: Access Objects Of A Common User Non-existent In Root

Oracle in Action - Tue, 2014-10-14 23:56


In a multitenant environment, a common user is a database user whose identity and password are known in the root and in every existing and future pluggable database (PDB). Common users can connect to the root and perform administrative tasks specific to the root or PDBs. There are two types of common users :

  • All Oracle-supplied administrative user accounts, such as SYS and SYSTEM
  • User-created common users, whose names must start with C## or c##

When a PDB containing a user-created common user is plugged into another CDB, and the target CDB does not have a common user with the same name, that common user becomes a locked account in the newly plugged-in PDB.
To access such a common user’s objects, you can do one of the following:

  • Leave the user account locked and use the objects of its schema.
  • Create a common user with the same name as the locked account.
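In outline, the two approaches look like this (a sketch using the names from the demonstration below; adjust user, password and table names to your environment):

```sql
-- Option 1: leave C##NXISTS locked; reach its objects through a local user
create user luser identified by oracle;      -- local user inside the PDB
grant create session to luser;
grant select on c##nxists.test to luser;     -- object-level grants as needed

-- Option 2: re-create the common user in the root (PDB closed first)
alter pluggable database pdb1_copy close;
create user c##nxists identified by oracle container=all;
alter pluggable database pdb1_copy open;     -- the account unlocks automatically
```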

Let’s demonstrate …

Current scenario:

Source CDB : CDB1
- one PDB (PDB1)
- Two common users C##NXISTS and C##EXISTS

Destination CDB : CDB2
- No PDB
- One common user C##EXISTS

- As user C##NXISTS, create and populate a table in PDB1@CDB1
- Unplug PDB1 from CDB1 and plug into CDB2 as PDB1_COPY
- Open PDB1_COPY and verify that

  • user C##NXISTS has not been created in root
  • users C##NXISTS and C##EXISTS have both been created in PDB1_COPY; the account of C##EXISTS is open whereas the account of C##NXISTS is locked.

- Unlock the C##NXISTS account in PDB1_COPY.
- Try to connect to pdb1_copy as C##NXISTS – fails with an internal error.
- Create a local user LUSER in PDB1_COPY with privileges on C##NXISTS’ table and verify that LUSER can access C##NXISTS’ table.
- Create user C##NXISTS in root with PDB1_COPY closed. The account of
C##NXISTS is automatically opened on opening PDB1_COPY.
- Try to connect to pdb1_copy as C##NXISTS – succeeds.


– Setup –

CDB1>sho con_name

CON_NAME
------------------------------
CDB$ROOT

CDB1>sho pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                       READ ONLY  NO
3 PDB1                           READ WRITE NO

CDB1>select username, common from cdb_users where username like 'C##%';

no rows selected

- Create 2 common users in CDB1
    - C##NXISTS
    - C##EXISTS

CDB1>create user C##EXISTS identified by oracle container=all;
     create user C##NXISTS identified by oracle container=all;

     col username for a30
     col common for a10
     select username, common from cdb_users where   username like 'C##%';

USERNAME                       COMMON
------------------------------ ----------
C##NXISTS                      YES
C##EXISTS                      YES
C##NXISTS                      YES
C##EXISTS                      YES

- Create user C##EXISTS  in CDB2

CDB2>sho parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                        string      cdb2

CDB2>sho pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                       READ ONLY  NO

CDB2>create user C##EXISTS identified by oracle container=all;
     col username for a30
     col common for a10

     select username, common from cdb_users where username like 'C##%';

USERNAME                       COMMON
------------------------------ ----------
C##EXISTS                      YES

- As user C##NXISTS, create and populate a table in PDB1@CDB1

CDB1>alter session set container=pdb1;
     alter user C##NXISTS quota unlimited on users;
     create table C##NXISTS.test(x number);
     insert into C##NXISTS.test values (1);

- Unplug PDB1 from CDB1

CDB1>alter session set container=cdb$root;
     alter pluggable database pdb1 close immediate;
     alter pluggable database pdb1 unplug into '/home/oracle/pdb1.xml';

CDB1>select name from v$datafile where con_id = 3;


- Plug in PDB1 into CDB2 as PDB1_COPY

CDB2>create pluggable database pdb1_copy using '/home/oracle/pdb1.xml'
     file_name_convert =

sho pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                       READ ONLY  NO
3 PDB1_COPY                      MOUNTED

- Verify that the C##NXISTS user is not visible while PDB1_COPY is closed

CDB2>col username for a30
col common for a10
select username, common from cdb_users where username like 'C##%';

USERNAME                       COMMON
------------------------------ ----------
C##EXISTS                      YES

- Open PDB1_COPY and verify that
  . users C##NXISTS and C##EXISTS have both been created in the PDB.
  . The account of C##EXISTS is open whereas the account of C##NXISTS is locked.

CDB2>alter pluggable database pdb1_copy open;
col account_status for a20
select con_id, username, common, account_status from cdb_users  where username like 'C##%' order by con_id, username;

    CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
---------- ------------------------------ ---------- --------------------
1 C##EXISTS                      YES        OPEN
3 C##EXISTS                      YES        OPEN
3 C##NXISTS                      YES        LOCKED

– Unlock user C##NXISTS account on PDB1_COPY

CDB2>alter session set container = pdb1_copy;
     alter user C##NXISTS account unlock;
     col account_status for a20
     select con_id, username, common, account_status from cdb_users   where username like 'C##%' order by con_id, username;

    CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
---------- ------------------------------ ---------- --------------------
 3 C##EXISTS                      YES        OPEN
 3 C##NXISTS                      YES        OPEN

– Try to connect as C##NXISTS to pdb1_copy – fails with internal error

CDB2>conn c##nxists/oracle@localhost:1522/pdb1_copy
ORA-00600: internal error code, arguments: [kziaVrfyAcctStatinRootCbk:
[C##NXISTS], [], [], [], [], [], [], [], [], [], []

- Since user C##NXISTS cannot connect to pdb1_copy anyway, we can lock the account again

CDB2>conn sys/oracle@localhost:1522/pdb1_copy as sysdba
     alter user C##NXISTS account lock;

     col account_status for a20
     select username, common, account_status from dba_users     where username like 'C##%' order by username;

USERNAME                       COMMON     ACCOUNT_STATUS
------------------------------ ---------- --------------------
C##EXISTS                      YES        OPEN
C##NXISTS                      YES        LOCKED

– Now if C##NXISTS tries to log in to PDB1_COPY, ORA-28000 is returned instead of the internal error

CDB2>conn c##nxists/oracle@localhost:1522/pdb1_copy
ORA-28000: the account is locked

How to access C##NXISTS objects?


SOLUTION – I : Create a local user in PDB1_COPY with appropriate object privileges on C##NXISTS’ table

CDB2>conn sys/oracle@localhost:1522/pdb1_copy  as sysdba

     create user luser identified by oracle;
     grant select on c##nxists.test to luser;
     grant create session to luser;

– Check that the local user can access common user C##NXISTS’ tables

CDB2>conn luser/oracle@localhost:1522/pdb1_copy;
     select * from c##nxists.test;

         X
----------
         1

SOLUTION – II :  Create the common user C##NXISTS in CDB2

- Check that C##NXISTS has not been created in CDB$root

CDB2>conn sys/oracle@cdb2 as sysdba
     col account_status for a20
     select con_id, username, common, account_status from cdb_users    where username like 'C##%' order by con_id, username;

    CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
---------- ------------------------------ ---------- --------------------
1 C##EXISTS                      YES        OPEN
3 C##EXISTS                      YES        OPEN
3 C##NXISTS                      YES        LOCKED

- Try to create user C##NXISTS with PDB1_COPY open – fails

CDB2>create user c##NXISTS identified by oracle;
create user c##NXISTS identified by oracle
ERROR at line 1:
ORA-65048: error encountered when processing the current DDL statement in pluggable database PDB1_COPY
ORA-01920: user name 'C##NXISTS' conflicts with another user or role  name

- Close PDB1_COPY, create user C##NXISTS in root, and verify that the account is automatically unlocked on opening PDB1_COPY

CDB2>alter pluggable database pdb1_copy close;
     create user c##NXISTS identified by oracle;
     alter pluggable database pdb1_copy open;

     col account_status for a20
     select con_id, username, common, account_status from cdb_users   where username like 'C##%' order by con_id, username;

    CON_ID USERNAME                       COMMON     ACCOUNT_STATUS
---------- ------------------------------ ---------- --------------------
1 C##EXISTS                      YES        OPEN
1 C##NXISTS                      YES        OPEN
3 C##EXISTS                      YES        OPEN
3 C##NXISTS                      YES        OPEN

– Connect to PDB1_COPY as C##NXISTS after granting the appropriate privilege – succeeds

CDB2>conn c##nxists/oracle@localhost:1522/pdb1_copy
ORA-01045: user C##NXISTS lacks CREATE SESSION privilege; logon denied
Warning: You are no longer connected to ORACLE.

CDB2>conn sys/oracle@localhost:1522/pdb1_copy as sysdba
     grant create session to c##nxists;
     conn c##nxists/oracle@localhost:1522/pdb1_copy

CDB2>sho con_name

CON_NAME
------------------------------
PDB1_COPY

CDB2>sho user
USER is "C##NXISTS"

CDB2>select * from test;

         X
----------
         1



Related Links:


Oracle 12c Index




Copyright © ORACLE IN ACTION [12c: Access Objects Of A Common User Non-existent In Root], All Right Reserved. 2014.

The post 12c: Access Objects Of A Common User Non-existent In Root appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 3

Pythian Group - Tue, 2014-10-14 14:59

Today’s blog post is part three of seven in a series dedicated to Deploying Private Cloud at Home, where I will demonstrate how to configure OpenStack Identity service on the controller node. We have already configured the required repo in part two of the series, so let’s get started on configuring Keystone Identity Service.

  1. Install keystone on the controller node.
    yum install -y openstack-keystone python-keystoneclient

    OpenStack uses a message broker to coordinate operations and status information among services. The message broker service typically runs on the controller node. OpenStack supports several message brokers, including RabbitMQ, Qpid, and ZeroMQ. I am using Qpid as it is available on most distros.

  2. Install Qpid Messagebroker server.
    yum install -y qpid-cpp-server

    Now modify the Qpid configuration file to disable authentication by setting the line below in /etc/qpidd.conf:

    auth=no

    Now start the qpid service and enable it to start at boot

    chkconfig qpidd on
    service qpidd start
  3. Now configure keystone to use the MySQL database
    openstack-config --set /etc/keystone/keystone.conf \
       database connection mysql://keystone:YOUR_PASSWORD@controller/keystone
  4. Next, create the keystone database and database user by running the queries below at the MySQL prompt as root.
    CREATE DATABASE keystone;
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD';
  5. Now create database tables
    su -s /bin/sh -c "keystone-manage db_sync" keystone

    Currently we don’t have any user accounts that can communicate with the OpenStack services and the Identity Service. So we will set up an authorization token to use as a shared secret between the Identity Service and the other OpenStack services, and store it in the configuration file.

    ADMIN_TOKEN=$(openssl rand -hex 10)
    echo $ADMIN_TOKEN
    openstack-config --set /etc/keystone/keystone.conf DEFAULT \
       admin_token $ADMIN_TOKEN
  6. Keystone uses PKI tokens by default. Now create the signing keys and certificates, and restrict access to the generated data
    keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
    chown -R keystone:keystone /etc/keystone/ssl
    chmod -R o-rwx /etc/keystone/ssl
  7. Start and enable the keystone identity service to begin at startup
    service openstack-keystone start
    chkconfig openstack-keystone on

    The Keystone Identity Service also stores expired tokens in the database. We will create the crontab entry below to purge the expired tokens

    (crontab -l -u keystone 2>&1 | grep -q token_flush) || \
    echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone
  8. Now we will create the admin user for keystone and define roles for the admin user
    export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
    keystone user-create --name=admin --pass=Your_Password --email=Your_Email
    keystone role-create --name=admin
    keystone tenant-create --name=admin --description="Admin Tenant"
    keystone user-role-add --user=admin --tenant=admin --role=admin
    keystone user-role-add --user=admin --role=_member_ --tenant=admin
    keystone user-create --name=pythian --pass=Your_Password --email=Your_Email
    keystone tenant-create --name=pythian --description="Pythian Tenant"
    keystone user-role-add --user=pythian --role=_member_ --tenant=pythian
    keystone tenant-create --name=service --description="Service Tenant"
  9. Now we create a service entry for the identity service
    keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
    keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
    --publicurl=http://controller:5000/v2.0 \
    --internalurl=http://controller:5000/v2.0 \
    --adminurl=http://controller:35357/v2.0
  10. Verify Identity service installation
  11. Request an authentication token by using the admin user and the password you chose for that user
    keystone --os-username=admin --os-password=Your_Password \
      --os-auth-url=http://controller:35357/v2.0 token-get
    keystone --os-username=admin --os-password=Your_Password \
      --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 \
      token-get
  12. We will save the required parameters in a file as below
    export OS_USERNAME=admin
    export OS_PASSWORD=Your_Password
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://controller:35357/v2.0
  13. Next, check that everything is working and that Keystone interacts with the OpenStack services. We will source the file to load the Keystone parameters
    source /root/
  14. List Keystone tokens using:
    keystone token-get
  15. List Keystone users using
    keystone user-list
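As an aside, the token_flush crontab entry in step 7 uses a handy idempotent shell pattern: append a line to a file only if no matching line is already there, so re-running the setup never duplicates the entry. In isolation (with a hypothetical temp file standing in for the cron file) it looks like this:

```shell
#!/bin/sh
# Append an entry to a file only if no matching line exists yet;
# running the guarded append twice must not duplicate the entry.
ENTRY='@hourly /usr/bin/keystone-manage token_flush'
FILE=/tmp/demo-crontab

rm -f "$FILE"; touch "$FILE"                             # start from an empty file
grep -qF "$ENTRY" "$FILE" || echo "$ENTRY" >> "$FILE"    # appends: entry missing
grep -qF "$ENTRY" "$FILE" || echo "$ENTRY" >> "$FILE"    # no-op: entry present
```

The `-F` flag makes grep treat the entry as a fixed string, so special characters in the command line cannot be misread as a regular expression.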

If all of the above commands give you output, your Keystone Identity Service is all set up, and you can proceed to the next steps. In part four, I will discuss how to configure and set up the Image Service to store images.

Categories: DBA Blogs

Oracle E-Business Suite Updates From OpenWorld 2014

Pythian Group - Tue, 2014-10-14 08:29

Oracle OpenWorld has always been my most exciting conference to attend. I always see high energy levels everywhere, and it kind of revs me up to tackle new upcoming technologies. This year I concentrated on attending mostly Oracle E-Business Suite release 12.2 and Oracle 12c Database-related sessions.

On the Oracle E-Business Suite side, I started off with the Oracle EBS Customer Advisory Board Meeting, with great presentations on new features like the new touch-friendly iPad interface in Oracle EBS 12.2.4. This can be enabled by setting the “Self Service Personal Home Page mode” profile value to “Framework Simplified”. We also discussed some pros and cons of the new downtime mode feature of the adop online patching utility, which allows release update packs (like the 12.2.3 and 12.2.4 patches) to be applied without starting up a new online patching session. I will cover more details about that in a separate blog post. In the meantime, take a look at the simplified home page of my 12.2.4 sandbox instance.

Oracle EBS 12.2.4 Simplified Interface

Steven Chan’s presentation on the EBS Certification Roadmap announced upcoming support for the Chrome browser on Android tablets, IE11, Oracle Unified Directory, and more. Oracle did not extend any support deadlines for Oracle EBS 11i or R12 this time, so to all EBS customers on 11i: it’s time to move to R12.2. I also attended a good session on testing best practices for Oracle E-Business Suite, which had a good slide on some extra testing required during the online patching cycle. I am planning to write a separate blog post with more details on that, as it is an important piece of information that one might overlook. Oracle also announced a new product called Flow Builder, part of the Oracle Application Testing Suite, which helps users test functional flows in Oracle EBS.

On the 12c database side, I attended great sessions by Christian Antognini on Adaptive Query Optimization and Markus Michalewicz’s sessions on 12c RAC operational best practices and RAC Cache Fusion internals. Markus’s Cache Fusion presentation had some great recommendations on using _gc_policy_minimum instead of turning off DRM completely with _gc_policy_time=0. There is also now a way to control DRM of an object using the package DBMS_CACHEUTIL.

I also attended sessions on some new, upcoming technologies that are picking up in the Oracle space, like Oracle NoSQL, Oracle Big Data SQL, and the Oracle Data Integrator Hadoop connectors. These products seem to have a great future ahead and a good chance of becoming mainstream on the data warehousing side of businesses.

Categories: DBA Blogs

Let the Data Guard Broker control LOG_ARCHIVE_* parameters!

The Oracle Instructor - Tue, 2014-10-14 08:20

When using the Data Guard Broker, you don’t need to set any LOG_ARCHIVE_* parameter for the databases that are part of your Data Guard configuration. The broker does that for you. Forget about what you may have heard about VALID_FOR – you don’t need that with the broker. Actually, setting any of the LOG_ARCHIVE_* parameters with an enabled broker configuration might even confuse the broker and lead to warning or error messages. Let’s look at a typical example involving the redo log transport mode. There is a broker configuration enabled with one primary database prima and one physical standby physt. The broker config files are mirrored on each site, and spfiles are in use that the broker (the DMON background process, to be precise) can access:

Overview

When connecting to the broker, you should always connect to a DMON running on the primary site. The only exception from this rule is when you want to do a failover: that must be done connected to the standby site. I will now change the redo log transport mode to sync for the standby database. It helps to think of the log transport mode as an attribute (respectively a property) of a certain database in your configuration, because that is how the broker sees it, too.


[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> edit database physt set property logxptmode=sync;
Property "logxptmode" updated

In this case, physt is a standby database that is receiving redo from primary database prima, which is why the LOG_ARCHIVE_DEST_2 parameter of that primary was changed accordingly:

[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release Production on Tue Sep 30 17:21:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="physt", LGWR SYNC AFF
						 IRM delay=0 optional compressi
						 on=disable max_failure=0 max_c
						 onnections=1 reopen=300 db_uni
						 que_name="physt" net_timeout=3
						 0, valid_for=(all_logfiles,pri

Configuration for physt

The mirrored broker configuration files on all involved database servers contain that logxptmode property now. There is no new entry in the spfile of physt required. The present configuration allows now to raise the protection mode:

DGMGRL> edit configuration set protection mode as maxavailability;

The next broker command is done to support a switchover later on while keeping the higher protection mode:

DGMGRL> edit database prima set property logxptmode=sync;
Property "logxptmode" updated

Notice that this doesn’t lead to any spfile entry; only the broker config files store that new property. In case of a switchover, prima will then receive redo with sync.

Configuration for prima

Now let’s do that switchover and see how the broker automatically ensures that the new primary physt will ship redo to prima:


DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
    prima - Primary database
    physt - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:

DGMGRL> switchover to physt;
Performing switchover NOW, please wait...
New primary database "physt" is opening...
Operation requires shutdown of instance "prima" on database "prima"
Shutting down instance "prima"...
ORACLE instance shut down.
Operation requires startup of instance "prima" on database "prima"
Starting instance "prima"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "physt"

All I did was the switchover command, and without me specifying any LOG_ARCHIVE* parameter, the broker did it all like this picture shows:

Configuration after switchover

In particular, the spfile of the physt database now got the new entry:


[oracle@uhesse2 ~]$ sqlplus sys/oracle@physt as sysdba

SQL*Plus: Release Production on Tue Oct 14 15:43:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="prima", LGWR SYNC AFF
						 IRM delay=0 optional compressi
						 on=disable max_failure=0 max_c
						 onnections=1 reopen=300 db_uni
						 que_name="prima" net_timeout=3
						 0, valid_for=(all_logfiles,pri

Not only is it unnecessary to specify any of the LOG_ARCHIVE* parameters – it is actually a bad idea to do so. The guideline here is: let the broker control them! Otherwise it will at least complain about it with warning messages. So, as an example of what you should not do:

[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release Production on Tue Oct 14 15:57:11 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> alter system set log_archive_trace=4096;

System altered.

Although that is the correct syntax, the broker now gets confused, because that parameter setting is not in line with what is in the broker config files. Accordingly, it triggers a warning:

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
    physt - Primary database
    prima - Physical standby database
      Warning: ORA-16792: configurable property value is inconsistent with database setting

Fast-Start Failover: DISABLED

Configuration Status:

DGMGRL> show database prima statusreport;
               prima    WARNING ORA-16714: the value of property LogArchiveTrace is inconsistent with the database setting

In order to resolve that inconsistency, I will do it with a broker command as well – which is what I should have done instead of the alter system command in the first place:

DGMGRL> edit database prima set property LogArchiveTrace=4096;
Property "logarchivetrace" updated
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
    physt - Primary database
    prima - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:

Thanks to a question from Noons (I really appreciate comments!), let me add the complete list of initialization parameters that the broker is supposed to control. Most, but not all, of them are LOG_ARCHIVE* parameters.
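You can also inspect what the broker manages for a given database yourself: `show database verbose` prints all configurable properties, and the ones shown here map directly to initialization parameters. A sketch (output abbreviated; the property values are illustrative, taken from the configuration above):

```
DGMGRL> show database verbose prima;
...
  Properties:
    LogXptMode                      = 'sync'
    LogShipping                     = 'ON'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    LogArchiveTrace                 = '4096'
    StandbyFileManagement           = 'AUTO'
    DbFileNameConvert               = ''
    LogFileNameConvert              = ''
    ...
```

Editing any of these with `edit database ... set property ...` keeps the broker config files and the spfile consistent, which is exactly what the alter system command above failed to do.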


Tagged: Data Guard, High Availability
Categories: DBA Blogs

Digital Learning – LVC: It’s the attitude, stupid!

The Oracle Instructor - Mon, 2014-10-13 06:33

The single most important factor for successful digital learning is the attitude of both the instructor and the attendees towards the course format. Delivery of countless Live Virtual Classes (LVCs) for Oracle University made me realize that. There are technical prerequisites, of course: a reliable and fast network connection and a good headset are mandatory, else the participant is doomed from the start. Other prerequisites are the same as for traditional courses: good course material, a working lab environment for hands-on practices and, last but not least, knowledgeable instructors. For that part, notice that we have the very same courseware, lab environments and instructors for LVCs as for our classroom courses at Oracle University education centers. The major difference is in your head :-)

Delivering my first couple of LVCs, I felt quite uncomfortable with the new format. Accordingly, my performance was not as good as usual. By now, I consider the LVC format totally adequate for my courses, and that attitude enables me to deliver them with the same quality as my classroom courses. Actually, they are even better in some respects: I always struggle to produce clean sketches with readable handwriting on the whiteboard. Now look at this MS Paint sketch from one of my Data Guard LVCs:

Data Guard Real-Time Apply

Attendees get all my sketches by email afterwards if they like.

In short: Because I’m happy delivering through LVC today, I’m now able to do it with high quality. The attitude defines the outcome.

Did you ever have a teacher in school that you just disliked for some reason? It was hard to learn anything from that teacher, right? Even if that person was competent.

So this is also true on the side of the attendee: the attitude defines the outcome. If you take an LVC thinking “This cannot work!”, chances are that you are right just because of your mindset. When you attend an LVC with an open mind – even after some initial trouble because you need to familiarize yourself with the learning platform and the way things are presented there – it is much more likely that you will benefit from it. You may even like it better than classroom courses, because you can attend from home without the time and expense it takes to travel :-)

Some common objections against LVC I have heard from customers and my usual responses:

An LVC doesn’t deliver the same amount of interaction as a classroom course!

That is not necessarily so: you are in a small group (mostly fewer than 10) that is constantly in an audio conference. Unmute yourself and say anything you like, just as in a classroom. Additionally, you have a chatbox available. This is sometimes extremely helpful, especially with non-native speakers in the class :-) You can easily exchange email addresses using the chatbox as well, and stay in touch even after the LVC.

I have no appropriate working place to attend an LVC!

Then you have no appropriate working place at all for anything that requires a certain amount of concentration. Talk to your manager about it – maybe something like a quiet room is available during the LVC.

I cannot keep up my attention when staring at the computer screen the whole day!

Of course not, that is why we have breaks and practices in between the lessons.

Finally, I would love to hear about your thoughts and experiences with online courses! What is your attitude towards Digital Learning?

Tagged: Digital Learning, LVC
Categories: DBA Blogs

Partner Webcast Special Edition – Oracle Mobile Application Framework: Developer Challenge

Win up to $6,000 by developing a mobile application with the Oracle Mobile Application Framework! Mobile technology has changed the way we live and work. As mobile interfaces take the lion’s...

We share our skills to maximize your revenue!
Categories: DBA Blogs

grumpy old dba goes to the mountains ( have to ski somewhere )?

Grumpy old DBA - Sat, 2014-10-11 14:47
So in a slight reversal of submitting so many presentation abstracts and getting rejected so many times, Rocky Mountain Training Days 2015 will be featuring me! Wow, pumped – yikes, this should be a ton of fun!

Ok, to be totally honest, Maria Colgan is keynoting, not me (shocking news, still trying to get over it).

Rocky Mountain training days 2015

Looks like a top batch of speakers and topics ... I only have to present at the same time as people like Alex Gorbachev / Bill Inmon / Heli Helskyaho ... so maybe the room will be a little sparse, but I will be attempting to bribe people into attending my session, so we will see how that goes.

Thanks RMOUG!

PS on another topic more news soon on our GLOC 2015 conference coming up in May 2015!

Categories: DBA Blogs

OCP 12C – SQL Tuning

DBA Scripts and Articles - Fri, 2014-10-10 12:23

What’s new? Oracle 12c introduces a major update called Adaptive Query Optimization, which is based on: adaptive execution plans and adaptive statistics. These two functionalities are used to improve execution plans by using dynamic statistics gathered during the first part of the SQL execution. This allows the optimizer to create more efficient plans than those using … Continue reading OCP 12C – SQL Tuning →

The post OCP 12C – SQL Tuning appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Deploying a Private Cloud at Home — Part 2

Pythian Group - Fri, 2014-10-10 08:34

Today’s blog post is part two of seven in a series dedicated to Deploying Private Cloud at Home, where I will demonstrate how to do basic configuration setup to get started with OpenStack. In my first blog post, I explained why I decided to use OpenStack.

I am using a two-node setup in my environment, but you can still follow these steps and configure everything on a single node. The configuration below reflects my setup; kindly modify it as per your subnet and settings.

  • My home network has a subnet of
  • My home PC, which I am turning into the controller node, has an IP of
  • My KVM hypervisor, which I am turning into the compute node, has an IP of
  1. It is advisable to have DNS set up in your intranet, but in case you don’t, you need to modify the /etc/hosts file on both the controller and compute nodes so the OpenStack services can communicate with each other, as below
    #Controller node controller
    #Compute node compute
  2. OpenStack services require a database to store information. You can use any database you are familiar with; I am using MySQL/MariaDB, as I am familiar with it. On the controller node, we will install the MySQL client and server packages, and the Python library.
     yum install -y mysql mysql-server MySQL-python
  3. Enable InnoDB, the UTF-8 character set, and UTF-8 collation by default. To do that, we need to modify /etc/my.cnf and set the following keys under the [mysqld] section.
    default-storage-engine = innodb 
    collation-server = utf8_general_ci 
    init-connect = 'SET NAMES utf8' 
    character-set-server = utf8
  4. Start and enable the MySQL services
    service mysqld start
    chkconfig mysqld on
  5. Finally, set the root password for MySQL database. If you need further details about configuring the MySQL root password, there are many resources available online.
  6. On the compute node we need to install the MySQL Python library
    yum install -y MySQL-python
  7. Set up RDO repository on both controller and compute nodes
    yum install -y
  8. I am using CentOS 6.2, so I need to have the EPEL repo as well. This step is not required if you are using a distro other than RHEL, CentOS, Scientific Linux, etc.
    yum install
  9. Install OpenStack utilities on both nodes and get started.
    yum install openstack-utils
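The IP addresses from step 1 did not survive in this post, so here is a sketch of what the finished /etc/hosts entries might look like, using hypothetical addresses on a common 192.168.1.0/24 home subnet (substitute your own values). It writes to a scratch file so you can inspect the result before copying the lines into the real /etc/hosts on both nodes:

```shell
# Hypothetical addresses for a 192.168.1.0/24 home subnet -- adjust to your
# own network before copying these lines into /etc/hosts on BOTH nodes.
cat > /tmp/hosts.openstack <<'EOF'
192.168.1.10    controller    # home PC / controller node
192.168.1.11    compute       # KVM hypervisor / compute node
EOF
cat /tmp/hosts.openstack
```

If you do have DNS on your intranet, you can skip this entirely; all that matters is that both hostnames resolve from both nodes.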
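Step 5 leaves the root-password setup to outside resources; one common non-interactive approach is sketched below. The password value is obviously a placeholder, and `mysql_secure_installation` is the interactive alternative, which also removes anonymous users and the test database:

```shell
# Placeholder password -- choose your own strong value.
MYSQL_ROOT_PW='s3cret-change-me'

# On a fresh install the MySQL root account has no password yet, so this
# sets one; the check keeps the sketch harmless on hosts without MySQL.
if command -v mysqladmin >/dev/null 2>&1; then
    mysqladmin -u root password "$MYSQL_ROOT_PW"
else
    echo "mysqladmin not found -- run this on the controller node"
fi
```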

Stay tuned for the remainder of my series, Deploying a Private Cloud at Home. In part three, we will continue configuring OpenStack services.

Categories: DBA Blogs

SQL Saturday Bulgaria 2014

Pythian Group - Fri, 2014-10-10 08:22


This Saturday October 11, I will be speaking at SQL Saturday Bulgaria 2014 in Sofia. It’s my first time in the country and I’m really excited to be part of another SQL Saturday :)

I will be speaking about Buffer Pool Extension, a new feature on SQL Server 2014. If you want to learn a little more about the new SQL Server version, don’t hesitate to attend the event. Looking forward to seeing you there!

Categories: DBA Blogs

Log Buffer #392, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-10-10 08:19

It seems it's all about cloud these days. Even the hardware is being marketed with the cloud in perspective. Databases like Oracle, SQL Server and MySQL are ahead in the cloud game, and this Log Buffer Edition covers it all.

Oracle:

Oracle Database 12c was launched over a year ago delivering the next-generation of the #1 database, designed to meet modern business needs, providing a new multitenant architecture on top of a fast, scalable, reliable, and secure database platform.

Oracle OpenWorld 2014 Session Presentations Now Available.

Today, Oracle is using big data technology and concepts to significantly improve the effectiveness of its support operations, starting with its hardware support group.

Generating Sales Cloud Proxies using Axis? Getting errors?

How many page views can Apex sustain when running on Oracle XE?

SQL Server:

Send emails using SSIS and SQL Server instead of application-level code.

The public perception is that, when something is deleted, it no longer exists. Often that's not really the case; the data you serve up to the cloud can be stored out there indefinitely, no matter how hard you try to delete it.

Every day, out in the various online forums devoted to SQL Server, and on Twitter, the same types of questions come up repeatedly: Why is this query running slowly? Why is SQL Server ignoring my index? Why does this query run quickly sometimes and slowly at others?

You need to set up backup and restore strategies to recover data or minimize the risk of data loss in case a failure happens.

Improving the Quality of SQL Server Database Connections in the Cloud

MySQL:

Low-concurrency performance for updates and the Heap engine: MySQL 5.7 vs previous releases.

Database Automation – Private DBaaS for MySQL, MariaDB and MongoDB with ClusterControl.

Removing Scalability Bottlenecks in the Metadata Locking and THR_LOCK Subsystems in MySQL 5.7.

The EXPLAIN command is one of MySQL’s most useful tools for understanding query performance. When you EXPLAIN a query, MySQL will return the plan created by the query optimizer.

Shinguz: Migration between MySQL/Percona Server and MariaDB.

Categories: DBA Blogs

Difference between 2014 and 2015 cadillac srx

Ameed Taylor - Fri, 2014-10-10 01:21
If you can't recall any of Cadillac's small cars of the past, consider yourself lucky, as neither the Opel Omega-based Catera nor the Chevy Cavalier-based Cimarron evokes particularly fond memories. Fortunately, all that matters now is that Cadillac's latest compact, the ATS, stands as an excellent entry in a class full of overachieving sport sedans.

It's no secret that Cadillac has aimed the rear-wheel-drive ATS squarely at the well-rounded BMW 3 Series, which has defined the segment for decades. The ATS's exterior dimensions largely mirror those of the 3 Series, and the car offers nice build quality, feisty performance and an involving character together with a supple ride, much like the benchmark Bimmer. Cadillac's newest model also offers an intuitive electronic interface for operating all the interior comfort gadgets, which is an important feature in this segment of luxury cars.

The ATS stacks up well against its rivals. On the highway, it supplies good steering feel and a light-footed, smartly tuned ride. Helping the sharp handling is the fact that this Caddy is the lightest sedan in its class (by 70-150 pounds, depending on trim). Further adding to the ATS's athleticism is its near-ideal 50/50 weight distribution between the front and rear wheels.

With a trio of engine choices on hand, performance ranges from lukewarm to engaging. The base 2.5-liter four serves as the price and fuel-economy leader, even if its 202-horsepower output lags behind the base engines found in the competition. Meanwhile, the turbocharged 2.0-liter inline-4 packs a superb midrange punch and is the only option in the ATS range that can be had with a manual gearbox. With 321 hp, the vivacious V6 offers a sweet soundtrack and is intelligently matched to a very responsive automatic transmission.

There are a few minor gripes with the ATS. Enthusiasts may yearn for a manual gearbox with the top engine, while the back seats and trunk are substantially less spacious than what some rivals provide. Indeed, this segment isn't exactly short on talent, either. The 2013 BMW 3 Series still takes top honors by virtue of its superior base powertrain and even more engaging driving dynamics, though it is typically more expensive. We're also slightly partial to the similarly well-balanced Audi A4, the refined Mercedes-Benz C-Class and the value-packed, if not as polished, Infiniti G sedan. Overall, though, the 2013 Cadillac ATS is a very strong contender in the very competitive segment of compact sport sedans.

2015 cadillac escalade premium
The ATS is a five-passenger, luxury-oriented sport sedan offered in four trim levels: base, Luxury, Performance and Premium.

Standard features on the base trim include 17-inch alloy wheels, heated mirrors, automatic headlights, cruise control, dual-zone automatic climate control, six-way power front seats with power lumbar, premium vinyl (leatherette) upholstery, a tilt-and-telescoping steering wheel, OnStar, Bluetooth phone connectivity and a seven-speaker Bose sound system with satellite radio, an iPod/USB interface and an auxiliary audio jack.

The Luxury trim adds run-flat tires, keyless entry/ignition, remote engine start, eight-way power front seats, front and rear parking sensors, a rearview camera, an auto-dimming rearview mirror, leather seating, driver memory settings, a 60/40-split folding rear seat (with pass-through), HD radio, Bluetooth audio streaming and the CUE infotainment interface.

The Performance trim (not available with the 2.5-liter engine) further adds dual exhaust outlets, a Driver Awareness package (forward collision warning, rear cross-traffic alert, lane departure warning, automatic wipers and rear-seat side airbags), an active air grille, xenon headlights, an upgraded 10-speaker Bose surround-sound system (with a CD player), front sport seats (with driver-side bolster adjustment) and a fixed rear seat with pass-through.

Stepping up to the Premium trim (not available with the 2.5-liter engine) adds 18-inch wheels, a navigation system, a color head-up display and the 60/40-split folding rear seat. A Premium model with rear-wheel drive also comes with summer tires, a sport-tuned suspension, adaptive suspension dampers and a limited-slip rear differential.

Many of the features that are standard on the upper trim levels are available as options on the lower trims, and a few other optional packages are also available. The Driver Assist package includes the features from the Driver Awareness package and adds adaptive cruise control, blind-spot monitoring, collision preparation with brake assist and the color head-up display. The Cold Weather package includes heated front seats and a heated steering wheel. The Track Performance package adds an engine oil cooler and upgraded brake pads. Other options include different wheels, a sunroof and a trunk cargo organizer.

when will the 2015 cadillac escalade be available
The 2.5 models come with a 2.5-liter four-cylinder engine that produces 202 hp and 190 pound-feet of torque. The 2.0 Turbo models come with a turbocharged 2.0-liter four-cylinder rated at 272 hp and 260 lb-ft of torque. The 3.6 models come with a 3.6-liter V6 that cranks out 321 hp and 274 lb-ft of torque.

All engines come matched to a six-speed automatic transmission except for the 2.0 Turbo, which can also be had with a six-speed manual. Rear-wheel drive is standard across the board, with all-wheel drive optional for the 2.0 Turbo and 3.6-liter engines.

In Edmunds testing, a rear-drive ATS 2.0T with the manual went from zero to 60 mph in 6.3 seconds. A rear-drive ATS 3.6 Premium with the automatic accelerated from zero to 60 mph in 5.7 seconds. Both times are average among similarly powered entry-level sport sedans.

EPA-estimated fuel economy for the ATS 2.5 stands at 22 mpg city/33 mpg highway and 26 mpg combined. The V6 is estimated to achieve 19/28/26 with rear-wheel drive, and Cadillac claims the Turbo will get the same with an automatic transmission. With all-wheel drive, the ATS V6 drops to 18/26/21.
build 2015 cadillac escalade
Standard safety features for the ATS include antilock disc brakes, traction control, stability control, active front head restraints, front-seat side and knee airbags and full-length side curtain airbags. Also standard is OnStar, which includes automatic crash notification, on-demand roadside assistance, remote door unlocking, stolen vehicle assistance and turn-by-turn navigation. Optional are the aforementioned Driver Awareness and Driver Assist packages.

In Edmunds brake testing, an ATS 3.6 Premium came to a stop from 60 mph in an impressively short 108 feet. A 2.0T stopped in an average distance of 113 feet.
2015 cadillac deville price
Inside its cabin, the ATS boasts plenty of high-quality materials, together with tasteful wood and metal accents. The available CUE infotainment interface features large icons and operates like an iPhone or iPad, which is to say you use it by tapping, flicking, swiping or spreading your fingers, making it familiar to many users. In addition, haptic feedback lets you know when you've pressed a virtual button by pulsing when you touch it.

Up front, the seats do a nice job of holding one in place during spirited drives, and it is quite easy to find a comfortable driving position. Oddly, the optional sport seats do not offer much more in the way of lateral support for the driver, despite their power-adjustable bolsters.

Rear-seat headroom is good, but knee room is tight for taller people. Despite a wide opening, the trunk offers just 10.2 cubic feet of capacity, downright stingy for this segment. Fortunately, some trims feature a 60/40-split folding rear seat, which helps in this regard.
2015 cadillac escalade pictures
The ATS is an impressive all-around performer, thanks to a poised ride, sure-footed cornering and excellent response from the steering and brakes. The 2.5-liter engine is smooth, but it offers tepid acceleration compared to other entry-level powertrains, notably that of the BMW 328i. Opt for one of the other ATS engines, however, and you will have no complaints, as they supply thrust more in line with this Cadillac's athletic personality. Even though enthusiasts may lament the lack of a manual transmission for the V6, the six-speed automatic is hard to fault. Switched to sport mode, the automatic knows just when to hold a gear and provides smooth, rev-matched downshifts right on time, every time.

Even with its sporting calibration, the ATS takes neglected city streets in stride, absorbing the shock of potholes and broken pavement without upsetting the car or its occupants. As a result, the compact Cadillac makes for a pleasant daily driver that can also provide plenty of entertainment on a Sunday morning drive.

Categories: DBA Blogs

Partner Webcast – Oracle Database 12c ( Are you ready for the Future of the Database?

Oracle Database 12c was launched over a year ago delivering the next-generation of the #1 database, designed to meet modern business needs, providing a new multitenant architecture on top of a fast,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

What is Continuous Integration?

Pythian Group - Thu, 2014-10-09 10:44

Most companies want to deploy features faster, and fix bugs more quickly; at the same time, a stable product that delivers what the users expected is crucial to winning and keeping the trust of those users. At face value, stability and speed appear to be in conflict: developers can either spend their time on features or on stability. In reality, problems delivering stability, as well as problems implementing new features, are both related to a lack of visibility. Developers can't answer a very basic question: what will be impacted by my change?

When incompatible changes hit the production servers as a result of bug fixes or new features, they have to be tracked down and resolved.  Fighting these fires is unproductive, costly, and prevents developers from building new features.

The goal of Continuous Integration (CI) is to break out of the mentality of firefighting—it gives developers more time to work on features, by baking stability into the process through testing.

Sample Workflow
  1. Document the intended feature
  2. Write one or more integration tests to validate that the feature functions as desired
  3. Develop the feature
  4. Release the feature

This workflow doesn't include a manual integration step; code goes out automatically when all the tests pass. Since all the tests can be run automatically by a testing system like Jenkins, a failure in any test, even one outside the developer's control, constitutes a break which must be fixed before continuing. Of course, in some cases users follow paths other than those designed and explicitly tested by developers, and bugs happen. New tests are required to validate that bugs are fixed, and these contribute to a library of tests which collectively increase confidence in the codebase. Most importantly, the library of tests limits the scope of any bug, which increases the confidence of developers to move faster.
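As a minimal sketch of that idea (the function name and expected output are hypothetical stand-ins for a real feature), a CI server such as Jenkins would run a script like this on every commit and treat a non-zero exit status as a broken build:

```shell
# Stand-in for the entry point of the feature under test.
feature_under_test() {
    echo "hello, $1"
}

# The "integration test": assert the observable behaviour, and exit
# non-zero on regression so the CI server marks the build as broken.
out=$(feature_under_test "world")
if [ "$out" = "hello, world" ]; then
    echo "PASS"
else
    echo "FAIL: got '$out'" >&2
    exit 1
fi
```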

Testing is the Secret Sauce

As the workflow illustrates, the better the tests, the more stable the application.  Instead of trying to determine which parts of the application might be impacted by a change, the tests can prove that things still work, as designed.


Continuous Integration is just one of the many ways our DevOps group engages with clients. We also build clouds and solve difficult infrastructure problems. Does that sound interesting to you? Want to come work with us? Get in touch!

Categories: DBA Blogs

Using Flume - Flexible, Scalable, and Reliable Data Streaming by Hari Shreedharan; O'Reilly Media

Surachart Opun - Thu, 2014-10-09 02:37
Hadoop is an open-source software framework for storage and large-scale processing of data sets on clusters of commodity hardware. But how do you deliver logs to Hadoop HDFS? Apache Flume is an open-source tool that integrates with HDFS and HBase, and it is a good choice for real-time collection of log data from front-end or logging systems.
Apache Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It uses a simple data model: Source => Channel => Sink.
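That model can be seen in the canonical single-agent example from the Flume User Guide, where a netcat source feeds a logger sink through an in-memory channel (the agent and component names here are arbitrary):

```properties
# flume-conf.properties: one agent (a1) with one source, channel and sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Source: listen for lines of text on a TCP port
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1

# Channel: buffer events in memory between source and sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

# Sink: log events at INFO level (swap in an hdfs sink for real use)
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1
```

Started with `flume-ng agent --conf-file flume-conf.properties --name a1`, anything typed into `nc localhost 44444` then flows source => channel => sink.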
It's a good time to introduce a good book about Flume: Using Flume - Flexible, Scalable, and Reliable Data Streaming by Hari Shreedharan (@harisr1234). It is organized into eight chapters: the basics of Apache Hadoop and Apache HBase, the idea of streaming data using Apache Flume, the Flume model (sources, channels, sinks), and more on interceptors, channel selectors, sink groups and sink processors, plus chapters on getting data into Flume and on planning, deploying, and monitoring Flume.

This book explains how to use Flume. It gives a good grounding in Apache Hadoop and Apache HBase before introducing the Flume data flow model. Readers should know some Java, because the examples in the book are written in Java and are easier to follow with that background. It's a good book for anyone who wants to deploy Apache Flume and build custom components.
The author devotes a chapter to each part of the Flume data flow model, so readers can jump straight to the part they need: a reader who wants to know about sinks can read just Chapter 5. In addition, Flume has a lot of features, and readers will find examples of them throughout the book. Each chapter has a references section, which makes it easy, especially in the ebook, to quickly find out more.
The illustrations in the book help readers see the big picture of using Flume and suggest ideas for developing it further in their own systems or projects.
In short, readers will learn how Flume operates, how to configure, deploy and monitor a Flume cluster, and how to customize the examples to develop Flume plugins and custom components for their specific use cases.
  • Learn how Flume provides a steady rate of flow by acting as a buffer between data producers and consumers
  • Dive into key Flume components, including sources that accept data and sinks that write and deliver it
  • Write custom plugins to customize the way Flume receives, modifies, formats, and writes data
  • Explore APIs for sending data to Flume agents from your own applications
  • Plan and deploy Flume in a scalable and flexible way—and monitor your cluster once it’s running
Book: Using Flume - Flexible, Scalable, and Reliable Data Streaming
Author: Hari Shreedharan
Written by: Surachart Opun
Categories: DBA Blogs

Index Compression Part VI: 12c Index Advanced Compression Block Dumps (Tumble and Twirl)

Richard Foote - Thu, 2014-10-09 01:01
Sometimes, a few pictures (or in this case, index block dumps) are better than a whole bunch of words :) In my previous post, I introduced the new Advanced Index Compression feature, whereby Oracle automatically determines how to best compress an index. I showed a simple example of an indexed column that had sections of index entries that were […]
Categories: DBA Blogs

11 Tips To Get Your Conference Abstract Accepted

Ready for some fun!? It's that time of year again and the competition will be intense. The "call for abstracts" for a number of Oracle Database conferences is about to close.

The focus of this posting is how you can get a conference abstract accepted.

As a mentor, Track Manager and active conference speaker I've been helping DBAs get their abstracts accepted for many years. If you follow my 11 tips below, I'm willing to bet you will get a free pass to any conference you wish in any part of the world.

1. No Surprises! 
The Track Manager wants no surprises, great content and a great presentation. Believe me when I say, they are looking for ways to reduce the risk of a botched presentation, a cancellation or a no-show. Your abstract submission is your first way to show you are serious and will help make the track incredibly awesome.

Tip: In all your conference communications, demonstrate a commitment to follow through.

2. Creative Title.
The first thing everyone sees is the title. I can personally tell you, if the title does not pique my curiosity without sounding stupid, then unless I know the speaker is popular I will not read the abstract. Why do I do this? Because as a Track Manager, I know conference attendees will do the same thing! And as a Track Manager, I want attendees to want to attend sessions in my track.

Tip: Find two people, read the title to them and ask what they think. If they say something like, "What are you going to talk about?" that's bad. Rework the title.

3. Tell A Story
The abstract must tell a compelling story. Oracle conferences are not academic conferences! There needs to be a problem and a solution, with some drama woven into the story.

Tip: People forget bullet points, but they never forget a good story.

4. Easy To Read
The abstract must be easy to review. The abstract reviewers may have over a hundred abstracts to review. Make it a good quick read for the reviewers and your chances increase.

Tip: Have your computer read your abstract back to you. If you don't say, "Wow!" rework the abstract. 

5. Be A Grown-Up
You can increase the perception that you will physically show up and put on a great show at the conference by NOT putting emoji, bullet points, your name and title, or a pitch for a product or service into your abstract. NEVER copy/paste from a PowerPoint outline into the abstract or outline. (I've seen people do this!)

Tip: Track Managers do not want to babysit you. They want an adult who will help make their track great.

6. Submit Introductory Level Abstracts
I finally figured this out a couple of years ago. Not everyone is ready for a detailed understanding of cache buffer chain architecture, diagnosis and solution development. Think of it from a business perspective: your market (audience) will be larger if your presentation is less technical. If this bothers you, read my next point.

Tip: Submit both an introductory level version and advanced level version of your topic.

7. Topics Must Be Filled
Not even the Track Manager knows what people will submit. And you do not know what the Track Manager is looking for, or what other people are submitting. Mash this together and it means you must submit more than one abstract. I know you really, really want to present on topic X. But would you rather not have an abstract accepted at all?

Tip: Submit abstracts on multiple topics. It increases your chances of being accepted.

8. Submit Abstract To Multiple Tracks
This is similar to submitting both an introductory and an advanced version of your abstract. Here's an example: if there is a DBA Bootcamp track and a Performance & Internals track, craft your Bootcamp version to have a more foundational/core feel to it, and craft your Performance & Internals version to feel more technical and advanced.

Do not simply change the title; the abstracts cannot be the same. If the conference managers or the Track Manager feel you are trying to game the conference, you present a risk to the conference and their track, and your abstracts will be rejected. So be careful and thoughtful.

Tip: Look for ways to adjust your topic to fit into multiple tracks.

9. Great Outline Shows Commitment
If the reviewers have read your title and abstract, they are taking your abstract seriously. Now is the time to close the deal by demonstrating you will put on a great show. And this means you already have in mind an organized and well-thought-out delivery. You convey this with a fantastic outline. I know it is difficult to create an outline, BUT the reviewers also know this, AND having a solid outline demonstrates to them that you are serious, will show up, and will put on a great show.

Tip: Develop your abstract and outline together. This strengthens both and develops a kind of package the reviewers like to see.

10. Learning Objectives Show Value
You show the obvious value of your topic through the learning objectives. Personally, I use these to help keep me focused on my listener, not just on what I'm interested in at the moment. Because I love my work, I tend to think everyone else does too... not so. I must force myself to answer the question, "Why would a DBA care about this topic?"

Tip: Develop your learning objectives by asking yourself, "When my presentation is over, what do I want the attendees to remember?"

11. Submit About Problems You Solved
Submit on the topics you have personally explored and found fascinating. Every year, every DBA has had to drill deep into at least one problem. This concentrated effort means you know the topic very well. And this means you are qualified to tell others about it! People love to hear from people who are fascinated about something. Spread the good news resulting from a "bad" experience.

Tip: Submit on topics you have explored and are fascinated with.

How Many Abstracts Should I Submit?
It depends on the conference, but for a big North America conference like ODTUG, RMOUG and IOUG I suggest at least four.

Based on what I wrote above, pick three topics, perhaps create both an introductory and an advanced version, and look to see if it makes sense to submit to multiple tracks. That means you'll probably submit at least four abstracts. It's not as bad as it sounds, because you will only have perhaps three core abstracts; all the others are modifications to fit a specific need. Believe me, when you receive the acceptance email, it will all be worth it!

See you at the conference!


Categories: DBA Blogs