Feed aggregator

Is it possible to find out the problem SQL in a procedure which was executed days ago

Tom Kyte - Wed, 2016-10-12 06:26
Hi Team, our system was written with a lot of procedures, and one of them did not perform well. This procedure includes tens of SQL statements; suppose that only 1 or 2 SQL statements in the procedure caused this problem. I mean, th...
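As an aside (my own sketch, not part of the question or its answer): if the Diagnostics Pack is licensed, AWR/ASH history kept for the retention period can point at the heaviest statements executed from inside a given procedure days later. MY_SCHEMA and MY_PROC are placeholder names.

-- Which SQL_IDs burned the most sampled time inside the procedure over the last 7 days?
SELECT h.sql_id,
       COUNT(*) AS ash_samples
FROM   dba_hist_active_sess_history h
WHERE  h.plsql_entry_object_id = (SELECT object_id
                                  FROM   dba_objects
                                  WHERE  owner = 'MY_SCHEMA'
                                  AND    object_name = 'MY_PROC'
                                  AND    object_type = 'PROCEDURE')
AND    h.sample_time > SYSTIMESTAMP - INTERVAL '7' DAY
GROUP  BY h.sql_id
ORDER  BY ash_samples DESC;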
Categories: DBA Blogs

dbca

Tom Kyte - Wed, 2016-10-12 06:26
CLSN00107: CRS-5017: The resource action "ora.dbfor.db start" encountered the following error: ORA-12546: TNS: permission denied. For more details, refer to "(:CLSN00107:)" in "/exec/products/oracle/grid/log/servfor/agent/ohasd/oraagent_...
Categories: DBA Blogs

Why was my job not running?

Tom Kyte - Wed, 2016-10-12 06:26
Dear all, I have tried to create a job (wizard) in Oracle SQL Developer 4.0 as follows: ----------SUMMARY------------------ Job Name - TEST Enabled - true Description - Job Class - null Type of Job - PL/SQL Block When to Execute Job - IM...
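As an aside (my own sketch, not from the thread): these are the first things usually checked for a scheduler job that never ran; TEST is the job name from the summary above.

-- Is the job enabled, and when is it scheduled to run next?
SELECT job_name, enabled, state, next_run_date
FROM   user_scheduler_jobs
WHERE  job_name = 'TEST';

-- Did any runs happen at all, and with what outcome?
SELECT job_name, status, actual_start_date, additional_info
FROM   user_scheduler_job_run_details
WHERE  job_name = 'TEST'
ORDER  BY actual_start_date DESC;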
Categories: DBA Blogs

Documentum story – dm_LogPurge and dfc.date_format

Yann Neuhaus - Wed, 2016-10-12 05:05

What is the relation between dfc.date_format and dm_LogPurge? This is the question we had to answer as we hit an issue. An issue with the dm_LogPurge job.
As usual, once a repository has been created, we configure several Documentum jobs for the housekeeping.
One of them is dm_LogPurge. It is configured to run once a day with a cutoff_days of 90 days.
So all ran fine until we did another change.
At the request of an application team, we had to change the dfc.date_format to dfc.date_format=dd/MMM/yyyy HH:mm:ss to allow the D2 clients to display the months in letters and not digits.
This change fulfilled the application requirement, but from that day on, the dm_LogPurge job started to remove too many log files (not to say ALL of them). :(

So let’s explain how we proceeded to find the reason for the issue and, more importantly, the solution to avoid it.
We were alerted not by seeing that too many files had been removed, but by checking the repository log file. BTW, this file is checked automatically using Nagios with our own dbi scripts. In the repository log file we had errors like:

2016-04-11T20:30:41.453453      16395[16395]    01xxxxxx80028223        [DM_OBJ_MGR_E_FETCH_FAIL]error:   "attempt to fetch object with handle 06xxxxxx800213d2 failed "
2016-04-11T20:30:41.453504      16395[16395]    01xxxxxx80028223        [DM_SYSOBJECT_E_CANT_GET_CONTENT]error:   "Cannot get  format for 0 content of StateOfDocbase sysobject. "
2016-04-11T20:26:10.157989      14679[14679]    01xxxxxx80028220        [DM_OBJ_MGR_E_FETCH_FAIL]error:   "attempt to fetch object with handle 06xxxxxx800213c7 failed "
2016-04-11T20:26:10.158059      14679[14679]    01xxxxxx80028220        [DM_SYSOBJECT_E_CANT_GET_CONTENT]error:   "Cannot get  format for 0 content

 

Based on the timestamp, I saw that the issue could be related to the dm_LogPurge job. So I checked the job log file as well as the folders which are cleaned out. In the folder, all old log files had been removed:

[dmadmin@content_server_01 log]$ date
Wed Apr 13 06:28:35 UTC 2016
[dmadmin@content_server_01 log]$ pwd
$DOCUMENTUM/dba/log
[dmadmin@content_server_01 log]$ ls -ltr REPO1*
lrwxrwxrwx. 1 dmadmin dmadmin      34 Oct 22 09:14 REPO1 -> $DOCUMENTUM/dba/log/<hex docbaseID>/
-rw-rw-rw-. 1 dmadmin dmadmin 8540926 Apr 13 06:28 REPO1.log

 

To have more information, I set the trace level of the dm_LogPurge job to 10 and analyzed the trace file.
In the trace file we had:

[main] com.documentum.dmcl.impl.DmclApiNativeAdapter@9276326.get( "get,c,sessionconfig,r_date_format ") ==> "31/1212/1995 24:00:00 "
[main] com.documentum.dmcl.impl.DmclApiNativeAdapter@9276326.get( "get,c,08xxxxxx80000362,method_arguments[ 1] ") ==> "-cutoff_days 90 "

 

So why did we have 31/1212/1995 ?

Using the API, I confirmed an issue related to the date format:

API> get,c,sessionconfig,r_date_format
...
31/1212/1995 24:00:00

API> ?,c,select date(now) as dateNow from dm_server_config
datenow
-------------------------
14/Apr/2016 08:36:52

(1 row affected)

 

Date format? As all our changes are documented, I easily found that we had changed the dfc.date_format for the D2 application.
By cross-checking with another installation, used by another application where we did not change the dfc.date_format, I could confirm that the issue was related to this dfc parameter change.

Without dfc.date_format in dfc.properties:

API> get,c,sessionconfig,r_date_format
...
12/31/1995 24:00:00

API> ?,c,select date(now) as dateNow from dm_server_config
datenow
-------------------------
4/14/2016 08:56:13

(1 row affected)

 

Just to be sure that I did not miss something, I also checked whether all log files would be removed after starting the job manually. They were still there.
Now the solution would be to roll back the dfc.date_format change, but this would only help the platform team and not the application team. As the initial dfc.date_format change was validated by EMC, we had to find a solution for both teams.

After investigating, we found the final solution:
Add dfc.date_format=dd/MMM/yyyy HH:mm:ss in the dfc.properties file of the ServerApps (so directly on the JMS!)

With this solution, the dm_LogPurge job no longer removes too many files and the application team can still have the months written in letters in its D2 applications.

 

 

The post Documentum story – dm_LogPurge and dfc.date_format appeared first on Blog dbi services.

Database Crash : ORA-27300: OS system dependent operation:semctl failed with status: 22 (RedHat 7)

Online Apps DBA - Wed, 2016-10-12 04:38

Database Installation and Operation Fails if RemoveIPC=yes Is Configured for systemd: If RemoveIPC=yes is configured for systemd, interprocess communication (IPC) is terminated for a non-system user’s processes when that user logs out. This setting, which is intended for laptops, can cause software problems on server systems. For example, if the user is a database software […]

The post Database Crash : ORA-27300: OS system dependent operation:semctl failed with status: 22 (RedHat 7) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

SQL Developer: Live and Let Live my db-connection

Darwin IT - Wed, 2016-10-12 03:59
At my current customer I have SQL Developer open the whole day, and I regularly come back to it to query my throttle table to see if my requests have been picked up. But my database connection regularly gets broken because it sits idle, probably because of a nasty firewall between my remote development desktop and the database.

Googling it, I found an article by That Jeff Smith on busy connections. That blog is really one to follow. Feed it to your Feedly if you're a recent SQL Developer user.

But in my search I found the 'keepAlive-4' extension for SQL Developer 4. Download the KeepAlive zip, go to Help->Check for Updates in SQLDeveloper and install it via the 'Install From Local File' option. Then look for the Keep Alive icon in the toolbar:
Click on it and enter a check frequency of at least 60 seconds. I tried 180 seconds.

You can disable it by clicking it again. Another click and it asks for a new interval again.

Balancing EBS JRE Requirements With Other Java Apps

Steven Chan - Wed, 2016-10-12 02:05

Customers frequently ask me, "What's the best way of handling JRE requirements when I have other Java applications in addition to Oracle E-Business Suite?"

The short answer: you should always try to use the latest JRE release, but before you do that, you need to review your enterprise JRE requirements carefully to ensure that all of your applications are compatible with it.

Why you should always deploy the latest JRE updates

JRE updates include fixes for stability, performance, and security.  The most-important fixes are for security, of course.  Therefore, we strongly recommend that customers stay current with the latest JRE release.  This applies to all Java customers, not just E-Business Suite customers.

New JRE releases are always certified with EBS on day zero

The E-Business Suite is always certified with all new JRE releases on JRE 1.6, 1.7, and 1.8.  We test all EBS releases with beta versions of JRE updates before the latter are released, so there should be no EBS compatibility issues.  Customers can even turn on Auto-Update (which ensures that a new JRE update is automatically applied to a desktop whenever it’s available) with no concerns about EBS compatibility.

Which JRE codeline is recommended for Oracle E-Business Suite?

Oracle E-Business Suite does not have any dependencies on a specific JRE codeline.  All three JRE releases – 1.6, 1.7, 1.8 – work the same way with EBS. 

You should pick whichever JRE codeline works best with the rest of your third-party applications.

Check the compatibility of all third-party Java applications in use

Of course, Oracle cannot make any assurances about compatibility with other third-party products.  I have heard from some customers who have Java-based applications whose certifications lag behind the latest JRE release.

Organizations need to maintain a comprehensive matrix that shows the latest certified JRE releases for all of their Java applications. This matrix should include the E-Business Suite.

Take the "lowest-common denominator" approach

Customers with multiple Java-based applications generally are forced to take a “lowest-common denominator” approach.  Even if Oracle E-Business Suite and the majority of your Java-based applications are compatible with the latest JRE release, any lagging application's incompatibility with the latest JRE release will force you to remain on the earliest version that is common to all of them.

For example:

  • An organization runs Oracle E-Business Suite and four third-party Java-based applications
  • Oracle E-Business Suite is certified with the latest JRE release
  • Three of the third-party applications are certified with the latest JRE release
  • One of the third-party applications is certified only on the previous JRE release
  • The organization is forced to deploy the previous JRE release to their desktops. 

Contact all of your third-party vendors

Organizations whose JRE deployments are held back by a particular application's incompatibility should contact the vendor for that application and ask them to test with Java Early Access downloads or to participate in the Oracle CAP Program.


Categories: APPS Blogs

expdp content=data_only

Learn DB Concepts with me... - Tue, 2016-10-11 23:19
[oracle@oracle1 dpump]$ expdp atest/password directory=dpump dumpfile=test_tab1.dmp content=data_only tables=test_tab1 logfile=test_tab1.log

Export: Release 11.2.0.1.0 - Production on Wed Feb 11 10:58:23 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "ATEST"."SYS_EXPORT_TABLE_01":  atest/******** directory=dpump dumpfile=test_tab1.dmp content=data_only tables=test_tab1 logfile=test_tab1.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
. . exported "ATEST"."TEST_TAB1"                         5.937 KB      11 rows
Master table "ATEST"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ATEST.SYS_EXPORT_TABLE_01 is:
  /u01/app/oracle/dpump/test_tab1.dmp
Job "ATEST"."SYS_EXPORT_TABLE_01" successfully completed at 10:58:26
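A note on content=data_only (my addition, not part of the original post): only the rows are exported, no table metadata, so the table must already exist in the target schema before the import below; that is also why TABLE_EXISTS_ACTION=truncate is used. A minimal sketch of pre-creating the empty table, assuming ATEST2 has the needed privileges:

-- Create an empty copy of the table in the target schema (structure only, no rows)
CREATE TABLE atest2.test_tab1 AS
SELECT * FROM atest.test_tab1 WHERE 1 = 0;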

[oracle@oracle1 dpump]$ impdp atest2/password directory=dpump dumpfile=test_tab1.dmp content=data_only logfile=test_tab1_imp.log TABLE_EXISTS_ACTION=truncate remap_schema=atest:atest2

Import: Release 11.2.0.1.0 - Production on Wed Feb 11 10:58:50 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "ATEST2"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "ATEST2"."SYS_IMPORT_FULL_01":  atest2/******** directory=dpump dumpfile=test_tab1.dmp content=data_only logfile=test_tab1_imp.log TABLE_EXISTS_ACTION=truncate remap_schema=atest:atest2
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "ATEST2"."TEST_TAB1"                        5.937 KB      11 rows
Job "ATEST2"."SYS_IMPORT_FULL_01" successfully completed at 10:58:52
Categories: DBA Blogs

ORA-14074: partition bound must collate higher than that of the last partition

Learn DB Concepts with me... - Tue, 2016-10-11 23:16

I have a table AUDIT_LOGONS with 5 partitions, one of which is defined with MAXVALUE as its upper bound. All partitions have some data in them (see the screen below) except the MAXVALUE partition. Now I want to add a new partition for date values less than 2016-05-31.



But I am getting error ORA-14074.

SQL:

alter table AUDIT_LOGONS add partition AUDIT_LOGONS_P1 VALUES LESS THAN (TO_DATE(' 2016-05-31 00:00:00', 'SYYYY-MM-DD HH24:MI:SS'));

and I get this error:

SQL Error: ORA-14074: partition bound must collate higher than that of the last partition
14074. 00000 -  "partition bound must collate higher than that of the last partition"
*Cause:    Partition bound specified in ALTER TABLE ADD PARTITION
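To see why the new bound collates lower (my addition, not in the original post), list the existing partitions and their upper bounds; the MAXVALUE partition is always the last one, so nothing can be added after it:

SELECT partition_name, partition_position, high_value
FROM   user_tab_partitions
WHERE  table_name = 'AUDIT_LOGONS'
ORDER  BY partition_position;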

Solution 1:

We can split the partition that was defined with MAXVALUE (AUDIT_LOGONS5 in this case). In the SQL below we split AUDIT_LOGONS5 so that a new partition AUDIT_LOGONS6 takes all the data with a date below 2016-09-30, while AUDIT_LOGONS5 keeps the MAXVALUE bound:


ALTER TABLE AUDIT_LOGONS SPLIT PARTITION AUDIT_LOGONS5
  AT (TO_DATE('2016-09-30 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'))
  INTO (PARTITION AUDIT_LOGONS6, PARTITION AUDIT_LOGONS5);

Note: the partitions can be renamed at any time.

Solution 2 (this will not work in all cases):

One solution is to drop the MAXVALUE partition (AUDIT_LOGONS5 in this case) if there is no data in it and then recreate partitions with defined bounds, like below.

ALTER TABLE AUDIT_LOGONS DROP PARTITION AUDIT_LOGONS5;

ALTER TABLE AUDIT_LOGONS ADD PARTITION AUDIT_LOGONS5 VALUES LESS THAN (TO_DATE('2016-09-30 00:00:00','SYYYY-MM-DD HH24:MI:SS','NLS_CALENDAR=GREGORIAN'));








Categories: DBA Blogs

Getting started with Apache Flink and Kafka

Tugdual Grall - Tue, 2016-10-11 22:33
Read this article on my new blog. Introduction: Apache Flink is an open source platform for distributed stream and batch data processing. Flink is a streaming data flow engine with several APIs to create data stream oriented applications. It is very common for Flink applications to use Apache Kafka for data input and output. This article will guide you through the steps to use Apache ...

Need classes directory to run ENCRYPT_PASSWORD on PeopleTools 8.53

Bobby Durrett's DBA Blog - Tue, 2016-10-11 18:57

I had worked on creating a Delphix virtual copy of our production PeopleTools 8.53 database and wanted to use ENCRYPT_PASSWORD in Datamover to change a user’s password. But I got this ugly error:

Error: Process aborted. Possibly due to JVM is not available or missing java class or empty password.

What the heck! I have used Datamover to change passwords this way for 20 years and never seen this error. Evidently in PeopleTools 8.53 they increased the complexity of the encryption by adding a “salt” component. So, now when Datamover runs the ENCRYPT_PASSWORD command it calls Java for part of the calculation. For those of you who don’t know, Datamover is a Windows executable, psdmt.exe. But, now it is calling java.exe to run ENCRYPT_PASSWORD.

I looked at Oracle’s support site and tried the things they recommended, but that didn’t resolve it. Here are a couple of the notes:

E-SEC: ENCRYPT_PASSWORD Error: Process aborted. Possibly due to JVM is not available or missing java class or empty password. (Doc ID 2001214.1)

E-UPG PT8.53, PT8.54: PeopleTools Only Upgrade – ENCRYPT_PASSWORD Error: Process aborted. Possibly due to JVM is not available or missing java class or empty password. (Doc ID 1532033.1)

They seemed to focus on a situation during an upgrade when you are trying to encrypt all the passwords and some of them have spaces in them. But that wasn’t the case for me. I was just trying to change one user’s password and it didn’t contain spaces.

Another recommendation was to put PS_HOME/jre/bin in the path. This totally made sense. I have a really stripped-down PS_HOME with only the minimum set of directories that I need to do migrations and tax updates. I only have a 120 GB SSD C: drive on my laptop, so I didn’t want a full multi-gigabyte PS_HOME. So, I copied the jre directory down from our Windows batch server and tried several ways of putting the bin directory in my path, and still got the same error.

Finally, I ran across an idea that the Oracle support documents did not address, probably because no one else is using partial PS_HOME directories like me. I realized that I needed to download the classes directory. I found a cool documentation page about the Java class search path for app servers in PeopleTools 8.53. It made me guess that psdmt.exe would search the PS_HOME/classes directory for the classes it needed to do the ENCRYPT_PASSWORD command. So, I copied classes down from the windows batch server and put the jre/bin directory back in the path and success!

Password hashed for TEST
Ended: Tue Oct 11 16:36:55 2016
Successful completion
Script Completed.

So, I thought I would pass this along for the unusual case that someone like myself not only needs to put the jre/bin directory in their path but is also missing the classes directory.

Bobby

Categories: DBA Blogs

OTN Appreciation Day: External tables

Yann Neuhaus - Tue, 2016-10-11 16:33

As part of the OTN Appreciation Day (see https://oracle-base.com/blog/2016/09/28/otn-appreciation-day/) I’m writing about one of my favorite Oracle features: External tables.

Traditionally, people loaded data into an Oracle database using SQL*Loader. With the introduction of external tables, SQL*Loader became obsolete (in my view ;-)), because external tables provide the same loading capabilities and so much more than SQL*Loader. Why? Because external tables can be accessed through SQL. You have all the possibilities SQL queries offer: parallelism, difficult joins with internal or other external tables and of course all the complex operations SQL allows. ETL became much easier with external tables, because they allow you to process data through SQL joins and filters even before it is loaded into the database.
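As a small illustration (my addition, not part of the original post): a minimal external table over a CSV file, assuming a directory object DATA_DIR already exists and contains a file emp.csv; all object names here are made up.

CREATE TABLE emp_ext (
  empno  NUMBER,
  ename  VARCHAR2(30),
  deptno NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('emp.csv')
)
REJECT LIMIT UNLIMITED;

-- Queried like any other table, so it can be joined and filtered before loading:
INSERT INTO emp
SELECT * FROM emp_ext WHERE deptno = 10;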

For more info see
– http://docs.oracle.com/database/121/CNCPT/tablecls.htm#CNCPT88821
– or search on Ask Tom about external tables. You’ll be surprised :-)

 

The post OTN Appreciation Day: External tables appeared first on Blog dbi services.

OTN Appreciation Day : APEX

Dimitri Gielis - Tue, 2016-10-11 15:39
If you're following some Oracle blogs or Twitter, you'll see many blog posts starting with "OTN Appreciation Day : " today. You can read the story behind this initiative on Tim Hall's blog. "The blog post content should be short, focusing more on why you like the feature, rather than technical content."
In my life Oracle played (and is still playing) an important role... and it all started because I love working with data - which led me to the Oracle database, the *best* database in the world.

So I just have to write about a feature of the Oracle Database; but which one to pick? The way Oracle implemented SQL, or the programming language inside the database, PL/SQL, or the tools and options that make the database awesome? I thought about it for some time, and for me personally, next to the database itself, it was really APEX that changed my life, so I just have to write about it.

In this post I want to share why I love Oracle Application Express (APEX) and why I consider this the best feature of the Oracle Database *ever*.

The goal, I believe, of a database is to capture data and do something with it; either to get insight into your data or to share it again in different formats with others... and Oracle Application Express is just the easiest way to do this! In no time you create a web application with some forms that capture data directly in your database. And in even less time you share and get insight into your data through beautiful reports and charts. You just need a browser... it's secure, fast, scalable and you can use the full power and features of the database - APEX is the window to your data!


#ThanksOTN
Categories: Development

#ThanksOTN OTN Appreciation Day: Recovery Appliance - Database Recovery on Steroids

Fuad Arshad - Tue, 2016-10-11 15:36
+ORACLE-BASE.com Tim Hall came up with a brilliant idea to appreciate OTN for all the ways it has helped shape the Oracle community. I have to say that I wholeheartedly agree, and here is my contribution for #ThanksOTN.

The Recovery Appliance, or RA, or ZDLRA, is something I've been very passionate about since its release, hence this very biased post on the RA. The Recovery Appliance is database backup and recovery on steroids. The ability to do full and incremental backups is something that every product boasts, so what's special about the ZDLRA? It's the ability to sleep in peace, the ability to know my backups are good.
To quote this article from DBTA, which is about SQL Server and dates from 2009:
"To summarize, data deduplication is a great feature for backing up desktops, collaboration systems, web applications, and email systems. If I were a DBA or storage administrator, however, I'd skip deduplicating database files and backups and devote that expensive technology to the areas of my infrastructure where it can offer a strong ROI."


This notion really hasn't changed much, even though de-duplication software has come a long way.
Why de-dup when you don't even send what you don't need in the first place? That's what the Recovery Appliance brings to the table. Send less data and recover as a whole: no more restoring L0s and then applying L1s and redo. Just ask to recover a virtual full, and the redo needed to get to that point will be sent. This makes the restore and recovery process automated and much faster than traditional backups.
This, coupled with automatic block checking and built-in validation, makes the RA a product that I am personally proud to work with, and it truly puts my database recovery on steroids.

Mirantis OpenStack 9.0 installation using VirtualBox – part 2

Yann Neuhaus - Tue, 2016-10-11 14:36

The last blog ended with a successful installation of the Fuel Master.

In this blog, I will deploy Openstack on the three Fuel slave nodes.

The first node will be the controller node and all these components run on it:

  • a MySQL database
  • RabbitMQ : which is the message broker that OpenStack uses for inter-communication (asynchronous) between the OpenStack components
  • Keystone
  • Glance
  • OpenStack API’s
  • Neutron agents

There are several other components, but they are not used in this lab.

 

The second node will be the hypervisor node, called the compute node in Mirantis; it will create and host the virtual instances created within the OpenStack cloud.

The third one will be the storage node and will provide persistent storage (LVMs) to the virtual instances.

 

Now let’s connect to the Fuel Master node and see what’s going on. The default password is r00tme (with two zeros).

$ ssh root@10.20.0.2 The authenticity of host '10.20.0.2 (10.20.0.2)' can't be established.
ECDSA key fingerprint is 20:56:7b:99:c4:2e:c4:f9:79:a8:d2:ff:4d:06:57:4d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.20.0.2' (ECDSA) to the list of known hosts.
root@10.20.0.2's password:
Last login: Mon Oct 10 14:46:20 2016 from 10.20.0.1
[root@fuel ~]#

 

Let’s check if all services are ready and running on the Fuel Master:

[root@fuel ~]# fuel-utils check_all | grep 'ready'
nailgun is ready.
ostf is ready.
cobbler is ready.
rabbitmq is ready.
postgres is ready.
astute is ready.
mcollective is ready.
nginx is ready.
keystone is ready.
rsyslog is ready.
rsync is ready.
[root@fuel ~]#

All the services are running, so let’s continue.

Where are the Fuel slave nodes?

[root@fuel ~]# fuel2 node list
+----+------------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+
| id | name | status | os_platform | roles | ip | mac | cluster | platform_name | online |
+----+------------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+
| 2 | Untitled (85:69) | discover | ubuntu | [] | 10.20.0.4 | 08:00:27:cc:85:69 | None | None | True |
| 3 | Untitled (b0:77) | discover | ubuntu | [] | 10.20.0.3 | 08:00:27:35:b0:77 | None | None | True |
| 1 | Untitled (04:e8) | discover | ubuntu | [] | 10.20.0.5 | 08:00:27:80:04:e8 | None | None | True |
+----+------------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+

Here they are! All of the Fuel slave nodes will run Ubuntu 14.04. The Fuel slave nodes received an IP address (assigned by the Fuel Master) from the 10.20.0.0/24 range, which is the PXE network (see the last blog).

Now I want to access the Fuel Web Interface, which listens on port 8443 (HTTPS) and port 8000 (HTTP). Let’s see if the Fuel Master is listening on these ports:

[root@fuel ~]# netstat -plan | grep '8000\|8443'
tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      868/nginx: master p
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      868/nginx: master p
[root@fuel ~]#

Type https://10.20.0.2:8443 in your browser. The username and password are both admin.

[Screenshot: Fuel portal]

Then click on Start Using Fuel. You can have a look at the Web Interface, but I am going to use the Command Line Interface.

Let’s change the names of the nodes so we do not confuse them:

[root@fuel ~]# fuel2 node update --name Controller 3
numa_nodes is not found in the supplied data.
[root@fuel ~]# fuel2 node list
+----+------------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+
| id | name             | status   | os_platform | roles | ip        | mac               | cluster | platform_name | online |
+----+------------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+
|  3 | Controller       | discover | ubuntu      | []    | 10.20.0.3 | 08:00:27:35:b0:77 | None    | None          | True   |
|  1 | Untitled (04:e8) | discover | ubuntu      | []    | 10.20.0.5 | 08:00:27:80:04:e8 | None    | None          | True   |
|  2 | Untitled (85:69) | discover | ubuntu      | []    | 10.20.0.4 | 08:00:27:cc:85:69 | None    | None          | True   |
+----+------------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+
[root@fuel ~]# fuel2 node update --name Compute 2
numa_nodes is not found in the supplied data.
[root@fuel ~]# fuel2 node update --name Storage 1
numa_nodes is not found in the supplied data.
[root@fuel ~]# fuel2 node list
+----+------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+
| id | name       | status   | os_platform | roles | ip        | mac               | cluster | platform_name | online |
+----+------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+
|  2 | Compute    | discover | ubuntu      | []    | 10.20.0.4 | 08:00:27:cc:85:69 | None    | None          | True   |
|  3 | Controller | discover | ubuntu      | []    | 10.20.0.3 | 08:00:27:35:b0:77 | None    | None          | True   |
|  1 | Storage    | discover | ubuntu      | []    | 10.20.0.5 | 08:00:27:80:04:e8 | None    | None          | True   |
+----+------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+

The slave nodes are not members of any environment, so let’s create one:

[root@fuel ~]# fuel2 env create -r 2 -nst vlan Mirantis_Test_Lab
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| id | 1 |
| status | new |
| fuel_version | 9.0 |
| name | Mirantis_Test_Lab |
| release_id | 2 |
| is_customized | False |
| changes | [{u'node_id': None, u'name': u'attributes'}, {u'node_id': None, u'name': u'networks'}, {u'node_id': None, u'name': u'vmware_attributes'}] |
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------+

You can check in the Fuel Web Interface that one environment was created.

The next step is to assign one role to each node. I want:

  • 1 controller node
  • 1 compute node
  • 1 storage node

Let’s list all the available roles in this release :

[root@fuel ~]# fuel role --release 2
name               
-------------------
compute-vmware     
compute            
cinder-vmware      
virt               
base-os            
controller         
ceph-osd           
ironic             
cinder             
cinder-block-device
mongo


We assign a role to each slave node:

[root@fuel ~]# fuel node
id | status   | name       | cluster | ip        | mac               | roles | pending_roles | online | group_id
---+----------+------------+---------+-----------+-------------------+-------+---------------+--------+---------
 1 | discover | Storage    |         | 10.20.0.5 | 08:00:27:80:04:e8 |       |               |      1 |         
 3 | discover | Controller |         | 10.20.0.3 | 08:00:27:35:b0:77 |       |               |      1 |         
 2 | discover | Compute    |         | 10.20.0.4 | 08:00:27:cc:85:69 |       |               |      1 |         
[root@fuel ~]#
[root@fuel ~]# fuel node set --node 1 --role cinder  --env 1
Nodes [1] with roles ['cinder'] were added to environment 1
[root@fuel ~]# fuel node set --node 2 --role compute  --env 1
Nodes [2] with roles ['compute'] were added to environment 1
[root@fuel ~]# fuel node set --node 3 --role controller  --env 1
Nodes [3] with roles ['controller'] were added to environment 1
[root@fuel ~]#
[root@fuel ~]# fuel node
id | status   | name       | cluster | ip        | mac               | roles | pending_roles | online | group_id
---+----------+------------+---------+-----------+-------------------+-------+---------------+--------+---------
 1 | discover | Storage    |       1 | 10.20.0.5 | 08:00:27:80:04:e8 |       | cinder        |      1 |        1
 2 | discover | Compute    |       1 | 10.20.0.4 | 08:00:27:cc:85:69 |       | compute       |      1 |        1
 3 | discover | Controller |       1 | 10.20.0.3 | 08:00:27:35:b0:77 |       | controller    |      1 |        1
[root@fuel ~]#



Mirantis (via Fuel) provides the ability to check the network configuration. Go to the Environment tab / Networks / Connectivity Check and click Verify Networks.

[Screenshot: network verification succeeded]

All seems good, so the deployment can be started:

[root@fuel ~]# fuel2 env deploy 1
Deployment task with id 5 for the environment 1 has been started.

 

Now it is time to wait, because the deployment is going to take some time. Indeed, the Fuel Master node will install Ubuntu on the Fuel slave nodes and install the right OpenStack packages on the right node via Puppet. The installation can be followed via the Logs tab.

[Screenshot: Ubuntu installation in progress]

You can follow the installation via the Fuel dashboard or via the CLI:

[root@fuel ~]# fuel task list
id | status  | name                               | cluster | progress | uuid                                
---+---------+------------------------------------+---------+----------+-------------------------------------
1  | ready   | verify_networks                    | 1       | 100      | 4fcff1ad-6b1e-4b00-bfae-b7bf904d15e6
2  | ready   | check_dhcp                         | 1       | 100      | 2c580b79-62e8-4de1-a8be-a265a26aa2a9
3  | ready   | check_repo_availability            | 1       | 100      | b8414b26-5173-4f0c-b387-255491dc6bf9
4  | ready   | check_repo_availability_with_setup | 1       | 100      | fac884ee-9d56-410e-8198-8499561ccbad
9  | running | deployment                         | 1       | 0        | e12c0dec-b5a5-48b4-a70c-3fb105f41096
5  | running | deploy                             | 1       | 3        | 4631113c-05e9-411f-a337-9910c9388477
8  | running | provision                          | 1       | 12       | c96a2952-c764-4928-bac0-ccbe50f6c

Fuel is installing OpenStack on the nodes:

[Screenshot: OpenStack deployment in progress]

PS: If the deployment fails (something that can happen), do not hesitate to redeploy it.

[Screenshot: deployment completed successfully]

And welcome to OpenStack

[Screenshot: OpenStack Horizon welcome page]

This ends the second part of this blog. In the next one, I will show how to create an instance and go into more detail about OpenStack.

 

The post Mirantis OpenStack 9.0 installation using VirtualBox – part 2 appeared first on Blog dbi services.

OTN Appreciation Day: Easy Execution Plans

Complete IT Professional - Tue, 2016-10-11 13:54
As part of the #ThanksOTN idea on Twitter, my favourite Oracle feature is the ability to easily view and analyse execution plans for queries. Time and time again I’ve needed to see how a query is running, and Oracle databases make it easy to view the execution plan. You can view it in either a text format […]
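As a tiny illustration of the text format (my addition, not from the original post; EMPLOYEES is just a stand-in table):

EXPLAIN PLAN FOR
  SELECT * FROM employees WHERE department_id = 50;

-- Display the text-format plan for the statement just explained
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);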
Categories: Development

#ThanksOTN

Jonathan Lewis - Tue, 2016-10-11 12:57

To mark the OTN Appreciation Day I’d like to offer this thought:

“Our favourite feature is execution plans … execution plans and rowsource execution statistics … rowsource execution statistics and execution plans …  our two favourite features and rowsource execution stats and execution plans … and ruthless use of SQL monitoring …. Our three favourite features are rowsource execution stats, execution plans, ruthless use of SQL monitoring and an almost fanatical devotion to the Cost Based Optimizer …. Our four … no … amongst our favourite features  are such elements as rowsource execution statistics, execution plans …. I’ll come in again.”

With apologies to Monty Python.
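For anyone wondering what rowsource execution statistics look like in practice, a minimal sketch (my illustration, not Jonathan's; EMPLOYEES is just a stand-in table):

-- Collect rowsource statistics for this one execution
SELECT /*+ gather_plan_statistics */ COUNT(*)
FROM   employees
WHERE  department_id = 50;

-- Show the plan of the last executed statement with estimated vs. actual rows per step
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'ALLSTATS LAST'));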

 

 

 


Unable to get rid of bitmap joins

Tom Kyte - Tue, 2016-10-11 12:26
Hi Tom, for a while I have struggled to optimize the response time of the SQL below. When executing the SQL I get a response time above 500 secs and more than 1 million consistent gets in autotrace. select * from sag s join table2 ms on (ms.f...
Categories: DBA Blogs

how to trap unique id of record with error using SQL%BULK_EXCEPTIONS

Tom Kyte - Tue, 2016-10-11 12:26
From SQL%BULK_EXCEPTIONS we can find out the error_index. This does not allow us to identify the particular record in error. We should be able to trap the unique id (primary key or whatever) of the record. Only this will allow us to pinpoint...
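A minimal sketch of how that mapping can be done (my illustration, not from the thread): SQL%BULK_EXCEPTIONS(j).ERROR_INDEX is the position in the bound collection, so the failing record, and therefore its primary key, can be read straight from that collection. Table and column names below are made up.

DECLARE
  TYPE t_rows IS TABLE OF my_table%ROWTYPE;
  l_rows      t_rows;
  bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_errors, -24381);
BEGIN
  -- my_staging_table is assumed to have the same structure as my_table
  SELECT * BULK COLLECT INTO l_rows FROM my_staging_table;

  FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
    INSERT INTO my_table VALUES l_rows(i);
EXCEPTION
  WHEN bulk_errors THEN
    FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      -- ERROR_INDEX points back into l_rows, where the primary key (id) is available
      DBMS_OUTPUT.put_line(
        'Record with id ' || l_rows(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).id ||
        ' failed with ORA-' || SQL%BULK_EXCEPTIONS(j).ERROR_CODE);
    END LOOP;
END;
/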
Categories: DBA Blogs

Capturing DDL changes on a Table

Tom Kyte - Tue, 2016-10-11 12:26
I am thinking of creating a utility proc that will capture all development DDL changes from the database. This utility will baseline all DDLs for a given release, say R1, and while we are developing for release R2 the utility will create the increme...
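One common building block for such a utility is a DDL trigger that records every change as it happens; a minimal sketch (my own, not from the thread), with made-up table and trigger names:

CREATE TABLE ddl_audit_log (
  event_time   TIMESTAMP,
  ddl_event    VARCHAR2(32),
  object_owner VARCHAR2(128),
  object_type  VARCHAR2(32),
  object_name  VARCHAR2(128),
  performed_by VARCHAR2(128)
);

CREATE OR REPLACE TRIGGER trg_capture_ddl
AFTER DDL ON SCHEMA
BEGIN
  -- The ora_* event attribute functions describe the DDL statement that just ran
  INSERT INTO ddl_audit_log
  VALUES (SYSTIMESTAMP, ora_sysevent, ora_dict_obj_owner,
          ora_dict_obj_type, ora_dict_obj_name, ora_login_user);
END;
/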
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator