Yann Neuhaus

dbi services technical blog

2017.pgconf.eu, some impressions

Fri, 2017-10-27 11:00

After we survived the EDB Postgres Rocks cafe on Tuesday, Wednesday was packed with interesting sessions. Especially listening to Robert Haas is always fun and interesting. Getting information directly from the people who work on the core code is one of the beauties of the PostgreSQL community. Several other core developers had sessions as well, and all of them were great. Thanks for all of that.


On Thursday morning, finally, Jan (EDB) and I had the pleasure to talk about “What happens when 30 years of Oracle experience hit PostgreSQL”. As far as I can tell the session was well received and we had interesting discussions afterwards. The main goal was to highlight that working in the PostgreSQL area can be very confusing at the very beginning when your only background is Oracle. It seems we hit that goal and the people attending had fun.


A very big thanks to the organizers of the event: everything, from the registration, the rooms, the food and the drinks to, of course, the sessions, was great. I do not doubt that next year will be great as well.

Another big thanks to the EDB people (especially Anja and Jan) who let me drop my jacket and notebook at their booth when required. And a big thanks to Devrim for announcing the RPM packages for SLES 12 during the lightning talks, which is exactly what I need for a customer project.

Once uploaded, all the slides should be available on the PostgreSQL wiki. Check them out, there is really great content.

Btw: There are interesting choices of beer in Poland:

 


Max PDBs in Standard Edition

Thu, 2017-10-26 14:26

Here is a small warning. In Standard Edition, you may expect that the features that are not available for your edition are blocked, but in 12.2 you can create more than one PDB in Standard Edition, and you should set MAX_PDBS to prevent that.

12cR1

In 12.1 Standard Edition, when you try to create more than one PDB (i.e. with CON_ID > 3) you get an error:

ORA-65010: maximum number of pluggable databases created

12cR2

But it seems that this has been lost in 12.2:


oracle@SE122 ~$ sqlplus / as sysdba
 
SQL*Plus: Release 12.2.0.1.0 Production on Thu Oct 10 11:41:56 2017
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
 
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
 
SQL> create pluggable database PDB2 admin user admin identified by me;
 
Pluggable database created.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 MOUNTED
SQL> alter pluggable database PDB2 open;
 
Pluggable database altered.

So, if you are in Standard Edition, don’t forget to set MAX_PDBS to 1:

SQL> alter pluggable database PDB2 close immediate;
 
Pluggable database altered.
 
SQL> drop pluggable database PDB2 including datafiles;
 
Pluggable database dropped.
 
SQL> alter system set max_pdbs=1;
 
System altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
 
12:18:28 SQL> create pluggable database PDB2 admin user pdbadmin identified by oracle;
create pluggable database PDB2 admin user pdbadmin identified by oracle
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

 


StandbyFileManagement is set to MANUAL: Don’t panic

Tue, 2017-10-24 12:59

As you may know, the parameter STANDBY_FILE_MANAGEMENT enables or disables automatic standby file management. When automatic standby file management is enabled, operating system file additions and deletions on the primary database are replicated on the standby database.
It is generally recommended to set this parameter to AUTO, but it can happen that this parameter is set to MANUAL for various reasons. If this parameter is set to MANUAL, redo apply will stop as soon as we create a new tablespace or add new data files on the primary, until we manually create the corresponding files on the standby.
In this article we are going to show an example of tablespace creation in our Oracle 12c Data Guard environment.
Below is our configuration.

DGMGRL> show configuration;
Configuration - prod
Protection Mode: MaxPerformance
Members:
prod - Primary database
proddr - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 30 seconds ago)

StandbyFileManagement is set to MANUAL for both the primary and the standby databases:

DGMGRL> show database PROD StandbyFileManagement;
StandbyFileManagement = 'MANUAL'
DGMGRL> show database PRODDR StandbyFileManagement;
StandbyFileManagement = 'MANUAL'
DGMGRL>
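The same setting can also be verified directly in SQL*Plus on each site (a minimal sketch; with the broker in place it is better to change it through the StandbyFileManagement property, as shown later, rather than with ALTER SYSTEM):

SQL> show parameter standby_file_management
SQL> -- it maps to the initialization parameter, so it could also be switched with:
SQL> -- alter system set standby_file_management=AUTO scope=both;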

Let’s show data files on the primary

SQL> select name,open_mode from v$database;
NAME OPEN_MODE
--------- --------------------
PROD READ WRITE
.
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/PROD/datafile/o1_mf_system_dymg6h55_.dbf
/u01/app/oracle/oradata/PROD/datafile/o1_mf_sysaux_dymg6loy_.dbf
/u01/app/oracle/oradata/PROD/datafile/o1_mf_undotbs1_dymg6nsx_.dbf
/u01/app/oracle/oradata/PROD/datafile/o1_mf_users_dymg6o9o_.dbf
/u01/app/oracle/oradata/PROD/datafile/o1_mf_data_dymwrtph_.dbf

And in the standby

SQL> select name,open_mode from v$database;
NAME OPEN_MODE
--------- --------------------
PROD MOUNTED
.
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_system_dymtycrm_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_sysaux_dymtytz6_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_undotbs1_dymtzbbr_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_users_dymtzko2_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_data_dymz1b8y_.dbf

Now let’s create a tablespace in the primary database PROD, then create a table and insert some data into it.

SQL> create tablespace TABDATA datafile size 100M;
Tablespace created.
.
SQL> create table test1(id number) tablespace data;
Table created.
.
SQL> insert into test1 values(1);
1 row created.
.
SQL> commit;
Commit complete.

And then let’s do some switches

SQL> alter system switch logfile;
System altered.
.
SQL> alter system switch logfile;
System altered.

As we may expect, this new tablespace is not automatically replicated to the standby because StandbyFileManagement is set to MANUAL, and our configuration should now return errors.

DGMGRL> show configuration;
Configuration - prod
Protection Mode: MaxPerformance
Members:
prod - Primary database
proddr - Physical standby database
Error: ORA-16810: multiple errors or warnings detected for the member
Fast-Start Failover: DISABLED
Configuration Status:
ERROR (status updated 9 seconds ago)

The primary database is fine

DGMGRL> show database PROD;
Database - prod
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
PROD
Database Status:
SUCCESS

But the standby is not synchronized

DGMGRL> show database PRODDR;
Database - proddr
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 2 minutes 14 seconds (computed 1 second ago)
Average Apply Rate: 2.00 KByte/s
Real Time Query: OFF
Instance(s):
PROD
Database Error(s):
ORA-16766: Redo Apply is stopped
Database Warning(s):
ORA-16853: apply lag has exceeded specified threshold
Database Status:
ERROR
DGMGRL>

In the standby alert log file we can see some ORA- errors:

File #6 added to control file as 'UNNAMED00006' because
the parameter STANDBY_FILE_MANAGEMENT is set to MANUAL
The file should be manually created to continue.

MRP0: Background Media Recovery terminated with error 1274
2017-10-20T15:49:43.368202+02:00
Errors in file /u01/app/oracle/diag/rdbms/proddr/PROD/trace/PROD_mrp0_7182.trc:
ORA-01274: cannot add data file that was originally created as
'/u01/app/oracle/oradata/PROD/datafile/o1_mf_tabdata_dymzqmt4_.dbf'
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
2017-10-20T15:49:43.448802+02:00
Errors in file /u01/app/oracle/diag/rdbms/proddr/PROD/trace/PROD_m000_7250.trc:
ORA-01110: data file 1: '/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_system_dymtycrm_.dbf'
Recovery stopped due to failure in applying recovery marker (opcode 17.30).
Datafiles are recovered to a consistent state at change 380449 but controlfile could be ahead of datafiles.
2017-10-20T15:49:43.618264+02:00

If we check the data files on the standby, we can see that a new UNNAMED file has been created in $ORACLE_HOME/dbs:

SQL> select name,open_mode from v$database;
NAME OPEN_MODE
--------- --------------------
PROD MOUNTED
.
SQL> select name from v$datafile;
NAME
----------------------------------------------------------------------------------------------------
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_system_dymtycrm_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_sysaux_dymtytz6_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_undotbs1_dymtzbbr_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_users_dymtzko2_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_data_dymz1b8y_.dbf
/u01/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00006
6 rows selected.

How can I solve the issue and restart the replication? It’s easy. As I am using Oracle Managed Files (OMF), I first have to recreate the data file in the correct directory using the following command.

SQL> alter database create datafile '/u01/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00006' as new;
Database altered.
.
SQL> select name from v$datafile;
NAME
----------------------------------------------------------------------------------------------------
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_system_dymtycrm_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_sysaux_dymtytz6_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_undotbs1_dymtzbbr_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_users_dymtzko2_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_data_dymz1b8y_.dbf
/u01/app/oracle/oradata/PRODDR/datafile/o1_mf_tabdata_dyn06zm2_.dbf
6 rows selected.
SQL>

If OMF is not used we have to use something like

alter database create datafile '/u01/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00006' as 'datafile_location_and_name';

After that, I decide to update the StandbyFileManagement property to AUTO on both the primary and the standby.

DGMGRL> edit database PROD set property StandbyFileManagement = 'AUTO';
Property "standbyfilemanagement" updated
DGMGRL> edit database PRODDR set property StandbyFileManagement = 'AUTO';
Property "standbyfilemanagement" updated

And finally I have to restart the redo apply on the standby

DGMGRL> edit database PRODDR set STATE='APPLY-ON';
Succeeded.
DGMGRL>

The synchronization should now be fine and the show configuration command should return SUCCESS.

DGMGRL> show configuration;
Configuration - prod
Protection Mode: MaxPerformance
Members:
prod - Primary database
proddr - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 15 seconds ago)

On the primary database

DGMGRL> show database PROD;
Database - prod
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
PROD
Database Status:
SUCCESS

On the standby database

DGMGRL> show database PRODDR;
Database - proddr
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 83.00 KByte/s
Real Time Query: OFF
Instance(s):
PROD
Database Status:
SUCCESS
DGMGRL>

If we want to verify the newly created table, we can open the standby database in read-only mode.
But if we don’t have the Active Data Guard option, redo apply must be stopped on the standby database before opening it.

DGMGRL> edit database PRODDR set STATE='APPLY-OFF';
Succeeded.

And then open the database

SQL> alter database open read only;
Database altered.
.
SQL> select tablespace_name from dba_tablespaces;
TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
DATA
TABDATA
7 rows selected.
.
SQL> select * from test1;
ID
----------
1

 


2017.pgconf.eu started, and look who is there …

Tue, 2017-10-24 03:19

So, finally 2017.pgconf.eu started today, and while checking the schedule I noticed something that I could not even have imagined some years ago: there is a session from Microsoft, “Latest update on Azure managed service for PostgreSQL”. Personally, I really like to see that Microsoft is more and more present at open source conferences and starts engaging in the community. But of course this is not the only session that will be interesting. There is an impressive list of sessions from well-known community members and hackers, full list here.


There are plenty of speaker interviews you might want to read as an introduction, here.

A final highlight this evening will be the EDB Postgres Rocks Cafe.


I am already curious who will be there and what discussions we’ll have. Not much more to tell right now, stay tuned.

 


Database is more than persistence. It is shared. My thoughts about Microservice Database.

Sun, 2017-10-22 14:36

The development teams often see the database as a ‘persistence layer’ only. And sometimes it is mentioned that the database is always the cause of the problems, especially when implementing continuous integration and trying to be agile. Then comes the idea to have this ‘persistence layer’ in an isolated environment, such as a Docker container, with one database per developer. However, this overlooks the real cause of the problems, which is not the persistence function of the database, but the fact that it is shared. And when you share something in a multi-user environment, you reach another level of complexity. And if you are not prepared for that, you perceive it as a problem.

This philosophical blog post contains some cool words which, in my opinion, must be carefully considered when dealing with databases: agile, persistence, stateless, docker, microservices, NoSQL, containers, lake, cloud,…


When I hear about a ‘persistence layer’ it always reminds me of a meeting a few years ago where the architects of a new application drew on the whiteboard something that they could have called microservices a few years later: dedicated web services reading data, processing it, and writing the result to the ‘persistence layer’. And when the data was only a transient interface between two of those services, they wrote ‘temporary persistence’ on the whiteboard diagram. When you need two antonyms to define something, there is probably something wrong with the design.

Commitment

In order to understand what’s beyond persistence, you should think about what happens when you:
SQL> commit work;
Commit complete.

First, the commit statement closes a transaction that has been started by the first DML. That’s the first purpose of a database management system: to process transactions. This is already something that is not very clear to some development architects. They came up with stateless services, just because thinking about transactions is not easy when there is no one-to-one relationship between the database session and the end user (as in client/server on a dedicated terminal). Then they encountered issues such as lost updates as soon as they had more than one user, and then implemented optimistic locking in their application server, not always in the right layer. I recommend this talk from Vlad Mihalcea about transactions.

Second, the commit statement ensures that changes are persistent and durable. The changes are written to disk, to backups, and to standby databases. This could be as simple as a write call, but it is a bit more complex for performance reasons: random writes are done in the cache and written to disk asynchronously, while the redo transaction log is written sequentially and synced on commit. Here, the developer does not need to care about the underlying mechanisms, beyond trying to change only what is needed and commit only when needed.

Third, the commit marks the changes as publicly visible, because the data is shared among multiple users. This is why developing and testing on your own personal database is limited to unit tests. As soon as you have to work on implementing a real use case, you must work on a database shared by different services. I have seen enterprises going early for Database as a Service in agile development, where each developer was working on his own copy (thin clone) of the database. They are now asking for common environments where multiple developers can work and test their different parts of the application. Back to this pre-agile idea, and back to the same problem: the database is shared.

Finally, I think that some developers like to see the database as only a ‘persistence layer’ just because it is easier. You can be lazy and let the database system and the database administrators manage the durability of the data. The non-optimal performance will be compensated for by software and hardware. And ignoring the two other properties of a database system is just walking away from the complexity: NoSQL to ignore transactions and consistency, and containers to ignore the sharing concern.

Impedance Mismatch

This is not new. ‘Object Oriented’ was the buzzword before ‘buzzword’ itself was a buzzword. And OO development was fine until it had to manipulate data from the database. ‘Impedance mismatch’ was the name given to the problems encountered when trying to match the object-oriented model with the relational model. And frameworks were built to do this matching as if it were simple, such as matching the Object Identity concept with the Primary Key concept. My experience is that this was nice for building good Proofs of Concept, but failed in production on consistency, performance, and scalability. Object-oriented development is good for non-shared transient objects. A GUI is based on objects, such as a window or a text field, where the object identity is the address in memory. If you restart the application, it is another object, with a different memory address. And the GUI on your colleague’s computer is again another object. It is not persistent but transient. And it is not shared but all different. Applying this model to data doesn’t work. You can simulate persistence with an Object-Relational Mapping (ORM) but sharing will be a problem. Those ORMs usually work on proxied, cached objects in the application server, trying to re-invent the management of concurrent changes, without using the database system which is built for that.

RDBMS

The current shift from ‘database technology’ to ‘developer community’ is probably a good idea, but only if we do not make the same errors, such as using the database as a black box to persist objects. We must keep in mind what Relational Database Management Systems were built for.

Before RDBMS, data was modeled hierarchically. This was good for performance (because data is clustered) but was good for only one use case. Other use cases had to replicate the data into another physical design. Relational modeling stores data in a way it can be used by all use cases. For example, you can look at the orders from one customer, or at the orders for one product. The business entities are in different tables and are joined dynamically when queried. This is a data point of view. The developer builds something different because the developer works on one use case. This was the trend with XML a few years ago, and with JSON now, bringing back the hierarchical model that failed decades ago. Very good when working on your own use case, but very bad when data is shared with other use cases. You get a good Proof of Concept and good unit tests, but integration will be impossible. I think that we will see the same with microservices: each one designed for its own usage without thinking about other (including future) use cases.

Before RDBMS, the data definition was included in the code of each use case. A COBOL program had a data division describing the structure of the data that would be accessed by the program. This was not good for shared databases because one structural change required changing all the programs. And this is why the relational model was introduced with physical and logical independence. The logical model is designed for the application, and it is the common interface between the developers and the database administrators. Here again, I’ve seen some application architects going backward, using ORM mappings in the same way the COBOL data division was used in the past.

Microservice Database

Today, developers are running their code in containers. This has the same properties as the OO development I described before: isolated and transient. Docker starts a few processes in its own image of the disk. This is perfect. But, as they did with OO, they try to bring this idea to the database. And that will fail again because, except in the early stages of development, you need a database that is shared and persistent.

In my opinion, running the Oracle Database in a Docker container gives the wrong message to the developer community. A database involves a lot of resources and is highly coupled with the OS. My lab databases on my laptop are VirtualBox VMs: easy to build, easy to start, and easy to share. I tested databases on Docker three years ago to see how we could build data virtualization and thin clones for development, with a standby database on Docker, and I wrote an article about that for DOAG. The technology has evolved but I think that it is not a good idea, except for the fun of trying something new.

Today, we have containers on the database itself, with multitenant pluggable databases and application containers. You can provision a pluggable database for each docker container running the application. And you have easy ways to copy, clone, refresh or have common data. This is ok for development.

Application containers have been introduced for SaaS: each application tenant has its own pluggable database. The data in the PDB is still shared by all application use-cases, and metadata (and some reference tables) is shared in application root by several applications.

However, we also see in 12cR2, and probably more in 18c, some cross-PDB DML which allows queries and modifications among several pluggable databases. It is not yet stable, using database links for modifications (see here). But I can imagine that it will be fixed in 18c or later, to be able to update different PDBs within the same transaction.

And then, I foresee how this will be perceived by some developer architects (those ignoring that the database must be shared). They will build microservices, with small application servers running in a Docker container. And they will map one PDB for each service, doing something like a Microservice Database. Some will store data in XML, others in JSON, and some in relational tables. All those will be consolidated into a multi-tenant database to be managed easily. And my fear is that we will see a large bowl of spaghetti to exchange data between those Microservice PDBs using database links, common views, cross-PDB DML, and maybe some logical replication.

In my opinion, microservices can be an interesting design for future applications, but trying to map the same idea to the database will be a failure if developers don’t realize that the database is not there only to persist data but also to share it. But that’s just my own current opinion and I’ve no idea about the future. Maybe all this data modified by microservices will be shared in real time in a big Data Lake universally available on the cloud, and all IT problems about concurrency and latency will be solved.

 


Archivelog deletion policy on Data Guard configuration

Sun, 2017-10-22 14:02

The deletion policy on a Data Guard configuration should be:

CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
for the site where you don’t back up. It can be the standby or the primary.

and:

CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK;
for the site where you do the backups. It can be the primary or the standby.

I’ve always configured it this way, but I recently discovered that the order of the subclauses matters. Do not CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY; because no archived logs will be reclaimable and your recovery area will fill up. This is probably a bug. I’ll update this post when I have more information about it.

Test case

I’m on the standby database where I do the backups:

 
Recovery Manager: Release 12.2.0.1.0 - Production on Sun Oct 22 17:37:18 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
RMAN-06568: connected to target database: CDB2 (DBID=693620717, not open)
 
RMAN>
RMAN-03029: echo set on
 

I start with all default configuration:

RMAN> show all;
RMAN-06607: RMAN configuration parameters for database with db_unique_name CDB2B are:
RMAN-01005: CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
RMAN-01005: CONFIGURE BACKUP OPTIMIZATION OFF; # default
RMAN-01005: CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
RMAN-01005: CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
RMAN-01005: CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
RMAN-01005: CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
RMAN-01005: CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
RMAN-01005: CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
RMAN-01005: CONFIGURE MAXSETSIZE TO UNLIMITED; # default
RMAN-01005: CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
RMAN-01005: CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
RMAN-01005: CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
RMAN-01005: CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
RMAN-01005: CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/12.2.0/dbhome_1/dbs/snapcf_CDB2B.f'; # default

I have one full backup and then some archivelog backups:

RMAN> list backup summary;
RMAN-06345: List of Backups
RMAN-06346: ===============
RMAN-06347: Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
RMAN-06348: ------- -- -- - ----------- --------------- ------- ------- ---------- ---
RMAN-06349: 9 B F A DISK 22-OCT 17:05:16 1 1 NO TAG20171022T170250
RMAN-06349: 10 B F A DISK 22-OCT 17:06:23 1 1 NO TAG20171022T170250
RMAN-06349: 11 B F A DISK 22-OCT 17:07:18 1 1 NO TAG20171022T170250
RMAN-06349: 12 B F A DISK 22-OCT 17:07:35 1 1 NO TAG20171022T170732
RMAN-06349: 13 B A A DISK 22-OCT 17:15:09 1 1 NO TAG20171022T171502
RMAN-06349: 14 B F A DISK 22-OCT 17:15:19 1 1 NO TAG20171022T171518
RMAN-06349: 15 B A A DISK 22-OCT 17:27:28 1 1 NO TAG20171022T172721
RMAN-06349: 16 B F A DISK 22-OCT 17:27:39 1 1 NO TAG20171022T172737

I have no obsolete backups:

RMAN> report obsolete;
RMAN-06524: RMAN retention policy will be applied to the command
RMAN-06511: RMAN retention policy is set to redundancy 1
RMAN-06147: no obsolete backups found

APPLIED ON ALL STANDBY

I have 1.08% of the recovery area that has been applied:

RMAN> configure archivelog deletion policy to applied on all standby;
RMAN-06601: new RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
RMAN-06604: new RMAN configuration parameters are successfully stored
 
 
RMAN> select percent_space_used,percent_space_reclaimable,number_of_files from v$recovery_area_usage where file_type='ARCHIVED LOG';
RMAN-01005: PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
RMAN-01005: ------------------ ------------------------- ---------------
RMAN-01005: 1.3 1.08 3

BACKED UP 1 TIMES TO DISK

I have 1.3% of the recovery area that has been backed up:

RMAN> configure archivelog deletion policy to backed up 1 times to disk;
RMAN-06600: old RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
RMAN-06601: new RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
RMAN-06604: new RMAN configuration parameters are successfully stored
 
 
RMAN> select percent_space_used,percent_space_reclaimable,number_of_files from v$recovery_area_usage where file_type='ARCHIVED LOG';
RMAN-01005: PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
RMAN-01005: ------------------ ------------------------- ---------------
RMAN-01005: 1.3 1.3 3

This looks good. All the archived logs have been backed up, and some of them have not been applied yet.

Both in the ‘bad’ order

But now I want to combine both:

RMAN> configure archivelog deletion policy to backed up 1 times to disk applied on all standby;
RMAN-06600: old RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
RMAN-06601: new RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY;
RMAN-06604: new RMAN configuration parameters are successfully stored
 
 
RMAN> select percent_space_used,percent_space_reclaimable,number_of_files from v$recovery_area_usage where file_type='ARCHIVED LOG';
RMAN-01005: PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
RMAN-01005: ------------------ ------------------------- ---------------
RMAN-01005: 1.3 0 3

Nothing is reclaimable here, which is in my opinion a bug.

Both in the ‘right’ order

Trying the same but with different order:

RMAN> configure archivelog deletion policy to applied on all standby backed up 1 times to disk;
RMAN-06600: old RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY;
RMAN-06601: new RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK;
RMAN-06604: new RMAN configuration parameters are successfully stored
 
 
RMAN> select percent_space_used,percent_space_reclaimable,number_of_files from v$recovery_area_usage where file_type='ARCHIVED LOG';
RMAN-01005: PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
RMAN-01005: ------------------ ------------------------- ---------------
RMAN-01005: 1.3 1.08 3

This looks good: 1.08% has been applied and backed up.

I’ve verified the same behavior when the backups are done on the primary or on the standby, and the same in 11.2.0.4, 12.1 and 12.2 with the latest RU.
When the deletion policy starts with the backup clause before the standby clause, the files are not marked as reclaimable. However, they are deleted by a ‘delete archivelog’ statement without the ‘force’ option.
The behavior is the same with ‘shipped to’ instead of ‘applied on’.

So what?

Be careful with the syntax: the ‘applied’ or ‘shipped’ clause must be written before the ‘backup’ one.
Check that archived logs are reclaimable. The query I use for that is in: https://blog.dbi-services.com/archivelog-deletion-policy-for-standby-database-in-oracle-data-guard/. When more archived log detail is required, I also have this query: https://blog.dbi-services.com/drilling-down-vrecoveryareausage/
Always have a look at the recovery area usage after you change the deletion policy or the site where you back up, and after a switchover.
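As a quick check after any of these changes, the view used throughout this test case can be queried again; for archived logs, a PERCENT_SPACE_RECLAIMABLE that stays at 0 even though redo has been applied and backed up is the symptom described above (a minimal sketch):

RMAN> select file_type,percent_space_used,percent_space_reclaimable,number_of_files from v$recovery_area_usage;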

 


VirtualBox 5.2 exports the VM to the Oracle Cloud

Sat, 2017-10-21 10:32

The new release of Oracle VM VirtualBox (aka VirtualBox) is out, with new functionality to export a VM to Oracle Cloud Compute (aka Oracle Cloud Infrastructure). That can be interesting: prepare a VM on my laptop and move it to the cloud to make it accessible from everywhere. Here’s my first try. In my opinion, it’s a good idea but it probably needs further evolution.

VirtualBox

Here is what is new: in addition to .ova you can export to an Oracle Public Cloud image:

This takes some time, as it compresses and writes all the disk images


The result is a .tar.gz for each disk attached to my VM. It is actually the image of the disk (.img) that is tarred and then gzipped. My VM (called VM101) had two disks (VM101-disk1.vdi and VM101-disk2.vdi). The export generated VM101.tar.gz (containing VM101-disk002.img, which looks like my first disk) and VM101-disk003.tar.gz (containing VM101-disk003.img, which looks like my second disk).

Here is the content:


$ tar -ztvf VM101.tar.gz
-rw-r----- vboxopc10/vbox_v5.2.VBOX_VERSION_PATCHr11 4294967296 2017-10-19 21:23 VM101-disk002.img
 
$ tar -ztvf VM101-disk003.tar.gz
-rw-r----- vboxopc10/vbox_v5.2.VBOX_VERSION_PATCHr11 27917287424 2017-10-19 21:25 VM101-disk003.img

The .img is the image of the disk, with the partition and boot sector.

Compute Cloud

In the Oracle Public Cloud I can import this image: Compute Classic -> Images -> Upload Image


I upload only the image of the first disk, which contains the root filesystem:



And then I associate the uploaded file with a machine image using the ‘Associate Image’ button:


Now, I’m ready to create an instance for it: Instance -> Customize -> Private Images


Then, I can define the shape (OCPU and memory), upload my SSH public key, and add storage (I could add my second disk here) and create the instance.

Here I’ve started it:


Unfortunately, my VM still has the network interface defined for my VirtualBox environment, so I have no way to connect to it. I hope that this feature will evolve to also export the virtual network interfaces.

I have not seen any way to open a terminal on the console. The only thing I can do is take snapshots of it:


Ok, so there’s a problem way before the network interfaces. My VM from Oracle VM VirtualBox (aka VirtualBox) now starts on Oracle VM (aka OVM) and, besides the similar marketing name, they are different hypervisors (OVM running Xen). Probably a driver is missing to access the block devices, and maybe this is Bug 21244825.

That’s probably all for my tests on this until the next version. It is currently not easy to have a VM that can be started on different hypervisors and network environments.

So what?

Nothing very special here. Moving a VM from one hypervisor to another is not an easy thing, but it is a good idea. And I hope that the integration into the Oracle Cloud will be easier in the future with virtual disks and network interfaces. For the Oracle Cloud, it would be nice to have access to the console, but at least a screenshot may help to troubleshoot.

 


PostgreSQL Index Suggestion With Powa

Fri, 2017-10-20 09:21

A while ago my colleague Daniel wrote a blog about POWA. In a nice article he showed how this tool can be used to monitor our PostgreSQL instances.
In this article I am going to show how this powerful tool can help by suggesting indexes which can optimize our queries.
I am using PostgreSQL 9.6.

[root@pgservertools extension]# yum install postgresql96-server.x86_64
[root@pgservertools extension]# yum install postgresql96-contrib.x86_64

And then I initialize a cluster:

[root@pgservertools extension]# /usr/pgsql-9.6/bin/postgresql96-setup initdb
Initializing database ... OK

POWA requires the following extensions:
pg_qualstats: gathers statistics on predicates found in WHERE statements and JOIN clauses
pg_stat_kcache: gathers statistics about real reads and writes done by the filesystem layer
hypopg: adds hypothetical indexes in PostgreSQL. This extension can be used to see whether PostgreSQL would use an index or not
btree_gist: provides GiST index operator classes that implement B-tree equivalent behavior for various data types
powa_web: provides access to POWA via a browser

Note that the following packages are installed to resolve some dependencies during the installation of these extensions.

yum install python-backports-ssl_match_hostname.noarch
rpm -ivh python-tornado-2.2.1-8.el7.noarch.rpm

And then the extensions are installed using yum:

yum install powa_96.x86_64 pg_qualstats96.x86_64 pg_stat_kcache96.x86_64 hypopg_96.x86_64 powa_96-web.x86_64

After the installation, postgresql.conf is modified to load the extensions:

[root@pgservertools data]# grep shared_preload_libraries postgresql.conf | grep -v ^#
shared_preload_libraries = 'pg_stat_statements,powa,pg_stat_kcache,pg_qualstats' # (change requires restart)
[root@pgservertools data]#

And then we restart PostgreSQL:

[root@pgservertools data]# systemctl restart postgresql-9.6.service

For the POWA configuration, the first step is to create the powa user:

postgres=# CREATE ROLE powa SUPERUSER LOGIN PASSWORD 'root';
CREATE ROLE

and the repository database we will use.

postgres=# create database powa;
CREATE DATABASE

The extensions must be created in the repository database and in all databases we want to monitor

postgres=#\c powa
powa=# CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION
powa=# CREATE EXTENSION btree_gist;
CREATE EXTENSION
powa=# CREATE EXTENSION powa;
CREATE EXTENSION
powa=# CREATE EXTENSION pg_qualstats;
CREATE EXTENSION
powa=# CREATE EXTENSION pg_stat_kcache;
CREATE EXTENSION
powa=# CREATE EXTENSION hypopg;
CREATE EXTENSION

We can verify that extensions are loaded in the database using

powa=# \dx
List of installed extensions
Name | Version | Schema | Description
--------------------+---------+------------+-----------------------------------------------------------
btree_gist | 1.2 | public | support for indexing common datatypes in GiST
hypopg | 1.1.0 | public | Hypothetical indexes for PostgreSQL
pg_qualstats | 1.0.2 | public | An extension collecting statistics about quals
pg_stat_kcache | 2.0.3 | public | Kernel statistics gathering
pg_stat_statements | 1.4 | public | track execution statistics of all SQL statements executed
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
powa | 3.1.1 | public | PostgreSQL Workload Analyser-core
(7 rows)
powa=#

Now let’s create a database named mydb for our tests and let’s create all extensions inside the database.

[postgres@pgservertools ~]$ psql
psql (9.6.5)
Type "help" for help.
postgres=# create database mydb;
CREATE DATABASE
postgres=#
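The extensions are created in mydb the same way as in the powa repository database (a minimal sketch, mirroring the list verified below):

postgres=# \c mydb
mydb=# CREATE EXTENSION pg_stat_statements;
mydb=# CREATE EXTENSION btree_gist;
mydb=# CREATE EXTENSION powa;
mydb=# CREATE EXTENSION pg_qualstats;
mydb=# CREATE EXTENSION pg_stat_kcache;
mydb=# CREATE EXTENSION hypopg;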

Let’s again verify the extensions, this time in the mydb database:

mydb=# \dx
List of installed extensions
Name | Version | Schema | Description
--------------------+---------+------------+-----------------------------------------------------------
btree_gist | 1.2 | public | support for indexing common datatypes in GiST
hypopg | 1.1.0 | public | Hypothetical indexes for PostgreSQL
pg_qualstats | 1.0.2 | public | An extension collecting statistics about quals
pg_stat_kcache | 2.0.3 | public | Kernel statistics gathering
pg_stat_statements | 1.4 | public | track execution statistics of all SQL statements executed
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
powa | 3.1.1 | public | PostgreSQL Workload Analyser-core
(7 rows)
mydb=#

In the mydb database we create a table mytab and insert some rows into it:

mydb=# \d mytab
Table "public.mytab"
Column | Type | Modifiers
--------+---------+-----------
id | integer |
val | text |
.
mydb=# select count(*) from mytab;
count
-----------
100000000
(1 row)
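A comparable table can be created and populated with something like this (a sketch, adjust the row count to what your storage allows):

mydb=# create table mytab (id integer, val text);
mydb=# insert into mytab select i, 'line '||i from generate_series(1,100000000) i;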

The last step is to set up the powa-web configuration file. Below is our file:

[root@pgservertools etc]# pwd
/etc
[root@pgservertools etc]# cat powa-web.conf
servers={
'main': {
'host': 'localhost',
'port': '5432',
'database': 'powa',
'query': {'client_encoding': 'utf8'}
}
}
cookie_secret="secret"
[root@pgservertools etc]#

And then powa-web can be started with the following command:

[root@pgservertools etc]# powa-web &
[1] 5600
[root@pgservertools etc]# [I 171006 13:54:42 powa-web:12] Starting powa-web on http://0.0.0.0:8888

We can now log in at http://localhost:8888/ with the powa user we created earlier.

And then we can choose the mydb database to monitor it.

Now let’s run some queries. As my load is very low, I set pg_qualstats.sample_rate = 1 in the postgresql.conf file (thanks to Julien Rouhaud):

[postgres@pgservertools data]$ grep pg_qualstats.sample_rate postgresql.conf
pg_qualstats.sample_rate = 1
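Editing postgresql.conf is one way to do it; since pg_qualstats.sample_rate is exposed as a regular parameter, it can presumably also be changed with ALTER SYSTEM followed by a configuration reload (a sketch, assuming no restart is needed for this parameter):

postgres=# alter system set pg_qualstats.sample_rate = 1;
postgres=# select pg_reload_conf();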


mydb=# select * from mytab where id in (75,25,2014,589);
id | val
------+-----------
25 | line 25
75 | line 75
589 | line 589
2014 | line 2014
(4 rows)

Time: 9472.525 ms
mydb=#

Using the Index suggestions tab, we click on ‘Optimize the database’. We can see that an index creation is recommended, along with the potential gain.
We will just note that the hypopg extension is used to see whether the index would be used or not. Let’s see how this extension works. Hypothetical indexes are useful to know if specific indexes can increase the performance of a query, and they cost nothing to build as they don’t really exist.
Let’s create a virtual index in the mydb database:

mydb=# select * from hypopg_create_index('create index on mytab (id)');
indexrelid | indexname
------------+-----------------------
55799 | btree_mytab_id
(1 row)
mydb=#

We can verify the existence of the virtual index by

mydb=# SELECT * FROM hypopg_list_indexes();
indexrelid | indexname | nspname | relname | amname
------------+-----------------------+---------+---------+--------
55799 | btree_mytab_id | public | mytab | btree
(1 row)

Using EXPLAIN, we can see that PostgreSQL would use the index:

mydb=# explain select * from mytab where id in (75,25,2014,589);
QUERY PLAN
-------------------------------------------------------------------------------------
Index Scan using btree_mytab_id on mytab (cost=0.07..20.34 rows=4 width=17)
Index Cond: (id = ANY ('{75,25,2014,589}'::integer[]))
(2 rows)

Just note that EXPLAIN ANALYZE will not use the virtual index.
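For example, running the statement for real would not touch the hypothetical index, since it does not physically exist (a sketch; the resulting plan is not shown here but would report a sequential scan on mytab):

mydb=# explain analyze select * from mytab where id in (75,25,2014,589);

Once the tests are done, the hypothetical indexes can be discarded with hypopg_reset() (based on the hypopg documentation):

mydb=# select * from hypopg_reset();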
Conclusion
In this article we have seen how POWA can help to optimize our PostgreSQL databases.

References: https://pgxn.org/dist/hypopg/; http://powa.readthedocs.io/en/latest/

 


Managing Oracle Big Data Cloud – CE with REST API

Thu, 2017-10-19 10:29

In this blog post, we will see how to manage the Oracle Public Cloud Big Data Service Compute Edition with the REST API. Scheduling the start/stop/restart of a metered PaaS service in the Oracle Cloud can be interesting for managing your cloud credit consumption.

We first need to consult the official documentation to understand how the API is composed: https://docs.oracle.com/en/cloud/paas/big-data-compute-cloud/csbdp/QuickStart.html

Use the following URL composition to access the REST endpoint:
https://region-prefix.oraclecloud.com/resource-path

According to Oracle documentation, the following information should be taken into account.

Connection Information:

  • Identity Domain: axxxxxx
  • REstFull URL: https://psm.europe.oraclecloud.com/
  • username -password

Terminology:

  • {instanceName} = Name of the BDCS-CE service (= Cluster Name)
  • {identityDomainId} = “X-ID-TENANT-NAME: axxxxxx”
  • {function} = start, stop, restart
  • {allServiceHosts} = the entire cluster VMs (all instances which composed the cluster)
  • “Accept: <value>” = Media Type (default value = application/json)

Before starting an automation script to manage your Big Data cluster, execute single GET/POST commands to understand how the API is working.

GET request: View all Service BDCS-CE instances

/paas/api/v1.1/instancemgmt/{identityDomainId}/services/BDCSCE/instances

curl -i -X GET \
        -u "username:password" \
        -H "X-ID-TENANT-NAME: axxxxxx" \
        -H "Accept: application/json" \
        "https://psm.europe.oraclecloud.com/paas/api/v1.1/instancemgmt/axxxxxx/services/BDCSCE/instances"

Result:

HTTP/1.1 200 OK
Server: Oracle-Application-Server-11g
Strict-Transport-Security: max-age=31536000;includeSubDomains
Content-Language: en
...

{"services":{"cluster-iot":{"...

According to the HTTP status code, the command was successful.

GET request: View a specific Service BDCS-CE instances

Add the instance name to get the status of a specific cluster. Note that a BDCS-CE instance is your Big Data cluster.

/paas/api/v1.1/instancemgmt/{identityDomainId}/services/BDCSCE/instances/{instanceName}

curl -i -X GET \
        -u "username:password" \
        -H "X-ID-TENANT-NAME: axxxxxx" \
        -H "Accept: application/json" \
        "https://psm.europe.oraclecloud.com/paas/api/v1.1/instancemgmt/axxxxxx/services/BDCSCE/instances/cluster-iot"

Then use the same request structure to start/stop/restart your Big Data cluster.

POST request: Start / Stop / Restart Service Instances BDCS-CE: cluster-iot

/paas/api/v1.1/instancemgmt/{identityDomainId}/services/BDCSCE/instances/{instanceName}/hosts/{function}

As specified in the documentation, you need to change the media type to application/vnd.com.oracle.oracloud.provisioning.Service+json and use a body parameter to specify which hosts you want to manage. In our case, we want to manage all cluster hosts.

curl -i -X POST -u "username:password" \
-H "X-ID-TENANT-NAME: axxxxxx" \
-H "Content-Type: application/vnd.com.oracle.oracloud.provisioning.Service+json" \
-d '{"allServiceHosts":"true"}' "https://psm.europe.oraclecloud.com/paas/api/v1.1/instancemgmt/axxxxxx/services/BDCSCE/instances/cluster-iot/hosts/stop"

You can now start to develop an automation script to manage your Oracle Big Data Compute Edition cluster.

Python prerequisites:

Install python-pip first:

dbi@host:~/$ sudo apt-get install python-pip

Install the requests module with pip:

dbi@host:~/$ sudo pip install requests

Code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
__author__ = "Mehdi Bada"
__company__= dbi services sa
__version__ = "1.0"
__maintainer__ = "Mehdi Bada"
__email__ = ""
__status__ = "Dev"

"""

import os, sys, getopt
import requests
import simplejson
import json

# Variables defintion.

identityDomainId="axxxxxx"
instanceName="cluster-iot"
server = "https://psm.europe.oraclecloud.com"
commands = ['start', 'stop', 'restart']


def usage():
    print "\nScript Usage \n"
    print "Usage:", sys.argv[0], "-c [start|stop|restart] | -h \n"

if len(sys.argv) < 3:
    usage()
    sys.exit(2)

try:
    opts, args = getopt.getopt(sys.argv[1:], "c:h", ["command=", "help"])
except getopt.GetoptError:
    usage()
    sys.exit(2)

for opt, arg in opts:
    if opt in ("-h", "--help"):
        usage()
        sys.exit()
    elif opt in ("-c", "--command"):
        # take the command from the option argument and validate it
        icommand = arg
        if icommand not in commands:
            usage()
            sys.exit(2)


url = server + "/paas/api/v1.1/instancemgmt/%s/services/BDCSCE/instances/%s/hosts/%s" % (identityDomainId,instanceName,icommand)

payload = "{\"allServiceHosts\":\"true\"}"

headers = {
    'x-id-tenant-name': "%s" %(identityDomainId),
    'accept': "application/vnd.com.oracle.oracloud.provisioning.Service+json",
    'content-type': "application/json",
    'authorization': " ",
    }

response = requests.request("POST", url, data=payload, headers=headers)

# Print the status code of the response.
print("\n")
print(response.status_code)

# Json Parsing
content=response.content
j = simplejson.loads(content)
print (j['details']['message'])

Usage:

dbi@host:~/$ ./bdcsce_start_stop_test.py -h

Script Usage

Usage: ./bdcsce_start_stop_test.py -c [start|stop|restart] | -h

 

The Oracle REST API is not very well documented, which is why multiple tests should be performed to understand how it works.

 


Documentum: IndexAgent uninstalled continues to queue requests

Wed, 2017-10-18 03:15

We had a strange behavior at a customer site regarding the indexing queue. We used to have two IndexAgents (IA) configured and we uninstalled one.
I figured out that we still had indexing queue requests for the old index agent even though it was completely uninstalled.

I checked the following objects to see if the agent was still configured somewhere: dm_fulltext_index, dm_ftengine_config, dm_ftindex_agent_config. But the old IA was not declared anymore.

The main problem is that all changes continued to be queued in the indexing queue and nothing cleaned it up, so we ended up with about 2 million requests, filling up the database table.

I finally found out where the old IA was declared: in the registry events.
select * from dmi_registry where user_name = 'dm_fulltext_index_user_01';

r_object_id          user_name                     registered_id        event
2601b86480001d03     dm_fulltext_index_user_01     0301b86480000104     dm_save
2601b86480001d04     dm_fulltext_index_user_01     0301b86480000104     dm_destroy
2601b86480001d05     dm_fulltext_index_user_01     0301b86480000105     dm_save
2601b86480001d06     dm_fulltext_index_user_01     0301b86480000105     dm_readonlysave
2601b86480001d07     dm_fulltext_index_user_01     0301b86480000105     dm_checkin
...

In order to unregister the events, use the following:
unregister,c,<registered_id>,<event>,<queue_name>

So for me:

unregister,c,0301b86480000104,dm_save,dm_fulltext_index_user_01
unregister,c,0301b86480000104,dm_destroy,dm_fulltext_index_user_01
unregister,c,0301b86480000105,dm_save,dm_fulltext_index_user_01
unregister,c,0301b86480000105,dm_readonlysave,dm_fulltext_index_user_01
...

If you want to check if the old IA still queues requests, you can use:
select distinct name from dmi_queue_item where name like 'dm_fulltext%';

If you see the old queue name, that means you still have the registered events.
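Once the events are unregistered, the requests already queued for the old agent can be cleaned up so that the table stops growing. A possible approach is a DQL delete, to be validated on a test repository first (this statement is an assumption and was not part of the original troubleshooting):

delete dmi_queue_item objects where name = 'dm_fulltext_index_user_01';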

 


SQL Server Management Studio New Features

Thu, 2017-10-12 09:24

Microsoft SQL Server regularly comes up with new features, in particular in SQL Server Management Studio (SSMS) 17.3.

In this post, I present a new feature: Import Flat File.

How does it work?

Open SSMS and right-click on the selected database.


Choose Import Flat File


 

Click Next


 

Type the path of the file to import and the name of the new target table.


This is a preview of the file content.

Click Next


As you can see, a data type conversion is proposed; click Next.


 

Click Finish and the import task starts.


The import task is complete.


 

To verify, go to your database and execute a SELECT on the brand new table.
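For example, assuming the import created a table named dbo.MyFlatFile in a database named demo (both names are just placeholders for this sketch):

USE demo;
SELECT TOP (10) * FROM dbo.MyFlatFile;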

 

CONCLUSION:

This is a small tool that makes it easy to import flat files quickly. The only drawback is that you cannot insert the data into an existing table; the tool will always ask you to create a new table.

 


Oracle Database Multilingual Engine (MLE)

Wed, 2017-10-11 01:35

My ODC appreciation blog post was about JavaScript in the database, running in the beta of the Oracle Database Multilingual Engine (MLE). Here I’ll detail my first test, which is a performance comparison between a package written in JavaScript, running in the MLE, and one written and running in PL/SQL.

I’ve downloaded the 12GB .ova from OTN, installed the latest SQLcl, and I’m ready to load my first Javascript procedure. I want something simple that I can run a lot of times because I want to test my main concern when running code in a different engine: the context switch between the SQL engine and the procedural one.

My kid’s maths exercises were about the GCD (greatest common divisor) this weekend, so I grabbed Euclid’s algorithm in JavaScript. This algorithm was the first program I ever wrote, a long time ago, on a ZX-81, in BASIC. Now in JavaScript it can use recursion. So here is my gcd.js file:

module.exports.gcd = function (a, b) {
function gcd(a, b) {
if (b == 0)
{return a}
else
{return gcd(b, a % b)}
}
return gcd(a,b)
}

We need strong typing to be able to load it as a stored procedure, so here is the TypeScript definition in gcd.d.ts

export function gcd(a:number, b:number ) : number;

I load it with the dbjs utility, which I run in verbose mode:

[oracle@dbml MLE]$ dbjs deploy -vv gcd.js -u demo -p demo -c //localhost:1521/DBML
deploy: command called /media/sf_share/MLE/gcd.js oracle
Oracle backend: starting transpiler
gcd: processed function
Oracle backend: opening connection to database
gcd.js: retrieving functions
dropModule: called with gcd.js
loadModule: called with gcd.js
BEGIN
EXECUTE IMMEDIATE 'CREATE PACKAGE GCD AS
FUNCTION GCD("p0" IN NUMBER, "p1" IN NUMBER) RETURN NUMBER AS LANGUAGE JS LIBRARY "gcd.js" NAME "gcd" PARAMETERS("p0" DOUBLE, "p1" DOUBLE);
END GCD;';
END;
: generated PLSQL
+ gcd.js
└─┬ gcd
└── SCALAR FUNCTION GCD.GCD("p0" IN NUMBER, "p1" IN NUMBER) RETURN NUMBER

As mentioned in the verbose log, the JavaScript code is transpiled. My guess is that the JavaScript is parsed by the Oracle Truffle framework and compiled by Oracle GraalVM. More info in the One VM to Rule Them All paper.

This has loaded the package, the library and an ‘undefined’ object of type 144 (this MLE is in beta so not all dictionary views have been updated):


SQL> select * from dba_objects where owner='DEMO';
 
OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE CREATED LAST_DDL_TIME TIMESTAMP STATUS TEMPORARY GENERATED SECONDARY NAMESPACE EDITION_NAME SHARING EDITIONABLE ORACLE_MAINTAINED
----- ----------- -------------- --------- -------------- ----------- ------- ------------- --------- ------ --------- --------- --------- --------- ------------ ------- ----------- -----------------
DEMO GCD 93427 PACKAGE 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 2017-10-09:15:29:33 VALID N N N 1 NONE Y N
DEMO gcd.js 93426 LIBRARY 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 2017-10-09:15:29:33 VALID N N N 1 NONE Y N
DEMO gcd.js 93425 UNDEFINED 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 2017-10-09:15:29:33 VALID N N N 129 NONE N
 
 
SQL> select * from sys.obj$ where obj# in (select object_id from dba_objects where owner='DEMO');
 
OBJ# DATAOBJ# OWNER# NAME NAMESPACE SUBNAME TYPE# CTIME MTIME STIME STATUS REMOTEOWNER LINKNAME FLAGS OID$ SPARE1 SPARE2 SPARE3 SPARE4 SPARE5 SPARE6 SIGNATURE SPARE7 SPARE8 SPARE9
---- -------- ------ ---- --------- ------- ----- ----- ----- ----- ------ ----------- -------- ----- ---- ------ ------ ------ ------ ------ ------ --------- ------ ------ ------
93427 284 GCD 1 9 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 1 0 6 65535 284 51713CBD7509C7BDA23B4805C3E662DF 0 0 0
93426 284 gcd.js 1 22 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 1 0 6 65535 284 8ABC0DDB16E96DC9586A7738071548F0 0 0 0
93425 284 gcd.js 129 144 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 1 0 6 65535 284 0 0 0

MLE Javascript

So, I’ve executed the function three times for each of 10 million rows:

SQL> select distinct gcd(rownum,rownum+1),gcd(rownum,rownum+2),gcd(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:00:17.64

The execution, 30 million calls in total, took 17 seconds.

PL/SQL function

In order to compare, I’ve created the same in PL/SQL:

SQL> create or replace function gcd_pl(a number, b number) return number as
2 function gcd(a number, b number) return number is
3 begin
4 if b = 0 then
5 return a;
6 else
7 return gcd_pl.gcd(b,mod(a,b));
8 end if;
9 end;
10 begin
11 return gcd_pl.gcd(a,b);
12 end;
13 /

Here is the execution:

SQL> select distinct gcd_pl(rownum,rownum+1),gcd_pl(rownum,rownum+2),gcd_pl(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:01:21.05

PL/SQL UDF function

In 12c we can declare a function with the pragma UDF so that it is optimized for calling from SQL

SQL> create or replace function gcd_pl_udf(a number, b number) return number as
2 pragma UDF;
3 function gcd(a number, b number) return number is
4 begin
5 if b = 0 then
6 return a;
7 else
8 return gcd_pl_udf.gcd(b,mod(a,b));
9 end if;
10 end;
11 begin
12 return gcd_pl_udf.gcd(a,b);
13 end;
14 /

Here is the execution:

SQL> select distinct gcd_pl_udf(rownum,rownum+1),gcd_pl_udf(rownum,rownum+2),gcd_pl_udf(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:00:51.85

Native compilation

We can also improve the PL/SQL runtime by compiling it to native code, rather than having it interpreted as p-code:

SQL> alter session set plsql_code_type=native;
Session altered.
 
SQL> alter function gcd_pl_udf compile;
Function altered.
 
SQL> alter function gcd_pl compile;
Function altered.

and here is the result:

SQL> select distinct gcd_pl(rownum,rownum+1),gcd_pl(rownum,rownum+2),gcd_pl(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:01:10.31
 
SQL> select distinct gcd_pl_udf(rownum,rownum+1),gcd_pl_udf(rownum,rownum+2),gcd_pl_udf(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:00:45.54

Inline PL/SQL

Finally, similar to a UDF function, we can declare the function directly in the query, inlined in a WITH clause:


SQL> with function gcd_pl_in(a number, b number) return number as
2 function gcd(a number, b number) return number is
3 begin
4 if b = 0 then
5 return a;
6 else
7 return gcd(b,mod(a,b));
8 end if;
9 end;
10 begin
11 return gcd(a,b);
12 end;
13 select distinct gcd_pl_in(rownum,rownum+1),gcd_pl_in(rownum,rownum+2),gcd_pl_in(rownum,rownum+3) from xmltable('1 to 10000000')
14 /

And here is the result:

Elapsed: 00:00:48.92

Elapsed time summary

Here is a recap of the elapsed time:
CaptureMLE

Elapsed: 00:00:17.64 for MLE Javascript
Elapsed: 00:00:45.54 for PL/SQL UDF function (native)
Elapsed: 00:00:48.92 for Inline PL/SQL
Elapsed: 00:00:51.85 for PL/SQL UDF function (interpreted)
Elapsed: 00:01:10.31 for PL/SQL function (native)
Elapsed: 00:01:21.05 for PL/SQL function (interpreted)

The top winner is Javascript!

Perfstat Flame Graph

My tests were deliberately doing something we should avoid for performance and scalability: calling a function for each row, because this involves a lot of time spent switching context between the SQL and the procedural engine. It is, however, good for code maintainability. This overhead is not easy to measure from the database. We can look at the call stack to see what happens when the process evaluates the operand (evaopn2) and switches to PL/SQL (evapls), and what happens besides running the PL/SQL itself (pfrrun). I have recorded the call stacks with perf for the cases above to display them as Flame Graphs. When looking for more information I remembered that Frits Hoogland had already done that, so I'll let you read Frits' part 1 and part 2.
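If you want to reproduce this, the recording looks roughly like the following sketch. The PID and file names are examples, and the stackcollapse-perf.pl and flamegraph.pl scripts come from Brendan Gregg's FlameGraph repository:

# sample the call stacks of the server process while the query is running
perf record -g -p 12345 -- sleep 30
# fold the stacks and render the SVG
perf script | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl > perf-gcd.svg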

You can download my Flame Graphs; here is a summary of the .svg names and of the call stacks from operand evaluation down to the PL/SQL run:

PL/SQL UDF function (native) perf-gcd_pl_UDF_native.svg evaopn2>evapls>peidxrex>penrun
Inline PL/SQL perf-gcd_pl_inline.svg evaopn2>evapls>kkxmss_speedy_stub>peidxrex>pfrrun>pfrrun_no_tool
PL/SQL UDF function (interpreted) perf-gcd_pl_UDF_interpreted.svg evaopn2>evapls>peidxexe>pfrrun>pfrrun_no_tool
PL/SQL function (native) perf-gcd_pl_native.svg evaopn2>evapls>kgmexec>kkxmpexe>kkxdexe>peidxexe>peidxr_run>plsql_run>penrun
PL/SQL function (interpreted) perf-gcd_pl_interpreted.svg evaopn2>evapls>kgmexec>kkxmpexe>kkxdexe>peidxexe>peidxr_run>plsql_run>pfrrun>pfrrun_no_tool

But more interesting is the Flame Graph for the JavaScript execution:
CaptureMLEFlame

My interpretation of this is limited, but I don't see a stack of context-switching functions before the call to the MLE engine, which is probably why it is fast. Besides the 'unknown' part, which is probably the execution of the JavaScript itself (the libwalnut.so library has no symbols), we can see that most of the time is spent converting SQL data types to JavaScript types on the call, and the opposite on the return:

  • com.oracle.walnut.core.types.OraNumberUtil.doubleToNumber
  • com.oracle.walnut.core.types.OraNumberUtil.numberToDouble

This is the price to pay when running a different language, with different data types.

So what?

This MultiLingual Engine looks promising, both for functionality (choose the language to run in the database) and for performance (same address space as SQL, and minimal context switching). Of course, this is only a beta. There may be more things to implement, with more overhead. For example, we can imagine that if it goes to production there will be some instrumentation to measure time and record it in the Time Model. It may also be optimized further. You can test it (download it from the MLE home page) and give feedback about it (on the MLE forum).

This post was about measuring the performance of switching from SQL to PL/SQL. In the next post, I'll look at callbacks when running SQL from MLE.

 

Cet article Oracle Database Multilingual Engine (MLE) est apparu en premier sur Blog dbi services.

ODC Appreciation Day : Javascript in the database

Mon, 2017-10-09 23:00

Tim Hall has launched the idea of posting small blogs today, from all over the Oracle community, about an Oracle feature. I chose a feature that is only released as a beta for the moment: the Multilingual Engine (MLE), which is able to run Javascript stored procedures in the database.

Why?

When I first heard about this idea, last year before OOW16, I didn't understand it at all. But the good thing at Oracle Open World is that we can discuss with Oracle product managers, and with other Oracle DBAs and developers, rather than relying on rumors or wrong ideas. My perception of Javascript was narrowed to the client-side language used in thin clients, in the browser, to manage the presentation layer. It is interpreted by the browser, has no type checking, and errors are not easy to understand. Clearly, the opposite of something that I want to run in my database, on my data. PL/SQL is obviously the best choice: compiled and run inside the database, strong typing to avoid runtime errors, directly integrated with SQL for better performance, etc.

So that idea of JS in the database made me smile, but I was wrong. What I didn't get is that Javascript is just a language, and running Javascript does not mean that it has to be interpreted the way it is in a browser.

Multilingual Engine (MLE)

Actually, what Oracle is developing in its lab goes far beyond just running Javascript in the database. They are building an execution engine, like the PL/SQL or SQL execution engines, but one that is able to run programs written in different languages. They start with Javascript and TypeScript (so strong typing is available there), but this can be extended in the future (Python, and why not PL/SQL running there one day). The programs will be loaded into the database as stored procedures/functions/packages and compiled into an intermediate representation, like bytecode. This code is optimized for efficient data access, like the PL/SQL engine.

Actually, I’ll show in a future post that this new engine can run faster than PL/SQL for some processing and that it looks like the context switching with the SQL engine is highly efficient.

Javascript

So, why would you write your stored procedures in Javascript? The first reason is that there are a lot of existing libraries available and you may not want to re-write one. For example, I remember that, when working on an airline application, I had to write in PL/SQL the function that calculates the orthodromic (great-circle) distance. This is a very simple example. But if you can get the formula in Javascript, then why not compile from that rather than translate it into another language? Currently, you can find pretty much everything in Javascript or Python.
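To give an idea, a great-circle distance based on the haversine formula takes only a few lines of Javascript. The function and parameter names below are mine, for illustration only:

// great-circle (orthodromic) distance in km between two lat/lon points, haversine formula
function greatCircleKm(lat1, lon1, lat2, lon2) {
  var toRad = function(d) { return d * Math.PI / 180; };
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
        + Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 6371 * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a)); // 6371 km = mean Earth radius
}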

The second reason is that your application may have to use the same function in different layers. For example, you can check that a credit card number is correctly formed in the presentation layer, in order to show the user quickly whether it is correct or not. That may be Javascript in the browser. But the database should also verify it, in case the rows are inserted by a different application, or in case the number has been corrupted in between. That may be PL/SQL in the database. Then you have to maintain two libraries in two different languages that do the same thing. Being able to run Javascript in the database lets us re-use exactly the same library in the client and in the database.
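As an illustration, the usual Luhn check for card numbers is a small, self-contained function that could run unchanged in the browser and, with MLE, in the database (illustrative code, not tied to any particular library):

// Luhn checksum: double every second digit from the right and check the sum modulo 10
function isValidCardNumber(value) {
  var digits = String(value).replace(/\D/g, '').split('').reverse();
  var sum = 0;
  for (var i = 0; i < digits.length; i++) {
    var d = parseInt(digits[i], 10);
    if (i % 2 === 1) { d = d * 2; if (d > 9) { d = d - 9; } }
    sum = sum + d;
  }
  return digits.length > 1 && sum % 10 === 0;
}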

Finally, one reason why some enterprise architects do not want to write procedures in the database is that the language for that, PL/SQL, can only run on Oracle. If they can write their business logic in a language that can run everywhere, then there is no vendor lock-in anymore. They have the possibility to run on another RDBMS if needed, and still get the optimal performance of processing data in the database.

Public Beta

Currently, this is a lab project from Oracle in Zurich. They have released a public beta downloadable as a VM. Just go to the download page at http://www.oracle.com/technetwork/database/multilingual-engine/overview/index.html

CaptureMLE

And stay tuned to this blog to see some performance comparisons with PL/SQL user-defined functions.

 

Cet article ODC Appreciation Day : Javascript in the database est apparu en premier sur Blog dbi services.

Autonomous Database

Sun, 2017-10-08 12:09

Larry Ellison has announced Oracle 18c as the Autonomous Database: the no-human-labor, self-driving database. Here is a quick recap of what is behind the marketing words. My opinion only.

Autonomous

Since Oracle decided to be a public cloud PaaS provider, they have announced ‘unmanaged’ and ‘managed’ services. The managed service for DBaaS is where you are not the Database Administrator.
Autonomous Database 017-10-04 15.51.53
You have full administration rights to deploy and develop your application, but the system administration is done by Oracle: provisioning, patching, upgrades, system tuning, availability. The first managed service was announced last year at OOW16: the Oracle Exadata Express Cloud Service, for developers. And the second managed service is the Oracle Data Warehouse Cloud Service, for data warehousing. It is planned for December 2017 and will be based on a new version: Oracle 18c.
And let’s be clear, the ‘autonomous’ label came at the last minute, as the marketing message for Open World, to show that the managed service is fully automated, because Cloud is about automation.

So, is that only marketing? I don't think so. There are real features behind it. And some of them have existed for a long time and just had to be enhanced further in 18c.

Availability

The features for full automation have been there for a long time. RAC protects from instance or server crashes, and Data Guard protects from all other failures. Both are fully automated with no human intervention: service relocation in RAC, Fast-Start Failover in Data Guard. And both can fail over transparently with Application Continuity. Oracle also recently introduced Sharding, to tie data availability to different datacenters. This technique will also be used for RAC instance affinity in Oracle 18c.

Patch and Upgrade

You can apply patches in a rolling manner with RAC. The problem was OJVM, but this will also be rolling in 18c. You can do rolling upgrades with Data Guard. And 18c will have an automated and faster upgrade at PDB plug-in. From a system point of view, everything is fully automated. However, we will still need human intervention for testing, for planning it at the right time, and of course for troubleshooting when something goes wrong. The autonomous database is not incompatible with that. With logical replication (GoldenGate, transient logical standby) or versioning (Edition-Based Redefinition, Workspace Manager), Oracle has the tools to automatically provide an environment to test the upgrade before it is opened in production.

We can also imagine that other features may help to avoid regressions. For example, SQL Plan Management can prevent execution plan regressions on common use cases, and let the (human) user accept evolved plans later. This can also be done automatically (but humans still have to define the capture). Of course, we all know the limits of the advisors and of automatic implementation. But there are also some applications where it can be fine. This ‘autonomous’ Cloud Service is a possibility, not a universal solution.

Tuning

Again, we all know that the database cannot be optimized without knowledge of the data and of the design of the application. But Oracle also has a lot of features to automate common practices: Automatic Data Optimization, Segment Advisor, SQL Access Advisor, Auto DOP, the automatic choice of buffered or direct reads, … We have seen a lot of drawbacks with SQL Plan Directives, but that was mainly in OLTP. This new managed service is for DWH, where dynamic sampling is not a bad idea.

The idea is to ‘load data and run’, and Oracle takes care of indexes, partitioning, etc. You create the tables and declare primary keys and foreign keys (RELY DISABLE NOVALIDATE, see a previous blog post about that, and the sketch below). Then, I suppose, Oracle can guess which are the dimension tables and which are the fact tables, and apply some common practices to them: partition on the date dimension (if there is only one; maybe it has to detect some load/query patterns), create bitmap indexes on all fact foreign keys. Online statistics gathering will be extended in 18c to incrementally maintain statistics on bulk inserts, and this may include histograms.
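As a reminder, such a constraint is declared without being enforced and serves only as metadata for the optimizer. The table and column names below are illustrative:

SQL> alter table sales add constraint fk_sales_time
  2  foreign key (time_id) references times (time_id)
  3  rely disable novalidate;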

I'm very skeptical on that point, because I've seen a lot of data warehouse databases where, even on big hardware, there is always a need for optimization. But my point of view may be biased: I'm a consultant, so I see only the databases where people think they need human analysis and troubleshooting. There may be some non-critical data warehouse databases where nobody is doing any optimization, and there, implementing some default optimizations may be sufficient to make them a bit more efficient. This autonomous elastic cloud service may be a good start for some projects, when it is difficult to plan the hardware and human resources that will be needed. But I'm quite sure that after a while, designing an efficient and scalable infrastructure and data model will still require our DBA skills and human intelligence.

Load

This service is provisioned as a PDB where what we can do is limited by multitenant lockdown profiles. We can connect easily (with a credentials .zip) from SQL Developer, and we can load data from an object store, using the DBMS_CLOUD package to define the credentials (Swift) and to load text files. Once again, it is an evolution of existing features such as external tables and preprocessors.
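Roughly, the flow is to store the Swift credentials once and then copy files from the object store into an existing table. The package is brand new, so take the exact parameters below as an assumption based on the documentation, with example names and URL:

SQL> begin
  2    dbms_cloud.create_credential(
  3      credential_name => 'OBJ_STORE_CRED',
  4      username        => 'cloud_user',
  5      password        => 'my_auth_token');
  6    dbms_cloud.copy_data(
  7      table_name      => 'CHANNELS',
  8      credential_name => 'OBJ_STORE_CRED',
  9      file_uri_list   => 'https://swiftobjectstorage.../v1/mytenant/mybucket/channels.txt',
 10      format          => '{"delimiter" : ","}');
 11  end;
 12  /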

Scalability

This service is Elastic: it can scale the CPU resource up and down without stopping the service. Again this can use existing features: OVM for the host, and Resource Manager for the CDB and PDB.

So what?

Some will say that it is only marketing with nothing behind it, the same announcement that was made for each previous version. Others will say that it is really autonomous, self-driving, self-securing, self-repairing. This ‘self-driving’ idea is an analogy with Tesla (Elon Musk is a big friend of Larry Ellison), but for the moment, there is still a human inside a Tesla. Autonomous is a vision for the future, not the present. The present is a new managed service, more online operations, and easier upgrades.

What I really like is the integration of existing features to serve a new marketing trend. Do you remember when the flashback features came out? Flashback query had existed internally since the invention of rollback segments (and maybe even before, with the before image). It was exposed in 9i with a DBMS package, and in 10g with the SQL ‘AS OF’ clause. Everything was already there to bring a feature that no other RDBMS is capable of. The existing features support the marketing message at the right time, and this marketing message encourages the development of new features again, like new online operations. And those do not benefit only the managed cloud services, as they go into the main branch of Oracle Database.

The other thing I like is the idea of having a configuration dedicated to specific needs. In the OOW17 hands-on lab, there was even a "_cloud_service_type" parameter set to DWCS. Here this service is for DWH, and there will be one dedicated to OLTP in mid-2018. This is something I would like to see in future versions. For example, there was a lot of discussion about 12.1 enabling adaptive statistics by default, and 12.2 disabling them. This kind of default setting could be chosen depending on the database type: more conservative for OLTP upgrades, more adaptive for new data warehouse projects.

 

Cet article Autonomous Database est apparu en premier sur Blog dbi services.

Changing the LOCALE in CentOS/RedHat 7

Fri, 2017-10-06 07:04

What I really don’t like is this: Someone wants me to work on a Linux system for whatever purpose and then I get messages like this:

postgres@pgbox:/home/postgres/ [pg960final] cat /proc/sysrq-trigger 
cat: /proc/sysrq-trigger: Permission non accordée

or:

postgres@pgbox:/home/postgres/ [pg960final] cat /proc/sysrq-trigger 
cat: /proc/sysrq-trigger: Permiso denegado

It is not that I don't like French or Spanish, but usually (well, not in that simple case) I am not able to translate that without help from others. So how can you change it to English (which should be the default anyway nowadays)? Let's go:

A quick fix for getting the above messages in English would be:

postgres@pgbox:/home/postgres/ [pg960final] export LANG="en_EN.UTF8"
postgres@pgbox:/home/postgres/ [pg960final] cat /proc/sysrq-trigger 
cat: /proc/sysrq-trigger: Permission denied

The good thing with this approach is that the setting is temporary and valid only for my current session. The default for that particular user and the system default will not be touched. But what if you want to make this persistent for this user? Easy as well:

postgres@pgbox:/home/postgres/ [pg960final] echo "LANG=\"en_EN.UTF8\"" >> ~/.bash_profile 
postgres@pgbox:/home/postgres/ [pg960final] echo "export LANG" >> ~/.bash_profile 

Once you have that, every new session will have it set. The system default is defined in /etc/locale.conf:

postgres@pgbox:/home/postgres/ [pg960final] cat /etc/locale.conf 
LANG="en_US.UTF-8"

So when you want to make it the persistent default for the whole system, change it there. Not a big deal, but good to know.
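On CentOS/RedHat 7 the same file is also managed by systemd's localectl, so something like this should be equivalent (a sketch, not captured from my system):

# show the current system-wide settings
localectl status
# change the system default (this writes /etc/locale.conf)
localectl set-locale LANG=en_US.UTF-8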

 

Cet article Changing the LOCALE in CentOS/RedHat 7 est apparu en premier sur Blog dbi services.

And finally it is there…PostgreSQL 10

Thu, 2017-10-05 08:45

Selection_001

What are you waiting for?

 

Cet article And finally it is there…PostgreSQL 10 est apparu en premier sur Blog dbi services.

udev rules for ASM devices on RHEL 7

Thu, 2017-10-05 06:28

Preparing the storage devices for use with ASM is one of the first tasks you should do. When you do not want to use ASMLib, then udev is what you should use for this. As the order/naming of the devices in the device tree is not guaranteed (e.g. dm-1 is not guaranteed to be the same device after a reboot of the system), you need a way to fix this, and that is what udev can be used for.

The first step is to get the UUIDs of all the devices you want to use with ASM (dm-1 to dm-3, in this case). This assumes that multipathing is already set up:

[root@xxx ~]$ udevadm info --query=all --name=/dev/dm-1 | grep -i DM_UUID
E: DM_UUID=mpath-3600a098000a819ff0000018559bf68d7
[root@xxx ~]$ udevadm info --query=all --name=/dev/dm-2 | grep -i DM_UUID
E: DM_UUID=mpath-3600a098000a81955000001b359bf6920
[root@xxx ~]$ udevadm info --query=all --name=/dev/dm-3 | grep -i DM_UUID
E: DM_UUID=mpath-3600a098000a819ff0000018359bf68ce

Having this, the udev rules should look like this (of course the names do not need to be the same as below):

[root@xxx ~]$ cat /etc/udev/rules.d/30-oracle.rules 
ACTION=="add|change", ENV{DM_UUID}=="mpath-3600a098000a81955000001b359bf6920", NAME="asm-crs",  OWNER="oracle", GROUP="asmadmin", MODE="0600"
ACTION=="add|change", ENV{DM_UUID}=="mpath-3600a098000a819ff0000018359bf68ce", NAME="asm-fra",  OWNER="oracle", GROUP="asmadmin", MODE="0600"
ACTION=="add|change", ENV{DM_UUID}=="mpath-3600a098000a819ff0000018559bf68d7", NAME="asm-data", OWNER="oracle", GROUP="asmadmin", MODE="0600"

Once the system is rebooted, the permissions on the devices should be fine and the naming persistent:

[root@xxx ~]$ ls -al /dev/mapper/
total 0
drwxr-xr-x  2 root root     280 Oct  4 08:50 .
drwxr-xr-x 20 root root    4180 Oct  4 08:50 ..
lrwxrwxrwx  1 root root       7 Oct  4 13:10 asm-crs -> ../dm-2
lrwxrwxrwx  1 root root       7 Oct  4 13:10 asm-data -> ../dm-1
lrwxrwxrwx  1 root root       7 Oct  4 13:10 asm-fra -> ../dm-3
crw-------  1 root root 10, 236 Oct  4 08:50 control
lrwxrwxrwx  1 root root       7 Oct  4 12:51 disk00 -> ../dm-4
lrwxrwxrwx  1 root root       7 Oct  4 08:50 vg_root-lv_openafs -> ../dm-8
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_root -> ../dm-0
lrwxrwxrwx  1 root root       8 Oct  4 12:51 vg_root-lv_swap -> ../dm-10
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_tmp -> ../dm-7
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_var -> ../dm-6
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_var_log -> ../dm-5
lrwxrwxrwx  1 root root       7 Oct  4 12:51 vg_root-lv_var_log_audit -> ../dm-9
[root@xxx ~]$ ls -la /dev/dm*
brw-rw---- 1 root   disk     253,  0 Oct  4 12:51 /dev/dm-0
brw------- 1 oracle asmadmin 253,  1 Oct  4 13:11 /dev/dm-1
brw-rw---- 1 root   disk     253, 10 Oct  4 12:51 /dev/dm-10
brw------- 1 oracle asmadmin 253,  2 Oct  4 13:11 /dev/dm-2
brw------- 1 oracle asmadmin 253,  3 Oct  4 13:11 /dev/dm-3
brw-rw---- 1 root   disk     253,  4 Oct  4 12:51 /dev/dm-4
brw-rw---- 1 root   disk     253,  5 Oct  4 12:51 /dev/dm-5
brw-rw---- 1 root   disk     253,  6 Oct  4 12:51 /dev/dm-6
brw-rw---- 1 root   disk     253,  7 Oct  4 12:51 /dev/dm-7
brw-rw---- 1 root   disk     253,  8 Oct  4 08:50 /dev/dm-8
brw-rw---- 1 root   disk     253,  9 Oct  4 12:51 /dev/dm-9
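
If a reboot is not possible right away, reloading and re-triggering the rules should apply them on the running system as well (a sketch, not captured from the system above):

# re-read the rule files and replay the events for the block devices
udevadm control --reload-rules
udevadm trigger --type=devices --action=change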

Hope this helps.

 

Cet article udev rules for ASM devices on RHEL 7 est apparu en premier sur Blog dbi services.

Going from SLES12 SP2 to SLES12 SP3, online

Wed, 2017-10-04 05:59

SLES 12 SP3 was released some time ago, and as we have customers running on it, I thought it might be a good idea to test the upgrade from SP2 to SP3. Actually, it turned out this is quite easy and can be done online. The supported methods for doing this are either YaST or Zypper directly. As I wanted to stay on the command line, I'll use Zypper for the scope of this post. Let's go…

As said above I’ll be upgrading from SLES 12 SP2:

postgres@sles12sp2:/home/postgres/ [pg963] cat /etc/os-release 
NAME="SLES"
VERSION="12-SP2"
VERSION_ID="12.2"
PRETTY_NAME="SUSE Linux Enterprise Server 12 SP2"
ID="sles"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:12:sp2"

When you have another setup, make sure that the upgrade path is supported, which you can check here.

My current kernel version is:

postgres@sles12sp2:/home/postgres/ [pg963] uname -a
Linux sles12sp2 4.4.74-92.38-default #1 SMP Tue Sep 12 19:43:46 UTC 2017 (545c055) x86_64 x86_64 x86_64 GNU/Linux

The first thing you should do is to apply the latest patches:

postgres@sles12sp2:/home/postgres/ [pg963] sudo zypper patch
Refreshing service 'Containers_Module_12_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_12_SP2_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Software_Development_Kit_12_SP2_x86_64'.
Retrieving repository 'SLES12-SP2-Updates' metadata ...............................................................................................[done]
Building repository 'SLES12-SP2-Updates' cache ....................................................................................................[done]
Retrieving repository 'SLE-SDK12-SP2-Updates' metadata ............................................................................................[done]
Building repository 'SLE-SDK12-SP2-Updates' cache .................................................................................................[done]
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following 4 NEW patches are going to be installed:
  SUSE-SLE-SERVER-12-SP2-2017-1597 SUSE-SLE-SERVER-12-SP2-2017-1602 SUSE-SLE-SERVER-12-SP2-2017-1606 SUSE-SLE-SERVER-12-SP2-2017-1622

The following 8 packages are going to be upgraded:
  autofs gtk3-data gtk3-lang gtk3-tools libgtk-3-0 typelib-1_0-Gtk-3_0 xinetd yast2-xml

8 packages to upgrade.
Overall download size: 6.4 MiB. Already cached: 0 B. After the operation, additional 12.0 KiB will be used.
Continue? [y/n/...? shows all options] (y): y
Retrieving package autofs-5.0.9-28.3.5.x86_64                                                                       (1/8), 407.0 KiB (  2.0 MiB unpacked)
Retrieving delta: ./x86_64/autofs-5.0.9-27.2_28.3.5.x86_64.drpm, 160.5 KiB
Retrieving: autofs-5.0.9-27.2_28.3.5.x86_64.drpm ..................................................................................................[done]
Applying delta: ./autofs-5.0.9-27.2_28.3.5.x86_64.drpm ............................................................................................[done]
Retrieving package gtk3-data-3.20.10-17.3.20.noarch                                                                 (2/8), 162.7 KiB ( 10.8 KiB unpacked)
Retrieving: gtk3-data-3.20.10-17.3.20.noarch.rpm ..................................................................................................[done]
Retrieving package xinetd-2.3.15-8.8.1.x86_64                                                                       (3/8), 126.6 KiB (286.4 KiB unpacked)
Retrieving delta: ./x86_64/xinetd-2.3.15-8.5.1_8.8.1.x86_64.drpm, 17.8 KiB
Retrieving: xinetd-2.3.15-8.5.1_8.8.1.x86_64.drpm .................................................................................................[done]
Applying delta: ./xinetd-2.3.15-8.5.1_8.8.1.x86_64.drpm ...........................................................................................[done]
Retrieving package yast2-xml-3.1.2-2.3.1.x86_64                                                                     (4/8),  48.2 KiB (127.7 KiB unpacked)
Retrieving delta: ./x86_64/yast2-xml-3.1.1_3.1.2-1.46_2.3.1.x86_64.drpm, 10.3 KiB
Retrieving: yast2-xml-3.1.1_3.1.2-1.46_2.3.1.x86_64.drpm ..........................................................................................[done]
Applying delta: ./yast2-xml-3.1.1_3.1.2-1.46_2.3.1.x86_64.drpm ....................................................................................[done]
Retrieving package gtk3-tools-3.20.10-17.3.20.x86_64                                                                (5/8), 237.8 KiB (294.2 KiB unpacked)
Retrieving: gtk3-tools-3.20.10-17.3.20.x86_64.rpm .................................................................................................[done]
Retrieving package libgtk-3-0-3.20.10-17.3.20.x86_64                                                                (6/8),   2.5 MiB (  8.4 MiB unpacked)
Retrieving delta: ./x86_64/libgtk-3-0-3.20.10-16.2_17.3.20.x86_64.drpm, 175.6 KiB
Retrieving: libgtk-3-0-3.20.10-16.2_17.3.20.x86_64.drpm ...........................................................................................[done]
Applying delta: ./libgtk-3-0-3.20.10-16.2_17.3.20.x86_64.drpm .....................................................................................[done]
Retrieving package typelib-1_0-Gtk-3_0-3.20.10-17.3.20.x86_64                                                       (7/8), 335.2 KiB (879.3 KiB unpacked)
Retrieving delta: ./x86_64/typelib-1_0-Gtk-3_0-3.20.10-16.2_17.3.20.x86_64.drpm, 159.9 KiB
Retrieving: typelib-1_0-Gtk-3_0-3.20.10-16.2_17.3.20.x86_64.drpm ..................................................................................[done]
Applying delta: ./typelib-1_0-Gtk-3_0-3.20.10-16.2_17.3.20.x86_64.drpm ............................................................................[done]
Retrieving package gtk3-lang-3.20.10-17.3.20.noarch                                                                 (8/8),   2.6 MiB ( 18.8 MiB unpacked)
Retrieving delta: ./noarch/gtk3-lang-3.20.10-16.2_17.3.20.noarch.drpm, 178.6 KiB
Retrieving: gtk3-lang-3.20.10-16.2_17.3.20.noarch.drpm ............................................................................................[done]
Applying delta: ./gtk3-lang-3.20.10-16.2_17.3.20.noarch.drpm ......................................................................................[done]
Checking for file conflicts: ......................................................................................................................[done]
(1/8) Installing: autofs-5.0.9-28.3.5.x86_64 ......................................................................................................[done]
Additional rpm output:
Updating /etc/sysconfig/autofs...


(2/8) Installing: gtk3-data-3.20.10-17.3.20.noarch ................................................................................................[done]
(3/8) Installing: xinetd-2.3.15-8.8.1.x86_64 ......................................................................................................[done]
Additional rpm output:
Updating /etc/sysconfig/xinetd...


(4/8) Installing: yast2-xml-3.1.2-2.3.1.x86_64 ....................................................................................................[done]
(5/8) Installing: gtk3-tools-3.20.10-17.3.20.x86_64 ...............................................................................................[done]
(6/8) Installing: libgtk-3-0-3.20.10-17.3.20.x86_64 ...............................................................................................[done]
(7/8) Installing: typelib-1_0-Gtk-3_0-3.20.10-17.3.20.x86_64 ......................................................................................[done]
(8/8) Installing: gtk3-lang-3.20.10-17.3.20.noarch ................................................................................................[done]

Once everything is patched, make sure you have the zypper-migration-plugin installed on your system (it is already there in my case):

postgres@sles12sp2:/home/postgres/ [pg963] sudo zypper install zypper-migration-plugin
Refreshing service 'Containers_Module_12_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_12_SP2_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Software_Development_Kit_12_SP2_x86_64'.
Loading repository data...
Reading installed packages...
'zypper-migration-plugin' is already installed.
No update candidate for 'zypper-migration-plugin-0.10-12.4.noarch'. The highest available version is already installed.
Resolving package dependencies...

Nothing to do.
postgres@sles12sp2:/home/postgres/ [pg963] 

Ready to migrate to SP3:

postgres@sles12sp2:/home/postgres/ [pg963] sudo zypper migration

Executing 'zypper  refresh'

Retrieving repository 'SLE-Module-Containers12-Pool' metadata .....................................................................................[done]
Building repository 'SLE-Module-Containers12-Pool' cache ..........................................................................................[done]
Repository 'SLE-Module-Containers12-Updates' is up to date.                                                                                              
Repository 'SLES12-SP2-12.2-0' is up to date.                                                                                                            
Retrieving repository 'SLES12-SP2-Pool' metadata ..................................................................................................[done]
Building repository 'SLES12-SP2-Pool' cache .......................................................................................................[done]
Repository 'SLES12-SP2-Updates' is up to date.                                                                                                           
Retrieving repository 'SLE-SDK12-SP2-Pool' metadata ...............................................................................................[done]
Building repository 'SLE-SDK12-SP2-Pool' cache ....................................................................................................[done]
Repository 'SLE-SDK12-SP2-Updates' is up to date.                                                                                                        
All repositories have been refreshed.

Executing 'zypper  --no-refresh patch-check --updatestack-only'

Loading repository data...
Reading installed packages...

0 patches needed (0 security patches)

Available migrations:

    1 | SUSE Linux Enterprise Server 12 SP3 x86_64
        SUSE Linux Enterprise Software Development Kit 12 SP3 x86_64
        Containers Module 12 x86_64 (already installed)
       

[num/q]: 1

Executing 'snapper create --type pre --cleanup-algorithm=number --print-number --userdata important=yes --description 'before online migration''

Upgrading product SUSE Linux Enterprise Server 12 SP3 x86_64.
Found obsolete repository SLES12-SP2-12.2-0
Disable obsolete repository SLES12-SP2-12.2-0 [y/n] (y): y
... disabling.
Upgrading product SUSE Linux Enterprise Software Development Kit 12 SP3 x86_64.
Upgrading product Containers Module 12 x86_64.

Executing 'zypper --releasever 12.3 ref -f'

Warning: Enforced setting: $releasever=12.3
Forcing raw metadata refresh
Retrieving repository 'SLE-Module-Containers12-Pool' metadata ..............................................................................................[done]
Forcing building of repository cache
Building repository 'SLE-Module-Containers12-Pool' cache ...................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLE-Module-Containers12-Updates' metadata ...........................................................................................[done]
Forcing building of repository cache
Building repository 'SLE-Module-Containers12-Updates' cache ................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLES12-SP3-Pool' metadata ...........................................................................................................[done]
Forcing building of repository cache
Building repository 'SLES12-SP3-Pool' cache ................................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLES12-SP3-Updates' metadata ........................................................................................................[done]
Forcing building of repository cache
Building repository 'SLES12-SP3-Updates' cache .............................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLE-SDK12-SP3-Pool' metadata ........................................................................................................[done]
Forcing building of repository cache
Building repository 'SLE-SDK12-SP3-Pool' cache .............................................................................................................[done]
Forcing raw metadata refresh
Retrieving repository 'SLE-SDK12-SP3-Updates' metadata .....................................................................................................[done]
Forcing building of repository cache
Building repository 'SLE-SDK12-SP3-Updates' cache ..........................................................................................................[done]
All repositories have been refreshed.

Executing 'zypper --releasever 12.3  --no-refresh  dist-upgrade --no-allow-vendor-change '

Warning: Enforced setting: $releasever=12.3
Warning: You are about to do a distribution upgrade with all enabled repositories. Make sure these repositories are compatible before you continue. See 'man zypper' for more information about this command.
Loading repository data...
Reading installed packages...
Computing distribution upgrade...

The following 21 NEW packages are going to be installed:
  blog crash-kmp-default-7.1.8_k4.4.73_5-3.9 kernel-default-4.4.82-6.9.1 libefivar1 libfastjson4 libfreebl3 libibverbs libibverbs1 libmlx5-1 librados2 libsnapper4
  libsoftokn3 libtidyp-1_04-0 lttng-ust mozilla-nss mozilla-nss-certs patterns-sles-yast2 perl-HTML-Tidy python-talloc rdma-core sles-release-POOL

The following NEW pattern is going to be installed:
  yast2

The following 2 packages are going to be REMOVED:
  libsnapper3 pytalloc

The following 168 packages are going to be upgraded:
  aaa_base aaa_base-extras augeas augeas-lenses autoyast2 autoyast2-installation binutils cpupower crash cups-filters-ghostscript dbus-1 dbus-1-x11 dracut
  efibootmgr ethtool filesystem gdb grub2 grub2-i386-pc grub2-snapper-plugin grub2-systemd-sleep-plugin hwinfo ipmitool iscsiuio kdump kexec-tools kpartx
  libaugeas0 libblkid1 libcpupower0 libdbus-1-3 libdcerpc0 libdcerpc-binding0 libdrm2 libdrm_amdgpu1 libdrm_intel1 libdrm_nouveau2 libdrm_radeon1 libfdisk1
  libgbm1 libgnutls28 libldb1 libLLVM libmount1 libndr0 libndr-krb5pac0 libndr-nbt0 libndr-standard0 libnetapi0 libp11-kit0 libparted0 libsamba-credentials0
  libsamba-errors0 libsamba-hostconfig0 libsamba-passdb0 libsamba-util0 libsamdb0 libsmartcols1 libsmbclient0 libsmbconf0 libsmbldap0 libsolv-tools libstorage7
  libstorage-ruby libtalloc2 libtasn1 libtasn1-6 libtdb1 libtevent0 libtevent-util0 libuuid1 libwbclient0 libx86emu1 libyui7 libyui-ncurses7 libyui-ncurses-pkg7
  linux-glibc-devel logrotate makedumpfile mcelog mdadm Mesa Mesa-libEGL1 Mesa-libGL1 Mesa-libglapi0 multipath-tools open-iscsi openslp openslp-server openssh
  openssh-helpers p11-kit p11-kit-tools parted patterns-sles-base patterns-sles-laptop patterns-sles-Minimal perl-Bootloader perl-solv postfix python-solv
  release-notes-sles rollback-helper rsyslog ruby2.1-rubygem-cfa ruby2.1-rubygem-cfa_grub2 samba-client samba-libs samba-winbind sle-sdk-release
  sle-sdk-release-POOL sles-release snapper snapper-zypp-plugin sudo SUSEConnect SuSEfirewall2 systemd-presets-branding-SLE sysvinit-tools util-linux
  util-linux-lang util-linux-systemd yast2 yast2-add-on yast2-bootloader yast2-ca-management yast2-core yast2-country yast2-country-data yast2-dhcp-server
  yast2-dns-server yast2-firewall yast2-ftp-server yast2-hardware-detection yast2-http-server yast2-installation yast2-iscsi-client yast2-kdump yast2-ldap
  yast2-mail yast2-migration yast2-network yast2-nis-client yast2-ntp-client yast2-packager yast2-pam yast2-perl-bindings yast2-pkg-bindings yast2-printer
  yast2-registration yast2-ruby-bindings yast2-samba-client yast2-schema yast2-services-manager yast2-slp yast2-smt yast2-snapper yast2-storage yast2-support
  yast2-theme-SLE yast2-trans-en_US yast2-update yast2-users yast2-vm yast2-ycp-ui-bindings zypper zypper-lifecycle-plugin zypper-log

The following 3 patterns are going to be upgraded:
  base laptop Minimal

The following 2 products are going to be upgraded:
  "SUSE Linux Enterprise Server 12 SP2" "SUSE Linux Enterprise Software Development Kit 12"

The following 19 packages are going to be downgraded:
  branding-SLE grub2-branding-SLE iptables kernel-firmware libapparmor1 libICE6 libiptc0 libjasper1 libldap-2_4-2 libxtables10 libzypp nfs-client
  nfs-kernel-server openldap2-client openldap2-devel plymouth-branding-SLE wallpaper-branding-SLE xtables-plugins yast2-auth-client

The following package is going to change architecture:
  yast2-network  x86_64 -> noarch


The following 5 packages are not supported by their vendor:
  libtidyp-1_04-0 openldap2-devel perl-HTML-Tidy sle-sdk-release sle-sdk-release-POOL

168 packages to upgrade, 19 to downgrade, 21 new, 2 to remove, 1 to change arch.
Overall download size: 171.1 MiB. Already cached: 0 B. After the operation, additional 184.6 MiB will be used.
Continue? [y/n/...? shows all options] (y): y

As I am fine with the summary, I'll proceed:

(Use arrows or pgUp/pgDown keys to scroll the text by lines or pages.)

In order to install 'SUSE Linux Enterprise Software Development Kit 12 SP3' (product), you must agree to terms of the following license agreement:

SUSE(R) Linux Enterprise Software Development Kit 12 Service Pack 3
 
SUSE End User License Agreement 

Of course you need to agree to the license agreement and then the migration starts. At the end you should see something like this:

    dracut: *** Creating image file '/boot/initrd-4.4.82-6.9-default' ***
    dracut: *** Creating initramfs image file '/boot/initrd-4.4.82-6.9-default' done ***

There are some running programs that might use files deleted by recent upgrade. You may wish to check and restart some of them. Run 'zypper ps -s' to list these programs.

Executing 'snapper create --type post --pre-number 29 --cleanup-algorithm=number --print-number --userdata important=yes --description 'after online migration''

That’s all:

postgres@sles12sp2:/home/postgres/ [pg963] cat /etc/os-release 
NAME="SLES"
VERSION="12-SP3"
VERSION_ID="12.3"
PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"
ID="sles"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:12:sp3"

Of course the kernel is still the old one and the new one will become active after the next reboot:

postgres@sles12sp2:/home/postgres/ [pg963] sudo systemctl reboot
...
...
postgres@sles12sp2:/home/postgres/ [pg963] uname -a
Linux sles12sp2 4.4.82-6.9-default #1 SMP Tue Sep 19 06:38:15 UTC 2017 (d4a2d80) x86_64 x86_64 x86_64 GNU/Linux

Quite easy.

 

Cet article Going from SLES12 SP2 to SLES12 SP3, online est apparu en premier sur Blog dbi services.

ODA X7

Tue, 2017-10-03 14:58
ODA X6-2

So, the ODA X6 end of life seems to be planned for October, and be careful: as far as I know, this also concerns the storage expansion.

The main feedback on the X6-2 series was:

  • not enough cores when consolidating multiple databases in M, L and HA
  • not enough storage on HA when compared with X5-2
  • Standard Edition on HA would be great for cheaper RAC
  • The position of the ‘Large’ version of ODA Lite was not clear
ODA X7-2S

The Small version of the ODA Lite has the same number of cores as the X6-2: 10 cores in one socket, now an Intel Xeon Silver 4114 processor (2.2GHz). Some more memory: 6x32GB instead of 4x32GB, so 192GB, expandable to 384GB as before.

And the big new thing is that it can run virtualized (the X6-2 was bare metal only). The hypervisor here is KVM rather than OVM. OVM is accepted by Oracle for per-vCPU licensing, but for the moment the database cannot run in a VM, so you need to get database licenses for all the cores enabled on the ODA.

The storage for databases is two 6.4TB NVMe flash drives, which means 12.8TB raw capacity, 4.8TB double-mirrored, or 3.2TB triple-mirrored.

ODA X7-2 M

You can look at it in the 3D view

The larger version of the ODA Lite has more cores: 36 cores instead of 20 before, with two sockets of Intel Xeon Gold 6140 processors (2.3GHz). Some more memory: 12x32GB = 384GB instead of 256GB, expandable to 768GB as before.

Like the Small one, it can be virtualized with KVM.

The storage for databases is two 6.4TB NVMe flash drives, and you can add 6 more to bring the storage to 51.2TB raw capacity.

Because of this new capacity in the ‘Medium’ one, and the large headroom for adding disks, there is no ‘Large’ version of the ODA X7-2.

ODA X7-2 HA

You can look at it in the 3D view

The HA version of the ODA has more cores: 72 cores instead of 40 before, with two 18-core sockets of Intel Xeon Gold 6140 processors (2.3GHz) in each of the two nodes. Some more memory: 768GB instead of 256GB, expandable to 1.5TB as before.

The big new thing is that you are allowed to run Standard Edition, and therefore RAC without any option. In that case, with SE2, only one socket per node is enabled.

The second important thing is that the storage, which was limited (12TB, expandable to 24TB or 48TB with a second shelf), now comes in two flavors:

  • High Performance: 3.2TB SSDs for 16TB raw (which is 8TB in normal redundancy or 5.3TB in high redundancy), expandable to 64TB, or 128TB with the additional shelf
  • High Capacity: starts the same with 4 SSDs, but you add mechanical 10TB disks for an additional 150TB to 300TB

For the REDO, there are 4 additional 800GB disks (4 additional in the expansion shelf).

In the initial release of X7 the virtualization on HA still used OVM/Xen.

Common new features

You will be able to install 12cR2 (12.2.0.1).

You have the choice for the network between:

  • two 10 GbE ports (RJ45)
  • or two 10/25 GbE ports (SFP28)

OAKCLI will be replaced everywhere by ODACLI

CaptureODA-X7-2

 

Cet article ODA X7 est apparu en premier sur Blog dbi services.

SQL Server 2017 RTM is available: time to try!

Tue, 2017-10-03 09:47

Now SQL Server 2017 RTM  is available. :-D
The first step is to go to the Evaluation Center here

Install_00

After completing the registration with your contact information, you download the file “SQL Server 2017-SSEI-Eval.exe”.
Install_02
Run SQL Server 2017-SSEI-Eval.exe and now it begins!

A first screen gives me 3 choices to install SQL Server:

  • Basic: easy and fast to download and install SQL Server 2017
  • Custom: Download the setup package and launch the SQL Installation Center
  • Download Media: choose between an ISO or a CAB

Install_03

Basic Installation

In this mode, you have 2 steps to follow:

    • Accept the license

Install_basic_01

 

    • Choose the destination folder

Install_basic_02

And now the installation begins…
Install_basic_03
At the end, you have a SQL Server installed with a default instance as you can see in this summary window.
Install_basic_04
If you click on the button “Connect now”, the interface runs SQLCMD:
Install_basic_05_connectnow
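The same connection can of course be made manually. Assuming the default instance and Windows authentication, something like this should work from any command prompt once the instance is up:

sqlcmd -S localhost -E
1> SELECT @@VERSION;
2> GO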

The second button, “Customize”, runs the SQL Server setup that you already know, to install a SQL Server instance.
Install_basic_05_customize

The “Install SSMS” button returns you to the web page to download SQL Server Management Studio (SSMS).

Install_basic_05_ssms

Here is the direct link to SSMS.

At the end, you can also see the installation folder of your default instance…
Install_basic_06
Be very careful with this installation type: at the end, you have a default instance installed!

The last point to notice is that in C:\SQLServer2017Media you find the full setup of SQL Server 2017.
Install_basic_07

Custom installation

This installation asks for the target location where the installation files will be downloaded (the default is C:\SQLServer2017Media).

installation_custom

 

Remark: if, like me, you have installed Basic before and then choose the custom installation, it will detect that you already have the installation files (the download phase is skipped!).

After the download, it will run the “classical” setup (SQL Server Installation Center) to install an instance.

Install_custom02

Download Media

This type of installation gives you the choice of language, the choice of package (.iso or .cab), and the folder for the downloaded file.
Install_04
Click Download and let’s go!
Install_05
After a few minutes, the file (an ISO in my case) is here.
Install_06

This last installation type is the best one for me because I can mount the ISO file on VMs to install and test SQL Server 2017!
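On a Windows test VM, mounting the downloaded ISO is a one-liner in PowerShell; the path and file name below are just an example:

# mount the evaluation ISO so the setup can be started from the new drive letter
Mount-DiskImage -ImagePath "C:\Temp\SQLServer2017-x64-ENU.iso"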

Now, it's time to test this new version for my team and write a lot of blogs for you! 8-)

 

Cet article SQL Server 2017 RTM is available: time to try! est apparu en premier sur Blog dbi services.
