Yann Neuhaus

dbi services technical blog

Is Switzerland affected by the GDPR?

Tue, 2018-02-13 05:23

On 25 May 2018, the General Data Protection Regulation (GDPR) will come into force. It will impact every company that processes personal data of European residents.

With this deadline approaching, a lot of false information is circulating within Swiss companies. The goal of this blog post is to shed some light on a few of the questions that arise. To do so, we asked Me Metille, a lawyer specialised in data protection, to answer the questions a Swiss company may ask itself about this regulation.

1. Q. Is a processor working for a company subject to the GDPR automatically subject to the GDPR itself, given the chain of liability between the data controller and its suppliers?
A. > Some specialists consider that the processor is subject to it, but most hold that the processor is not subject to the GDPR merely because it is a processor. On the other hand, a processor that intentionally targets European customers will be subject to it. In any case, the processor will have obligations related to the GDPR, in particular those of Article 28.

2. Q. Is a Swiss company that employs people in the European Union automatically subject to the GDPR?
A. > No, a company is not subject to the GDPR just because it employs European Union nationals.

3. Q. Is a Swiss company that employs people in the European Union subject to the GDPR if its employees carry out a commercial activity in the European Union?
A. > It depends on several factors, such as the place where invoices are issued, the place from which the activity is initiated, or the intention to develop an activity in Europe, but there is a risk that the employees are qualified as an establishment and that the Swiss company is treated as a company present in the European Union.

4. Q. As a processor, is there an obligation to advise with regard to the data being processed?
A. > The processor must mainly ensure the security of the processed data and compliance with the contract. If the processor becomes aware of something that could potentially endanger the security of the data, it must report it (e.g. an unsecured VPN connection giving access to sensitive data). In some cases, it will be its duty to refuse to carry out the tasks it has been mandated to do (e.g. making an unencrypted export of sensitive data and sending it to a recipient outside the European Union). The data controller is the one who carries the most responsibility and who must ensure that the processing is lawful.

5. Q. Is the right to erasure, or right to be forgotten, specified in Article 17 of the GDPR in contradiction with legal provisions requiring data to be kept for a fixed period?
A. > If a Swiss law states that information must be kept for a fixed period, that law prevails.

6. Q. How can the right to be forgotten be reconciled with a data backup policy?
A. > The backup must not be used for another purpose, and in particular not to retrieve data that one would not be allowed to keep. The necessary measures must be taken so that, on the one hand, only authorised persons can access the backup media and, on the other hand, when a backup set is restored, only the data that may be used is exploited (private data, data that was deleted from the original media after the backup, or data that the right to be forgotten requires not to keep, will be removed).

7. Q. How long can personal data be kept?
A. > It must not be kept longer than necessary to achieve the purpose for which it was collected. A reasonable safety margin can nevertheless be added. A period of a few months can be considered a reasonable safety margin.

8. Q. Can an employee's job application be kept indefinitely?
A. > No. If the candidate is not hired, the application file must be returned or destroyed. Keeping it for 3-4 months, to be able to justify oneself in case of complaints related to the decision not to hire, is accepted.
If you want to keep the file for another position, the explicit consent of the candidate is required (and in that case the retention should be limited to 2-3 years).

9. Q. What technical means can concretely contribute to the protection of data?
A. > Software for encryption, data availability and data integrity. Everything that makes it possible to trace what has been done (access control, logs, documentation, hardening, etc.).

10. Q. Do the sanctions related to the GDPR apply only in case of a security breach or data theft?
A. > The sanctions apply in case of violation of the GDPR rules, so not only in case of a security breach. There could even be a double sanction in case of a security breach if the company does not report it: first, the company subject to the GDPR could be sanctioned for not having ensured the security of the data, and second, because the company that suffered the data theft did not report it. Even without having data stolen, it could already be fined for not having taken elementary measures to ensure security.

11. Q. Which European Union country will apply the sanctions on Swiss territory?
A. > Most likely each national authority of a European Union country whose residents are concerned. It is not certain that the investigative acts and the sanction would be validly recognised by a Swiss judge, which would make collection difficult.

12. Q. Are there insurance products to cover the risks related to the GDPR?
A. > Yes, there are products to cover certain risks, but you can find anything and everything, and some products are fanciful. It is important to distinguish what is covered from what is not, the exclusion conditions, the deductibles and the coverage limits.

13. Q. Does a company subject to the GDPR have an obligation to inform a processor that it is working on personal data?
A. > The company subject to the GDPR must not only tell its processor that it is working on personal data, but also tell it precisely what it has to do.

14. Q. Should the DPO be external or internal?
A. > A DPO is required only in limited cases (public authority, large-scale systematic monitoring, sensitive data). The DPO can be internal or external.
Independently of the legal obligation, what matters is that the DPO has a good knowledge of the company and is available for the employees or project managers. If the DPO lacks the legal or technical knowledge, he can ask for external help, but if he does not know the company well, he will struggle to get help on the business aspects.
The DPO role can be assumed by several people (a committee), but the responsibilities must be clearly defined beforehand and there must be a single external point of contact.

15. Q. What relationship should there be between the DPO and top management?
A. > The DPO must report to the highest level of management. He performs his tasks independently and does not receive instructions. He can also take on other tasks and be administratively attached to a department rather than directly to top management.

16. Q. Are there links between the GDPR and ISO 27000?
A. > Topics such as identification of the processing activities, audits, data mapping, access control and documentation are very similar.

You will find more information related to the GDPR on Me Metille's blog at the following address: https://smetille.ch/2017/12/14/la-suisse-et-le-rgpd/ or in the article from Le Temps, "Le RGPD, la révolution du consentement", dated 12 February 2018.

We hope this blog post was able to answer some of your questions.

 


V$MYSTAT delta values

Mon, 2018-02-12 14:32

Here is a little script I use from time to time to look at V$MYSTAT values, displaying on one line a set of statistics with their delta values between two calls.

The first script, _mystat_init.sql, initializes the variables. The second one displays the values, such as:

SQL> @ _mystat_diff.sql
 
db block changes        redo size undo change vector size     redo entries
---------------- ---------------- ----------------------- ----------------
          57,371       15,445,852               6,111,608           37,709

Those two scripts are generated by defining the statistics:

define names="'redo size','redo entries','undo change vector size','db block changes'"

and running the following to spool the two scripts:

sqlplus -s / as sysdba <<'END'
set pagesize 0 feedback off linesize 1000 trimspool on verify off echo off
-- the statistics must also be defined inside this non-interactive session, otherwise &names would prompt
define names="'redo size','redo entries','undo change vector size','db block changes'"
with stats as (
select rownum n,stat_id,name from (select stat_id,name from v$statname where name in (&names) order by stat_id)
)
select 'define LAG'||stat_id||'=0' from stats
union all
select 'column "CUR'||stat_id||'" new_value '||'LAG'||stat_id||' noprint' from stats
union all
select 'column "DIF'||stat_id||'" heading '''||name||''' format 999G999G999G999' from stats
.
spool _mystat_init.sql
/
spool off
with stats as (
select rownum n,stat_id,name from (select stat_id,name from v$statname where name in (&names) order by stat_id)
)
select 'set termout off verify off' from dual
union all
select 'select ' from dual
union all
select ' '||decode(n,1,' ',',')||'"CUR'||stat_id||'" - '||'&'||'LAG'||stat_id||' "DIF'||stat_id||'"' from stats
union all
select ' '||',nvl("CUR'||stat_id||'",0) "CUR'||stat_id||'"' from stats
union all
--select ','''||'&'||'1'' comments' from dual
--union all
select q'[from (select stat_id,value from v$mystat join v$statname using(statistic#) where name in (&names)) pivot (avg(value)for stat_id in (]' from dual
union all
select ' '||decode(n,1,' ',',')||stat_id||' as "CUR'||stat_id||'"' from stats
union all
select '))' from dual
union all
select '.' from dual
union all
select 'set termout on' from dual
union all
select '/' from dual
.
spool _mystat_diff.sql
/
spool off
END

Then, in sqlplus or SQLcl, you run:

SQL> @ _mystat_init.sql

to initialize the values to 0 and:

SQL> @ _mystat_diff.sql

each time you want to display the difference from last call.
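
To make the generated scripts less magic, here is roughly what they contain for two of the statistics. This is only a sketch: the stat_id values 1001 and 1002 are hypothetical placeholders, the real ones come from v$statname on your version.

-- _mystat_init.sql (sketch): reset the LAG variables and declare the columns
define LAG1001=0
define LAG1002=0
column "CUR1001" new_value LAG1001 noprint
column "CUR1002" new_value LAG1002 noprint
column "DIF1001" heading 'redo size' format 999G999G999G999
column "DIF1002" heading 'redo entries' format 999G999G999G999

-- _mystat_diff.sql (sketch): show current minus previous value, and capture the current value for the next call
set termout off verify off
select
   "CUR1001" - &LAG1001 "DIF1001"
  ,"CUR1002" - &LAG1002 "DIF1002"
  ,nvl("CUR1001",0) "CUR1001"
  ,nvl("CUR1002",0) "CUR1002"
from (select stat_id,value from v$mystat join v$statname using(statistic#) where name in ('redo size','redo entries')) pivot (avg(value)for stat_id in (
   1001 as "CUR1001"
  ,1002 as "CUR1002"
))
.
set termout on
/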

 


Index seek operator and residual IO

Mon, 2018-02-12 09:00

This blog post draws on the previous article about index seek and gotchas. I encountered another interesting case, but I decided to write this article not so much because of the nature of the problem itself as because of the different ways that exist to troubleshoot it.

Firstly, let’s set the scene:

A simple update query and its corresponding execution plan, which tends to suggest the plan is efficient in terms of performance:

declare @P0 datetime2
       ,@P1 int
       ,@P2 nvarchar(4000)

set @P0 = GETDATE()
set @P1 = 1
set @P2 = '005245'

UPDATE
	[TABLE]
SET
	[_DATE] = @P0
WHERE
	[_ID] = @P1
    AND
    [_IDENTIFIER] = @P2
;

 

[Image: blog 129 - 1 - query plan before optimization]

An index seek – with a cardinality of 1 – followed by a clustered index update operation. A simple case, as you may notice here. But although the customer noticed a warning icon in the execution plan, he was confident enough about this plan not to worry about it.

But if we look closer at the warning icon, it concerns a CONVERT_IMPLICIT operation (a hidden conversion done by the optimizer) that may affect the SeekPlan, as stated by the pop-up:

[Image: blog 129 - 2 - query plan warning]

In fact, in the context of this customer, this query became an issue when it was executed a thousand times in a very short period, which triggered lock timeout issues and also consumed one entire vCPU of the server.

Let’s jump quickly to the root cause and the solution. Obviously, the root cause was the @P2 parameter type here – NVARCHAR(4000) – which forced the optimizer to use a CONVERT_IMPLICIT operation, making the predicate non-sargable and the index seek operation inefficient. This is particularly true when the CONVERT_IMPLICIT operation concerns the column in the predicate rather than the parameter. Indeed, in this case the @P2 type is NVARCHAR(4000) whereas the _IDENTIFIER column type is VARCHAR(50), and the optimizer first has to convert the _IDENTIFIER column (with the lower precedence type) to match the @P2 parameter (with the higher precedence type). Note also that the @P1 parameter is not selective enough to prevent a range scan operation in this case.
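
One way to address it – a sketch based on the anonymised names used above, not necessarily the exact fix applied at this customer – is to give the parameter the same type as the column (or to convert the parameter explicitly, never the column), so that the predicate stays sargable and the seek predicate can be applied on [_IDENTIFIER] directly:

declare @P0 datetime2
       ,@P1 int
       ,@P2 varchar(50)   -- matches the [_IDENTIFIER] column type: no CONVERT_IMPLICIT on the column side

set @P0 = GETDATE()
set @P1 = 1
set @P2 = '005245'

UPDATE [TABLE]
SET    [_DATE] = @P0
WHERE  [_ID] = @P1
  AND  [_IDENTIFIER] = @P2;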

Although the actual execution plan shows an index seek with a cardinality of 1, I was quickly able to show my customer that the real numbers told a completely different story, by using the IO and TIME statistics.
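
The kind of output shown below is what the standard session-level switches produce; nothing specific to this case, but here is how to get these numbers yourself:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- ... run the UPDATE statement here ...
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;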

  CPU time = 0 ms,  elapsed time = 0 ms.
Table 'TABLE'. Scan count 1, logical reads 17152, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 609 ms,  elapsed time = 621 ms.

 

Well, do you really think that a true index seek would read such a number of pages and consume this amount of CPU per execution? :) … I let you imagine how intensive this query may be in terms of CPU when executed a lot of times in a very short period …

Another point that puts us quickly on the right track is that the comparison is done in the predicate section of the index seek operation, meaning it will be checked after the seek predicate. Thus, the index seek operator acts much like an index scan: it ranges over all the rows matching the (not very selective) seek predicate and applies the residual predicate to each of them.

[Image: blog 129 - 3 - query plan index seek operator]

Referring to the IO statistics output above, we may notice the operator read 17152 pages, and a quick look at the corresponding index physical stats allowed us to safely assume that the storage engine read a big part of the index in this case.

SELECT 
	index_id,
	partition_number,
	index_level,
	page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('TABLE'), NULL, NULL, 'DETAILED')
WHERE index_id = 10

 

Ultimately, we may also benefit from the undocumented trace flag 9130, which highlights the issue in an obvious way by showing a filter operator (the predicate section of the index seek operator pop-up) after the index seek operator itself (reading from right to left). From a storage perspective, we are far from the reality the plan first exposed.
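
If you want to see this alternative plan shape yourself, one possible way – an assumption on my side, the post does not show how the flag was enabled – is to activate it at the query level. Keep in mind it only changes what the plan displays, not how much is actually read, and being undocumented it should stay on test environments:

UPDATE [TABLE]
SET    [_DATE] = @P0
WHERE  [_ID] = @P1
  AND  [_IDENTIFIER] = @P2
OPTION (QUERYTRACEON 9130);   -- query-scoped trace flag (requires sysadmin)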

[Image: blog 129 - 5 - TF9130]

SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.
Table 'table'. Scan count 1, logical reads 6, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 1 ms.

 

Ok, my problem was fixed, but let’s jump quickly to another interesting point. Previously, I used different tools and tricks to identify the “residual” seek operation. But do I have to go through all this stuff to identify this kind of issue? Is there an easier way? The good news is yes, if you run SQL Server 2012 SP3+, SQL Server 2014 SP2+ or SQL Server 2016+, because the SQL Server team added runtime information to the showplan.

In my context, it drastically simplifies the analysis because runtime statistics are directly exposed per operator. I got interesting information such as ActualRowsRead (3104706 rows) and ActualLogicalReads (17149 pages).

Note that if we refer to the IO statistics from the first query where I got 17152 pages for the table TABLE, the missing 3 pages are in fact on the update index operator.
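
The XML fragment below is an extract of the actual execution plan. One simple way to capture it yourself, with the RunTimeInformation element included, is the following (capturing it from SSMS with “Include Actual Execution Plan” works as well):

SET STATISTICS XML ON;
-- ... run the UPDATE statement here ...
SET STATISTICS XML OFF;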

<RelOp AvgRowSize="36" 
						   EstimateCPU="3.28721" 
						   EstimateIO="12.1472" 
						   EstimateRebinds="0" 
						   EstimateRewinds="0" 
						   EstimatedExecutionMode="Row" 
						   EstimateRows="1" 
						   LogicalOp="Index Seek" 
						   NodeId="3" 
						   Parallel="false" 
						   PhysicalOp="Index Seek" 
						   EstimatedTotalSubtreeCost="15.4344" 
						   TableCardinality="5976470">
                      <OutputList>
                        <ColumnReference 
							Database="[DB]" 
							Schema="[dbo]" 
							Table="[TABLE]" 
							Column="_DELIVERED" />
                      </OutputList>
                      <RunTimeInformation>
                        <RunTimeCountersPerThread Thread="0" 
												  ActualRows="1" 
												  ActualRowsRead="3104706" 
												  ActualEndOfScans="1" 
												  ActualExecutions="1" 
												  ActualElapsedms="1209" 
												  ActualCPUms="747" 
												  ActualScans="1" 
												  ActualLogicalReads="17149" 
												  ActualPhysicalReads="1" 
												  ActualReadAheads="17141" 
												  ActualLobLogicalReads="0" 
												  ActualLobPhysicalReads="0" 
												  ActualLobReadAheads="0" />
                      </RunTimeInformation>

 

Finally, if, like me, you are an enthusiast of the famous Plan Explorer tool for troubleshooting execution plans, runtime statistics were also added to this tool. The warning icons and corresponding pop-ups alert us very quickly about the residual IO issue, as shown below:

[Image: blog 129 - 6 - PlanExplorer execution plan]

[Image: blog 129 - 6 - PlanExplorer]

Happy troubleshooting!

 


Windows Server – Service not starting with ‘Error 1067: The process terminated unexpectedly’

Sat, 2018-02-10 02:30

Some time ago, we were installing Migration Center (from fmeAG) on a Windows Server and, at the end of the installation, the service named Migration Center Job Server gets configured and finally started. Unfortunately, this didn’t go well and the start command wasn’t working at all. We were using a dedicated technical account (AD account) to do the installation and to run this service. This is the error we got:

[Image: fmeAG_MC_Service_Start_Fail_Error]

 

The error code 1067 means ‘ERROR_PROCESS_ABORTED’. This was the first time I saw this error, ever, so I wanted to know more about it. I performed a lot of tests on this environment to try to narrow down the issue and the following tests in particular were quite relevant:

  • Changing the Log On As user to something else
    • The service is able to start
    • It means that the issue is somehow linked to this particular technical user
  • Checking the ‘Log On As a service’ Local Policies
    • The technical user is listed and allowed to start the service so there is no issue there
  • Using the technical user to start a service on another Windows Server
    • The service is able to start
    • It means that the issue is not linked to the technical user globally

 

So with all this information, it appeared that the issue was linked to this particular technical user but only locally to this Windows Server… When working with AD accounts, it is always possible to face some issues with the local profile vs the domain profile (thanks Stéphane Haby for pointing that out to me, I’m not a Windows expert ;)) so I tried to work on that and, after some time, I found a workaround to this issue. The workaround is simply to delete the local profile…

Since this is an AD account (which I will call <SYS_AD_ACCOUNT> below), it is possible to remove the local profile: it will just be recreated automatically when needed. You simply need to remove it properly and, for that, there are particular steps:

  1. Login to the Windows Server with another administrator account
  2. Log out all <SYS_AD_ACCOUNT> sessions: stop all services running with this account, all processes (from the task manager, there should be nothing on the “users” tab), and so on… Alternatively, you can also disable the services (or set them to manual) and then reboot the Windows Server
  3. Delete the complete folder C:\Users\<SYS_AD_ACCOUNT>. If you already tried this but did not remove the registry properly, you might end up with a C:\Users\TEMP folder… If this is the case, remove it now too.
  4. Delete the registry key matching <SYS_AD_ACCOUNT> (check the “ProfileImagePath” parameter) under “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList” (see the command sketch after this list)
  5. Reboot the Windows Server
  6. Login to the Windows Server with the administrator account used for the step 1.
  7. Set <SYS_AD_ACCOUNT> for the “Log On As” of the target Service
  8. Start the target Service => It should now be working
  9. Login to the Windows Server with the <SYS_AD_ACCOUNT> account
  10. Open a cmd prompt => The current folder should be C:\Users\<SYS_AD_ACCOUNT> and not C:\Users\TEMP
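
For reference, steps 3 and 4 can also be done from an elevated command prompt. This is only a sketch, with <SYS_AD_ACCOUNT> and <SID> as placeholders to adapt to your environment:

:: List each profile SID with its ProfileImagePath; find the one pointing to C:\Users\<SYS_AD_ACCOUNT>
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /v ProfileImagePath

:: Remove the local profile folder and the matching ProfileList key (replace <SID> with the subkey found above)
rmdir /s /q "C:\Users\<SYS_AD_ACCOUNT>"
reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\<SID>" /f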

 

I am not saying that this workaround will fix each and every issue linked to the error code 1067… Sometimes, this error will be linked to the fact that the service cannot find its executable, for example. But in my case, it fixed the issue, so if you are running out of options, maybe you can just give it a try ;).

 

 


Documentum – DA 7.3 showing java.lang.NullPointerException on every action

Sat, 2018-02-10 01:45

Last year, we started to upgrade some Documentum Administrator installations from 7.2 to 7.3 and, directly after, we started to see some NullPointerExceptions in the log files. We are using DA 7.3 on some recent WebLogic Server versions (12.1.3, 12.2.1.2). We usually deploy DA as a WAR file (so not exploded) with just the dfc.properties, keystores and logs outside of it. This is the kind of error we started to see in the startup log file (nohup log file in our case) as soon as it was upgraded to 7.3:

java.lang.NullPointerException
        at java.io.FileInputStream.<init>(FileInputStream.java:130)
        at java.io.FileInputStream.<init>(FileInputStream.java:93)
        at com.documentum.web.form.WebformTag.fetchExtnNativeVersion(WebformTag.java:282)
        at com.documentum.web.form.WebformTag.renderExtnJavaScript(WebformTag.java:268)
        at com.documentum.web.form.WebformTag.doStartTag(WebformTag.java:159)
        at jsp_servlet._custom._jsp.__loginex._jsp__tag3(__loginex.java:1687)
        at jsp_servlet._custom._jsp.__loginex._jspService(__loginex.java:272)
        at weblogic.servlet.jsp.JspBase.service(JspBase.java:35)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.ServletStubImpl.onAddToMapException(ServletStubImpl.java:489)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:376)
        ...

 

Lines 7 and 8 of the stack changed, depending on the action that was being performed. For example, an access to the login page would print the following:

        at jsp_servlet._custom._jsp.__loginex._jsp__tag3(__loginex.java:1687)
        at jsp_servlet._custom._jsp.__loginex._jspService(__loginex.java:272)

 

Then once logged in, you would see each component printing the same kind of stack, with the following lines 7 and 8:

        at jsp_servlet._webtop._main.__mainex._jsp__tag0(__mainex.java:286)
        at jsp_servlet._webtop._main.__mainex._jspService(__mainex.java:116)

 

        at jsp_servlet._webtop._classic.__classic._jsp__tag0(__classic.java:408)
        at jsp_servlet._webtop._classic.__classic._jspService(__classic.java:112)

 

        at jsp_servlet._webtop._titlebar.__titlebar._jsp__tag0(__titlebar.java:436)
        at jsp_servlet._webtop._titlebar.__titlebar._jspService(__titlebar.java:175)

 

        at jsp_servlet._webtop._messagebar.__messagebar._jsp__tag0(__messagebar.java:145)
        at jsp_servlet._webtop._messagebar.__messagebar._jspService(__messagebar.java:107)

 

and so on…

We were working with OpenText on this issue. As mentioned in the stack trace, this happens because DA is trying to fetch the “ExtnNativeVersion”. This is a property file located at: wdk/extension/client/EMC/ContentXfer/com.emc.wdk.native/1. Unfortunately, when DA 7.3 tries to locate this file, it does not work even though the file is really present… It does not work because DA is deployed as a WAR file (an archive) and therefore the path to the file is wrong. I suspect this is something that Documentum changed recently, using getRealPath(). To change the behavior of the getRealPath function, you have to set the property “Archived Real Path Enabled” to true so that it returns the canonical path of the file…

So to remove these exceptions, you have two options:

I. At the domain level:
  1. Login to the WebLogic Administration Console using your weblogic account
  2. Navigate to the correct page: DOMAIN > Configuration > Web Applications
  3. Click on the ‘Lock & Edit’ button
  4. Check the ‘Archived Real Path Enabled’ checkbox ( = set it to true)
  5. Click on the ‘Save’ and then ‘Activate Changes’ buttons
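
For reference, these console steps end up writing the setting into the domain’s config.xml. The resulting entry should look roughly like the snippet below – a sketch only, the exact placement under <web-app-container> is an assumption on my side to verify against your own config.xml:

<!-- Extract of $DOMAIN_HOME/config/config.xml -->
<web-app-container>
  <show-archived-real-path-enabled>true</show-archived-real-path-enabled>
</web-app-container>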

This adds the configuration to the global config.xml file, so it is enabled for the whole domain. As often, I would not recommend that, but rather configuring this at the application level, because you might have other applications that do NOT want this setting set to true… That is why you have option number two:

II. At the application level:

The high level steps to do that would be the following ones:

  1. Extract the weblogic.xml file from the application war file
  2. Add the ‘Archived Real Path Enabled’ property in it and set it to true
  3. Repackage the war file and redeploy it

This is pretty simple:

[weblogic@weblogic_server_01 ~]$ cd $APPS_HOME
[weblogic@weblogic_server_01 apps]$ jar -xvf da.war WEB-INF/weblogic.xml
 inflated: WEB-INF/weblogic.xml
[weblogic@weblogic_server_01 apps]$
[weblogic@weblogic_server_01 apps]$ vi WEB-INF/weblogic.xml
[weblogic@weblogic_server_01 apps]$
[weblogic@weblogic_server_01 apps]$ tail -6 WEB-INF/weblogic.xml

   <container-descriptor>
      <show-archived-real-path-enabled>true</show-archived-real-path-enabled>
   </container-descriptor>

</weblogic-web-app>
[weblogic@weblogic_server_01 apps]$
[weblogic@weblogic_server_01 apps]$ jar -uvf da.war WEB-INF/weblogic.xml
adding: WEB-INF/weblogic.xml(in = 989) (out= 398)(deflated 59%)
[weblogic@weblogic_server_01 apps]$

 

Then, you just need to update the deployment in the WebLogic Administration Console and that’s it, the exceptions should be gone. As far as I’m aware, these exceptions did not have any impact on the proper behavior of Documentum Administrator, but it is still very ugly to have hundreds of them in the log file…
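
If you prefer scripting that last part instead of going through the Administration Console, a small WLST sketch could do it – the URL, credentials and the ‘da’ deployment name are assumptions to adapt to your environment:

# WLST sketch: redeploy the application so WebLogic picks up the updated da.war
connect('weblogic', '<password>', 't3://weblogic_server_01:7001')
redeploy('da')
disconnect()
exit()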

 

 


WebLogic – SSO/Atn/Atz – Infinite loop

Sat, 2018-02-10 01:00

This blog post will be the last of my series around WebLogic SSO (how to enable logs, 403 N°1, 403 N°2), which I started a few weeks ago. Several months ago, on a newly built High Availability environment (two WebLogic Servers behind a Load Balancer), an application team was deploying their D2 client as usual. After the deployment, it appeared to the tester that the Single Sign-On was not working when using the SSO URL through the Load Balancer. For this user, the browser went into an infinite loop between the LB URL and the SAML2 Partner URL. When I tried to replicate the issue, the SSO was working fine for me…

The only possible reason for the infinite loop affecting some users but not all is that there is an issue with the SSO setup/configuration on only one of the two WebLogic Servers… Indeed, the Load Balancer (with sticky sessions) probably redirected the tester to the WLS with the issue, while my session was redirected to the other one. Enabling the debug logs (on both WLS) quickly confirmed that all communications going through the first WebLogic Server were working properly (with SSO), while the second WebLogic Server had the infinite loop issue.
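
The debug categories you will see in the logs below (SecuritySAML2Service, SecuritySAML2Atn, SecurityAtn, SecurityAtz) can be switched on per managed server. Here is a WLST sketch of one way to do it – the server name, URL and credentials are placeholders to adapt:

# WLST sketch: enable the SAML2/Atn/Atz debug flags on one managed server
connect('weblogic', '<password>', 't3://weblogic_server_01:7001')
edit()
startEdit()
cd('/Servers/msD2-01/ServerDebug/msD2-01')   # msD2-01 is a placeholder managed server name
cmo.setDebugSecuritySAML2Service(True)
cmo.setDebugSecuritySAML2Atn(True)
cmo.setDebugSecurityAtn(True)
cmo.setDebugSecurityAtz(True)
save()
activate()
disconnect()
exit()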

The logs from WebLogic Servers 1 and 2 were identical up to a certain point (basically the whole SAML2 part), except for some elements such as the local hostname, the date/time and some IDs (like Java class instances, and so on…). This is the common content of both WLS logs:

<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <SAML2Filter: Processing request on URI '/D2/X3_Portal.jsp'>
<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is '/D2/X3_Portal.jsp'>
<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is not a service URI>
<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): returning service type 'SPinitiator'>
<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <SP initiating authn request: processing>
<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <SP initiating authn request: partner id is null>
<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <SP initiating authn request: use partner binding HTTP/POST>
<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <post from template url: null>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <SAML2Servlet: Processing request on URI '/saml2/sp/acs/post'>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is '/saml2/sp/acs/post'>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): service URI is '/sp/acs/post'>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): returning service type 'ACS'>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <Assertion consumer service: processing>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <get SAMLResponse from http request:PBvbnNSJ1cXMlFtZzaXM6bmHhtbG5zm46NhwOlJlOnNhbWxHbWxIb2Fc3wP6dGM6
U0FiB4bWxuczp4NTAwPSJ1cm46b2FzNTDoyLjA6YXNzZXJ0aW9uIaXM6bmFtZXM6
U0FdG9jb2wiIHhtbG5zOmRzaWc9Imh0dHA6Ly93NTDoyLjA6cHJvd3cudzMub3Jn
aHR0cDoa5vcmcvMjAwMS9W5zdGFuY2vL3d3dy53MyYTUxTY2hlbWEtUiIERlc3Rp
MWNxM2FjNzI1ZDjYmIVhNDM1Zlzc3VlSW5zdGFudD0ijhlNjc3OTkiIEMjAxNy0x
LzINpZyMiIHhtwMDAvMDkveG1sZHbG5DovL3d3dy53My5vczOmVuYz0iaHR0cmcv
...
YWRpb24+P1sOkF1ZGllbHJpY3mPHNhNlUmVzdC9zYW1sOkNvbmRpdGlvbnM+bWw6
YXNzUzpjbYW1YXNpczpuIuMlmVmPnVybjpvDphYxzp0YzpTQU1MOjGFzc2VzOlBh
OjAXh0Pj0OjUyWiI+PHNsOkF1dGhuQhxzYW129udGV4bWw6QXV0aG5Db250ZdENs
UmVnRlepBdXRobkNvbzYWmPjwvc2FtbDF1dGhuU1sOkHQ+PC93RhdGVtZW50Pjwv
c2F9zY25zZtbDpBcW1scDpSZXNwb3NlcnRpb24+PCT4=
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <BASE64 decoded saml message:<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:enc="http://www.w3.org/2001/04/xmlenc#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:x500="urn:oasis:names:tc:SAML:2.0:profiles:attribute:X500" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Destination="https://weblogic_server_01/saml2/sp/acs/post" ID="id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-" InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" IssueInstant="2017-11-21T9:25:55Z" Version="2.0"><saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer><dsig:Signature><dsig:SignedInfo><dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><dsig:Reference URI="#id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-"><dsig:Transforms><dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></dsig:Transforms><dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><dsig:DigestValue>YGtUZvsfo3z51AsBo7UDhbd6Ts=</dsig:DigestValue></dsig:Reference></dsig:SignedInfo><dsig:SignatureValue>al8sJwbqzjh1qgM3Sj0QrX1aZjwyI...JB6l4jmj91BdQrYQ7VxFzvNLczZ2brJSdLLig==</dsig:SignatureValue><dsig:KeyInfo><dsig:X509Data><dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate></dsig:X509Data></dsig:KeyInfo></dsig:Signature><samlp:Status><samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/></samlp:Status><saml:Assertion ID="id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi" IssueInstant="2017-11-21T9:25:55Z" Version="2.0"><saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer><dsig:Signature><dsig:SignedInfo><dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><dsig:Reference URI="#id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi"><dsig:Transforms><dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></dsig:Transforms><dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><dsig:DigestValue>7+jZtq8SpY3BKVaFjIFeEJm51cA=</dsig:DigestValue></dsig:Reference></dsig:SignedInfo><dsig:SignatureValue>GIlXt4B4rVFoDJRxidpZO73gXB68Dd+mcpoV9DKrjBBjLRz...zGTDcEYY2MG8FgtarZhVQGc4zxkkSg8GRT6Wng3NEuTUuA==</dsig:SignatureValue><dsig:KeyInfo><dsig:X509Data><dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate></dsig:X509Data></dsig:KeyInfo></dsig:Signature><saml:Subject><saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">PATOU_MORGAN</saml:NameID><saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"><saml:SubjectConfirmationData InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" NotOnOrAfter="2017-11-21T9:30:55Z" Recipient="https://weblogic_server_01/saml2/sp/acs/post"/></saml:SubjectConfirmation></saml:Subject><saml:Conditions NotBefore="2017-11-21T9:25:55Z" 
NotOnOrAfter="2017-11-21T9:30:55Z"><saml:AudienceRestriction><saml:Audience>SAML2_Entity_ID_01</saml:Audience></saml:AudienceRestriction></saml:Conditions><saml:AuthnStatement AuthnInstant="2017-11-21T9:25:55Z" SessionIndex="id-oX9wXdpGmt9GQlVffvY4hEIRULEd25nrxDzE8D7w" SessionNotOnOrAfter="2017-11-21T9:40:55Z"><saml:AuthnContext><saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef></saml:AuthnContext></saml:AuthnStatement></saml:Assertion></samlp:Response>>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <<samlp:Response> is signed.>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionServiceImpl.assertIdentity(SAML2.Assertion.DOM)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionTokenServiceImpl.assertIdentity(SAML2.Assertion.DOM)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2IdentityAsserterProvider: start assert SAML2 token>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2IdentityAsserterProvider: SAML2IdentityAsserter: tokenType is 'SAML2.Assertion.DOM'>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion signature>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: The assertion is signed.>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion signature>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion attributes>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion attributes>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion issuer>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion issuer>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion conditions>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion conditions>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion subject>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion subject>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2NameMapperCache.getNameMapper: Found name mapper in the cache>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert.processAttributes - processAttrs: false, processGrpAttrs: true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <DefaultSAML2NameMapperImpl: mapName: Mapped name: qualifier: null, name: PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAMLIACallbackHandler: SAMLIACallbackHandler(true, PATOU_MORGAN, null)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionCallbackServiceImpl.assertIdentity>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionCallbackServiceImpl.assertIdentity>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAMLIACallbackHandler: callback[0]: NameCallback: setName(PATOU_MORGAN)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionCallbackServiceImpl.assertIdentity returning PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityCacheServiceImpl.getCachedIdentity(idm:_def_Idmusr:PATOU_MORGAN)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityCacheServiceImpl.getCachedIdentity(idm:_def_Idmusr:PATOU_MORGAN) returning null>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionCallbackServiceImpl.assertIdentity did not find a cached identity.>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.CallbackHandlerWrapper.constructor>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASIdentityAssertionConfigurationServiceImpl.getJAASIdentityAssertionConfigurationName()>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <weblogic.security.service.internal.WLSJAASLoginServiceImpl$ServiceImpl.authenticate>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASLoginServiceImpl.login ClassLoader=sun.misc.Launcher$AppClassLoader@7d4991ad>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASLoginServiceImpl.login ThreadContext ClassLoader Original=com.bea.common.security.utils.SAML2ClassLoader@625ecb90>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASIdentityAssertionConfigurationServiceImpl.getAppConfigurationEntry(com.bea.common.security.internal.service.JAASIdentityAssertionConfigurationServiceImpl0)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASIdentityAssertionConfigurationServiceImpl.getAppConfigurationEntry>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.legacy.service.JAASIdentityAssertionProviderImpl$V1Wrapper.getAssertionModuleConfiguration>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASIdentityAssertionConfigurationServiceImpl$JAASProviderImpl.getProviderAppConfigurationEntry returning LoginModuleClassName=weblogic.security.providers.authentication.LDAPAtnLoginModuleImpl, ControlFlag=LoginModuleControlFlag: sufficient>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.legacy.service.JAASIdentityAssertionProviderImpl$V1Wrapper.getClassLoader>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.wrap LoginModuleClassName=weblogic.security.providers.authentication.LDAPAtnLoginModuleImpl>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.wrap ClassLoader=java.net.URLClassLoader@108c7fe5>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.wrap ControlFlag=LoginModuleControlFlag: sufficient>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.legacy.service.JAASIdentityAssertionProviderImpl$V1Wrapper.getAssertionModuleConfiguration>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASIdentityAssertionConfigurationServiceImpl$JAASProviderImpl.getProviderAppConfigurationEntry returning LoginModuleClassName=weblogic.security.providers.authentication.LDAPAtnLoginModuleImpl, ControlFlag=LoginModuleControlFlag: required>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.legacy.service.JAASIdentityAssertionProviderImpl$V1Wrapper.getClassLoader>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.wrap LoginModuleClassName=weblogic.security.providers.authentication.LDAPAtnLoginModuleImpl>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.wrap ClassLoader=java.net.URLClassLoader@108c7fe5>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.wrap ControlFlag=LoginModuleControlFlag: required>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASLoginServiceImpl.login created LoginContext>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASLoginServiceImpl.login ThreadContext ClassLoader Current=com.bea.common.security.utils.SAML2ClassLoader@625ecb90>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.initialize LoginModuleClassName=weblogic.security.providers.authentication.LDAPAtnLoginModuleImpl>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.initialize ClassLoader=java.net.URLClassLoader@108c7fe5>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.initialize created delegate login module>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <LDAP ATN LoginModule initialized>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.initialize delegated>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.login>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <LDAP Atn Login>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.CallbackHandlerWrapper.handle>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.CallbackHandlerWrapper.handle callbcacks[0] will be delegated>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.CallbackHandlerWrapper.handle callbcacks[0] will use NameCallback to retrieve name>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.CallbackHandlerWrapper.handle will delegate all callbacks>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAMLIACallbackHandler: callback[0]: NameCallback: setName(PATOU_MORGAN)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.CallbackHandlerWrapper.handle delegated callbacks>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.CallbackHandlerWrapper.handle got username from callbacks[0], UserName=PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <LDAP Atn Login username: PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <getUserDNName? user:PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <getDNForUser search("ou=people,ou=internal,dc=company,dc=com", "(&(uid=PATOU_MORGAN)(objectclass=companyperson))", base DN & below)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <getConnection return conn:LDAPConnection {ldaps://ldap.company.com:3636 ldapVersion:3 bindDN:"ou=bind_user,ou=applications,ou=internal,dc=company,dc=com"}>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <ConnSetupMgr><Connecting to host=ldap.company.com, ssl port=3636>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <ConnSetupMgr><Successfully connected to host=ldap.company.com, ssl port=3636>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <Retrieved guid:c6e6f021-2b29d761-8284c0aa-df200d0f>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <DN for user PATOU_MORGAN: uid=PATOU_MORGAN,ou=people,ou=internal,dc=company,dc=com>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <found user from ldap, user:PATOU_MORGAN, userDN=uid=PATOU_MORGAN,ou=people,ou=internal,dc=company,dc=com>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <Retrieved username from LDAP :PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <returnConnection conn:LDAPConnection {ldaps://ldap.company.com:3636 ldapVersion:3 bindDN:"ou=bind_user,ou=applications,ou=internal,dc=company,dc=com"}>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <LDAP Atn Asserted Identity for PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <List groups that member: PATOU_MORGAN belongs to>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <DN for user PATOU_MORGAN: uid=PATOU_MORGAN,ou=people,ou=internal,dc=company,dc=com>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <Retrieved dn:uid=PATOU_MORGAN,ou=people,ou=internal,dc=company,dc=com for user:PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <search("ou=bind_user,ou=applications,ou=internal,dc=company,dc=com", "(&(uniquemember=uid=PATOU_MORGAN,ou=people,ou=internal,dc=company,dc=com)(objectclass=groupofuniquenames))", base DN & below)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <getConnection return conn:LDAPConnection {ldaps://ldap.company.com:3636 ldapVersion:3 bindDN:"ou=bind_user,ou=applications,ou=internal,dc=company,dc=com"}>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <Result has more elements: false>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <search("ou=bind_user,ou=applications,ou=internal,dc=company,dc=com", "(objectclass=groupofURLs)", base DN & below)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <returnConnection conn:LDAPConnection {ldaps://ldap.company.com:3636 ldapVersion:3 bindDN:"ou=bind_user,ou=applications,ou=internal,dc=company,dc=com"}>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <DN for user PATOU_MORGAN: uid=PATOU_MORGAN,ou=people,ou=internal,dc=company,dc=com>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <Retrieved dn:uid=PATOU_MORGAN,ou=people,ou=internal,dc=company,dc=com for user:PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <login succeeded for username PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.login delegated, returning true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.commit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <LDAP Atn Commit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <LDAP Atn Principals Added>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.commit delegated, returning true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASLoginServiceImpl.login logged in>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASLoginServiceImpl.login subject=Subject:
        Principal: PATOU_MORGAN
        Private Credential: PATOU_MORGAN
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <weblogic.security.service.internal.WLSIdentityServiceImpl.getIdentityFromSubject Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.sign(Principals)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.sign(Principal) Principal=PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.sign(Principal) PrincipalClassName=weblogic.security.principal.WLSUserImpl>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.sign(Principal) trying PrincipalValidator for interface weblogic.security.principal.WLSPrincipal>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.sign(Principal) PrincipalValidator handles this PrincipalClass>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <Generated signature and signed WLS principal PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.sign(Principal) PrincipalValidator signed the principal>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.sign(Principal) All required PrincipalValidators signed this PrincipalClass, returning true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASLoginServiceImpl.login identity=Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <weblogic.security.service.internal.WLSJAASLoginServiceImpl$ServiceImpl.authenticate authenticate succeeded for user PATOU_MORGAN, Identity=Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <weblogic.security.service.internal.WLSJAASLoginServiceImpl$ServiceImpl.authenticate login succeeded and PATOU_MORGAN was not previously locked out>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityCacheServiceImpl.cachedIdentity(Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
),cacheKey= idm:_def_Idmusr:PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <Using redirect URL from request cache: 'https://d2.company.com:443/D2/X3_Portal.jsp'>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <Redirecting to URL: https://d2.company.com:443/D2/X3_Portal.jsp>

 

At this point, the SAML2 exchange is successful and the user has been found in the LDAP. So the only things remaining are the validation of the user on the WebLogic side and the access to the application (D2 in this case). That’s where the logs of the two WLS started to diverge.

On the WebLogic Server 1 (no issue), we can see WebLogic validating the user and finally granting access to the application:

<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityCacheServiceImpl.cachedIdentity(Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
),cacheKey= idm:_def_Idmusr:PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <Using redirect URL from request cache: 'https://d2.company.com:443/D2/X3_Portal.jsp'>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <Redirecting to URL: https://d2.company.com:443/D2/X3_Portal.jsp>

<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <PrincipalAuthenticator.validateIdentity>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <PrincipalAuthenticator.validateIdentity will use common security service>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principals)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principal) Principal=PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principal) PrincipalClassName=weblogic.security.principal.WLSUserImpl>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principal) trying PrincipalValidator for interface weblogic.security.principal.WLSPrincipal>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principal) PrincipalValidator handles this PrincipalClass>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <Validate WLS principal PATOU_MORGAN returns true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principal) PrincipalValidator said the principal is valid>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principal) One or more PrincipalValidators handled this PrincipalClass, returning true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principals) validated all principals>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <AuthorizationManager will use common security for ATZ>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <weblogic.security.service.WLSAuthorizationServiceWrapper.isAccessAllowed>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Identity=Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Roles=[ "Anonymous" "consumer" ]>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Resource=type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Direction=ONCE>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): input arguments:>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> < Subject: 1
        Principal = weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> < Roles:Anonymous, consumer>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> < Resource: type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> < Direction: ONCE>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> < Context Handler: >
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <Accessed Subject: Id=urn:oasis:names:tc:xacml:2.0:subject:role, Value=[Anonymous,consumer]>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <Evaluate urn:oasis:names:tc:xacml:1.0:function:string-at-least-one-member-of([consumer,consumer],[Anonymous,consumer]) -> true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <primary-rule evaluates to Permit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <urn:bea:xacml:2.0:entitlement:resource:type@E@Furl@G@M@Oapplication@ED2@M@OcontextPath@E@UD2@M@Ouri@E@U@K@M@OhttpMethod@EGET, 1.0 evaluates to Permit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): returning PERMIT>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed AccessDecision returned PERMIT>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AuthorizationServiceImpl.isAccessAllowed returning adjudicated: true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <AuthorizationManager will use common security for ATZ>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <weblogic.security.service.WLSAuthorizationServiceWrapper.isAccessAllowed>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Identity=Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Roles=[ "Anonymous" "consumer" ]>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Resource=type=<url>, application=D2, contextPath=/D2, uri=/x3_portal/x3_portal.nocache.js, httpMethod=GET>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Direction=ONCE>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): input arguments:>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Subject: 1
        Principal = weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Roles:Anonymous, consumer>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Resource: type=<url>, application=D2, contextPath=/D2, uri=/x3_portal/x3_portal.nocache.js, httpMethod=GET>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Direction: ONCE>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Context Handler: >
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <Accessed Subject: Id=urn:oasis:names:tc:xacml:2.0:subject:role, Value=[Anonymous,consumer]>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <Evaluate urn:oasis:names:tc:xacml:1.0:function:string-at-least-one-member-of([consumer,consumer],[Anonymous,consumer]) -> true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <primary-rule evaluates to Permit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <urn:bea:xacml:2.0:entitlement:resource:type@E@Furl@G@M@Oapplication@ED2@M@OcontextPath@E@UD2@M@Ouri@E@U@K@M@OhttpMethod@EGET, 1.0 evaluates to Permit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): returning PERMIT>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed AccessDecision returned PERMIT>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AuthorizationServiceImpl.isAccessAllowed returning adjudicated: true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <AuthorizationManager will use common security for ATZ>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <weblogic.security.service.WLSAuthorizationServiceWrapper.isAccessAllowed>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Identity=Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Roles=[ "Anonymous" "consumer" ]>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Resource=type=<url>, application=D2, contextPath=/D2, uri=/x3_portal/79E87564DA9D026D7ADA9326E414D4FC.cache.js, httpMethod=GET>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Direction=ONCE>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): input arguments:>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Subject: 1
        Principal = weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Roles:Anonymous, consumer>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Resource: type=<url>, application=D2, contextPath=/D2, uri=/x3_portal/79E87564DA9D026D7ADA9326E414D4FC.cache.js, httpMethod=GET>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Direction: ONCE>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Context Handler: >
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <Accessed Subject: Id=urn:oasis:names:tc:xacml:2.0:subject:role, Value=[Anonymous,consumer]>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <Evaluate urn:oasis:names:tc:xacml:1.0:function:string-at-least-one-member-of([consumer,consumer],[Anonymous,consumer]) -> true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <primary-rule evaluates to Permit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <urn:bea:xacml:2.0:entitlement:resource:type@E@Furl@G@M@Oapplication@ED2@M@OcontextPath@E@UD2@M@Ouri@E@U@K@M@OhttpMethod@EGET, 1.0 evaluates to Permit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): returning PERMIT>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed AccessDecision returned PERMIT>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AuthorizationServiceImpl.isAccessAllowed returning adjudicated: true>

 

On the other hand, the WebLogic Server 2 (with the issue) didn't even start to validate the user; it just started the full SAML2 process again:

<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityCacheServiceImpl.cachedIdentity(Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
),cacheKey=idm:_def_Idmusr:PATOU_MORGAN>
<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <Using redirect URL from request cache: 'https://d2.company.com:443/D2/X3_Portal.jsp'>
<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <Redirecting to URL: https://d2.company.com:443/D2/X3_Portal.jsp>

<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <SAML2Filter: Processing request on URI '/D2/X3_Portal.jsp'>
<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is '/D2/X3_Portal.jsp'>
<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is not a service URI>
<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): returning service type 'SPinitiator'>
<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <SP initiating authn request: processing>
<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <SP initiating authn request: partner id is null>
<Nov 21, 2017 10:34:54 AM UTC> <Debug> <SecuritySAML2Service> <SP initiating authn request: use partner binding HTTP/POST>
<Nov 21, 2017 10:34:54 AM UTC> <Debug> <SecuritySAML2Service> <post from template url: null>
<Nov 21, 2017 10:34:54 AM UTC> <Debug> <SecuritySAML2Service> <SAML2Servlet: Processing request on URI '/saml2/sp/acs/post'>
<Nov 21, 2017 10:34:54 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is '/saml2/sp/acs/post'>
<Nov 21, 2017 10:34:54 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): service URI is '/sp/acs/post'>
<Nov 21, 2017 10:34:54 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): returning service type 'ACS'>
<Nov 21, 2017 10:34:54 AM UTC> <Debug> <SecuritySAML2Service> <Assertion consumer service: processing>

 

This WebLogic Server 2 is acting as if it were a completely new request and not an existing one… That was exactly my reaction when I saw this log, so I checked the session identifier!

Whenever you configure an application to support SAML2 SSO, the session identifier (cookie name) must be the default one. If you set something else, you will get this infinite loop of authentication because WebLogic will consider a successful authentication as a non-authenticated one. This is described in the Oracle documentation.

Solving this issue is therefore pretty simple… Either comment out the session identifier or set it back to the default value (below I'm just resetting it to the default value):

[weblogic@weblogic_server_01 ~]$ cd $APPS_HOME
[weblogic@weblogic_server_01 apps]$ jar -xvf D2.war WEB-INF/weblogic.xml
 inflated: WEB-INF/weblogic.xml
[weblogic@weblogic_server_01 apps]$
[weblogic@weblogic_server_01 apps]$ sed -i 's,<cookie-name>[^<]*<,<cookie-name>JSESSIONID<,' WEB-INF/weblogic.xml
[weblogic@weblogic_server_01 apps]$
[weblogic@weblogic_server_01 apps]$ grep -A3 -B2 cookie-name WEB-INF/weblogic.xml

  <session-descriptor>
    <cookie-name>JSESSIONID</cookie-name>
    <cookie-http-only>false</cookie-http-only>
  </session-descriptor>

[weblogic@weblogic_server_01 apps]$
[weblogic@weblogic_server_01 apps]$ jar -uvf D2.war WEB-INF/weblogic.xml
adding: WEB-INF/weblogic.xml(in = 901) (out= 432)(deflated 52%)
[weblogic@weblogic_server_01 apps]$

 

Then you obviously need to redeploy your application and it will be working again. :) A quick sketch of the redeployment command follows.
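
For illustration, here is a minimal sketch of a redeployment with weblogic.Deployer; the setWLSEnv.sh location, the admin URL, the credentials and the port are assumptions to adapt to your environment:

[weblogic@weblogic_server_01 ~]$ # Load the WebLogic environment to get weblogic.Deployer in the CLASSPATH
[weblogic@weblogic_server_01 ~]$ source $MW_HOME/wlserver/server/bin/setWLSEnv.sh
[weblogic@weblogic_server_01 ~]$ # Redeploy the D2 application with the updated D2.war
[weblogic@weblogic_server_01 ~]$ java weblogic.Deployer -adminurl t3://weblogic_server_01:7001 \
      -username weblogic -password '<password>' \
      -redeploy -name D2 -source $APPS_HOME/D2.war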

 

 

Cet article WebLogic – SSO/Atn/Atz – Infinite loop est apparu en premier sur Blog dbi services.

Server process name in Postgres and Oracle

Fri, 2018-02-09 16:01

Every database analysis should start with system load analysis. If the host is in CPU starvation, then looking at other statistics can be pointless. With ‘top’ on Linux, or equivalent such as process explorer on Windows, you see the process (and threads). If the name of the process is meaningful, you already have a clue about the active sessions. Postgres goes further by showing the operation (which SQL command), the state (running or waiting), and the identification of the client.

Postgres

By default ‘top’ displays the program name (like ‘comm’ in /proc or in ‘ps’ format), which will be ‘postgres’ for all PostgreSQL processes. But you can also display the command line with ‘c’ in interactive mode, or directly starting with ‘top -c’, which is the same as the /proc/$pid/cmdline or ‘cmd’ or ‘args’ in ‘ps’ format.


top -c
 
Tasks: 263 total, 13 running, 250 sleeping, 0 stopped, 0 zombie
%Cpu(s): 24.4 us, 5.0 sy, 0.0 ni, 68.5 id, 0.9 wa, 0.0 hi, 1.2 si, 0.0 st
KiB Mem : 4044424 total, 558000 free, 2731380 used, 755044 buff/cache
KiB Swap: 421884 total, 418904 free, 2980 used. 2107088 avail Mem
 
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20347 postgres 20 0 394760 11660 8696 S 7.6 0.3 0:00.49 postgres: demo demo 192.168.56.125(37664) DELETE
20365 postgres 20 0 393816 11448 8736 S 6.9 0.3 0:00.37 postgres: demo demo 192.168.56.125(37669) idle
20346 postgres 20 0 393800 11440 8736 S 6.6 0.3 0:00.37 postgres: demo demo 192.168.56.125(37663) UPDATE
20356 postgres 20 0 396056 12480 8736 S 6.6 0.3 0:00.42 postgres: demo demo 192.168.56.125(37667) INSERT
20357 postgres 20 0 393768 11396 8736 S 6.6 0.3 0:00.40 postgres: demo demo 192.168.56.125(37668) DELETE waiting
20366 postgres 20 0 394728 11652 8736 S 6.6 0.3 0:00.35 postgres: demo demo 192.168.56.125(37670) UPDATE
20387 postgres 20 0 394088 11420 8720 S 6.6 0.3 0:00.41 postgres: demo demo 192.168.56.125(37676) UPDATE
20336 postgres 20 0 395032 12436 8736 S 6.3 0.3 0:00.37 postgres: demo demo 192.168.56.125(37661) UPDATE
20320 postgres 20 0 395032 12468 8736 R 5.9 0.3 0:00.33 postgres: demo demo 192.168.56.125(37658) DROP TABLE
20348 postgres 20 0 395016 12360 8736 R 5.9 0.3 0:00.33 postgres: demo demo 192.168.56.125(37665) VACUUM
20371 postgres 20 0 396008 12708 8736 R 5.9 0.3 0:00.40 postgres: demo demo 192.168.56.125(37673) INSERT
20321 postgres 20 0 396040 12516 8736 D 5.6 0.3 0:00.31 postgres: demo demo 192.168.56.125(37659) INSERT
20333 postgres 20 0 395016 11920 8700 R 5.6 0.3 0:00.36 postgres: demo demo 192.168.56.125(37660) UPDATE
20368 postgres 20 0 393768 11396 8736 R 5.6 0.3 0:00.43 postgres: demo demo 192.168.56.125(37671) UPDATE
20372 postgres 20 0 393768 11396 8736 R 5.6 0.3 0:00.36 postgres: demo demo 192.168.56.125(37674) INSERT
20340 postgres 20 0 394728 11700 8736 S 5.3 0.3 0:00.40 postgres: demo demo 192.168.56.125(37662) idle
20355 postgres 20 0 394120 11628 8672 S 5.3 0.3 0:00.32 postgres: demo demo 192.168.56.125(37666) DELETE waiting
20389 postgres 20 0 395016 12196 8724 R 5.3 0.3 0:00.37 postgres: demo demo 192.168.56.125(37677) UPDATE
20370 postgres 20 0 393768 11392 8736 S 4.6 0.3 0:00.34 postgres: demo demo 192.168.56.125(37672) DELETE
20376 postgres 20 0 393816 11436 8736 S 4.6 0.3 0:00.37 postgres: demo demo 192.168.56.125(37675) DELETE waiting
20243 postgres 20 0 392364 5124 3696 S 1.0 0.1 0:00.06 postgres: wal writer process

This is very useful information. Postgres changes the process title when it executes a statement. In this example:

  • ‘postgres:’ is the name of the process
  • ‘demo demo’ are the database name and the user name
  • ‘192.168.56.125(37664)’ are the IP address and port of the client.
  • DELETE, UPDATE… are the commands. They are more or less the command names displayed in the feedback after command completion
  • ‘idle’ is for sessions not currently running a statement
  • ‘waiting’ is added when the session is waiting on a blocker session (enqueued on a lock for example)
  • ‘wal writer process’ is a background process

This is especially useful because we get, in the same sample, the Postgres session state (idle, waiting or running an operation) together with the Linux process state (S when sleeping, R when runnable or running, D when in I/O,…).
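
If you prefer a one-shot, scriptable view rather than the interactive 'top', the same process titles can be read with 'ps' (a minimal sketch; the 'postgres' OS user is an assumption matching a default installation):

# pid, Linux process state and full command line (the process title set by Postgres) for all backends
ps -u postgres -o pid,stat,args | grep '[p]ostgres:'
# only the sessions currently waiting on a blocker
ps -u postgres -o pid,stat,args | grep '[p]ostgres:.*waiting'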

Oracle

With Oracle, you can have ASH to sample session state, but being able to see it at OS level would be great. It would also be a safeguard if we need to kill a process.

But the Oracle process names do not change while the session is running. They are set at connection time.

The background processes mention the Oracle process name and the Instance name:

[oracle@VM122 ~]$ ps -u oracle -o pid,comm,cmd,args | head
 
PID COMMAND CMD COMMAND
1873 ora_pmon_cdb2 ora_pmon_CDB2 ora_pmon_CDB2
1875 ora_clmn_cdb2 ora_clmn_CDB2 ora_clmn_CDB2
1877 ora_psp0_cdb2 ora_psp0_CDB2 ora_psp0_CDB2
1880 ora_vktm_cdb2 ora_vktm_CDB2 ora_vktm_CDB2
1884 ora_gen0_cdb2 ora_gen0_CDB2 ora_gen0_CDB2

The foreground processes mention the instance and the connection type, LOCAL=YES for bequeath, LOCAL=NO for remote via listener.


[oracle@VM122 ~]$ ps -u oracle -o pid,comm,cmd,args | grep -E "[ ]oracle_|[ ]PID"
 
PID COMMAND CMD COMMAND
21429 oracle_21429_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21431 oracle_21431_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21451 oracle_21451_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21517 oracle_21517_cd oracleCDB1 (LOCAL=NO) oracleCDB1 (LOCAL=NO)

You need to join V$PROCESS with V$SESSION on (V$PROCESS.ADDR=V$SESSION.PADDR) to find the state, operation and client information.
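
Here is a minimal sketch of that join from sqlplus (the column selection is only an example, adapt it to what you need):

sqlplus -s / as sysdba <<'SQL'
set pagesize 100 linesize 200
column username format a15
column machine format a20
column event format a30
-- map the OS process id (SPID) to the session state, wait event and client machine
select p.spid, s.sid, s.serial#, s.username, s.machine, s.status, s.event
from v$process p join v$session s on p.addr = s.paddr
where s.type = 'USER';
SQL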

For the fun, you can change the program name (ARGV0) and arguments (ARGS).

The local connections can change the name in the BEQueath connection string:


sqlplus -s system/oracle@"(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=$ORACLE_HOME/bin/oracle)(ARGV0=postgres)(ARGS='(DESCRIPTION=(LOCAL=MAYBE)(ADDRESS=(PROTOCOL=BEQ)))')(ENVS='ORACLE_HOME=$ORACLE_HOME,ORACLE_SID=CDB1'))" <<< "host ps -u oracle -o pid,comm,cmd,args | grep -E '[ ]oracle_|[ ]PID'"
 
PID COMMAND CMD COMMAND
21155 oracle_21155_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21176 oracle_21176_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21429 oracle_21429_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21431 oracle_21431_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21451 oracle_21451_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21517 oracle_21517_cd oracleCDB1 (LOCAL=NO) oracleCDB1 (LOCAL=NO)
22593 oracle_22593_cd postgres (DESCRIPTION=(LOCA postgres (DESCRIPTION=(LOCAL=MAYBE)(ADDRESS=(PROTOCOL=BEQ)))

The remote connection can have the name changed from the static registration, by adding an ARGV0 value on the listener side:


LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
SID_LIST_LISTENER=(SID_LIST=
(SID_DESC=(GLOBAL_DBNAME=MYAPP)(ARGV0=myapp)(SID_NAME=CDB1)(ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1))
(SID_DESC=(GLOBAL_DBNAME=CDB1_DGMGRL)(SID_NAME=CDB1)(ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1))
(SID_DESC=(GLOBAL_DBNAME=CDB2_DGMGRL)(SID_NAME=CDB2)(ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1))
)

After reloading the listener with this (ARGV0=myapp) to identify connections coming through the MYAPP service, the new name shows up:

[oracle@VM122 ~]$ sqlplus -s system/oracle@//localhost/MYAPP <<< "host ps -u oracle -o pid,comm,cmd,args | grep -E '[ ]oracle_|[ ]PID'"
PID COMMAND CMD COMMAND
21155 oracle_21155_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21176 oracle_21176_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21429 oracle_21429_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21431 oracle_21431_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21451 oracle_21451_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21517 oracle_21517_cd oracleCDB1 (LOCAL=NO) oracleCDB1 (LOCAL=NO)
24261 oracle_24261_cd myapp (LOCAL=NO) myapp (LOCAL=NO)

However, I would not recommend changing the default. This can be very confusing for people expecting ora_xxxx_SID and oracleSID process names.

 

Cet article Server process name in Postgres and Oracle est apparu en premier sur Blog dbi services.

12cR2 PDB archive

Wed, 2018-02-07 13:50

In 12.1 we had the possibility to unplug a PDB by closing it and generating a .xml file that describes the PDB metadata required to plug the datafiles into another CDB.
In 12.2 we got the additional possibility to have this .xml file zipped together with the datafiles, for an easy transport. But that was not working for ASM files.
The latest Release Update, Oct 17, includes the patch that fixes this issue and is a good occasion to show the PDB archive feature.

Here is Oracle 12.2.0.1 with the Oct 2017 Release Update (https://updates.oracle.com/download/26737266.html) applied, which needs the latest OPatch (https://updates.oracle.com/download/6880880.html).
With a PDB1 pluggable database:

[oracle@VM106 ~]$ rman target /
 
Recovery Manager: Release 12.2.0.1.0 - Production on Wed Oct 18 16:16:41 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
connected to target database: CDB1 (DBID=920040307)
 
RMAN> report schema;
 
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 810 SYSTEM YES /acfs/oradata/CDB1/datafile/o1_mf_system_dmrbv534_.dbf
3 540 SYSAUX NO /acfs/oradata/CDB1/datafile/o1_mf_sysaux_dmrbxvds_.dbf
4 70 UNDOTBS1 YES /acfs/oradata/CDB1/datafile/o1_mf_undotbs1_dmrbz8mm_.dbf
5 250 PDB$SEED:SYSTEM NO /acfs/oradata/CDB1/datafile/o1_mf_system_dmrc52tm_.dbf
6 330 PDB$SEED:SYSAUX NO /acfs/oradata/CDB1/datafile/o1_mf_sysaux_dmrc52t9_.dbf
7 5 USERS NO /acfs/oradata/CDB1/datafile/o1_mf_users_dygrpz79_.dbf
8 100 PDB$SEED:UNDOTBS1 NO /acfs/oradata/CDB1/datafile/o1_mf_undotbs1_dmrc52x0_.dbf
21 250 PDB1:SYSTEM YES /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_system_dygrqqq2_.dbf
22 350 PDB1:SYSAUX NO /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_sysaux_dygrqqs8_.dbf
23 100 PDB1:UNDOTBS1 YES +ASM1/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/DATAFILE/undotbs1.257.957719779
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 33 TEMP 32767 /acfs/oradata/CDB1/datafile/o1_mf_temp_dmrc4wlh_.tmp
2 64 PDB$SEED:TEMP 32767 /acfs/oradata/CDB1/pdbseed/temp012017-06-10_19-17-38-745-PM.dbf
3 64 PDB1:TEMP 32767 /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_temp_dygrqqsh_.dbf

I have moved one file to ASM to show that it is now handled correctly.

The pluggable database must be closed before we can unplug it. Nothing changes in the unplug syntax except the extension of the file: if the file mentioned is a .pdb instead of a .xml, then it is a PDB archive:

RMAN> alter pluggable database PDB1 unplug into '/var/tmp/PDB1.pdb';
 
RMAN> alter pluggable database PDB1 close;
 
Statement processed
 
RMAN> alter pluggable database PDB1 unplug into '/var/tmp/PDB1.pdb'
2> ;
 
Statement processed
 
RMAN> exit

Actually it is just a zip file with the datafiles, without the full path:

[oracle@VM106 ~]$ unzip -t /var/tmp/PDB1.pdb
Archive: /var/tmp/PDB1.pdb
testing: o1_mf_system_dygrqqq2_.dbf OK
testing: o1_mf_sysaux_dygrqqs8_.dbf OK
testing: undotbs1.257.957719779 OK
testing: /var/tmp/PDB1.xml OK
No errors detected in compressed data of /var/tmp/PDB1.pdb.

You can see that the ASM file is not different from the others.

I drop the pluggable database:

RMAN> drop pluggable database PDB1 including datafiles;
 
using target database control file instead of recovery catalog
Statement processed
 

And plug back the PDB1, as PDB2, using the zip file:

RMAN> create pluggable database PDB2 using '/var/tmp/PDB1.pdb';
 
Statement processed
 
RMAN> report schema;
 
Report of database schema for database with db_unique_name CDB1
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 810 SYSTEM YES /acfs/oradata/CDB1/datafile/o1_mf_system_dmrbv534_.dbf
3 540 SYSAUX NO /acfs/oradata/CDB1/datafile/o1_mf_sysaux_dmrbxvds_.dbf
4 70 UNDOTBS1 YES /acfs/oradata/CDB1/datafile/o1_mf_undotbs1_dmrbz8mm_.dbf
5 250 PDB$SEED:SYSTEM NO /acfs/oradata/CDB1/datafile/o1_mf_system_dmrc52tm_.dbf
6 330 PDB$SEED:SYSAUX NO /acfs/oradata/CDB1/datafile/o1_mf_sysaux_dmrc52t9_.dbf
7 5 USERS NO /acfs/oradata/CDB1/datafile/o1_mf_users_dygrpz79_.dbf
8 100 PDB$SEED:UNDOTBS1 NO /acfs/oradata/CDB1/datafile/o1_mf_undotbs1_dmrc52x0_.dbf
24 250 PDB2:SYSTEM NO /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_system_dygwt1lh_.dbf
25 350 PDB2:SYSAUX NO /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_sysaux_dygwt1lm_.dbf
26 100 PDB2:UNDOTBS1 NO /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_undotbs1_dygwt1lo_.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 33 TEMP 32767 /acfs/oradata/CDB1/datafile/o1_mf_temp_dmrc4wlh_.tmp
2 64 PDB$SEED:TEMP 32767 /acfs/oradata/CDB1/pdbseed/temp012017-06-10_19-17-38-745-PM.dbf
4 64 PDB2:TEMP 32767 /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_temp_dygwt1lp_.dbf

Here all files are there, created in the db_create_file_dest.

File name convert

When you create a pluggable database and you are not in OMF, you need to add a FILE_NAME_CONVERT to convert from the source file names to the destination file names. When the files are referenced by a .xml file, the .xml file references the path to the files as they were in the source database. If you move them, you can update the .xml file, or you can use SOURCE_FILE_NAME_CONVERT to mention the new place. With a .pdb archive, the .xml inside contains the original path, but this is not what will be used: the path of the .pdb itself is used, as if the files were unzipped at that place. A minimal sketch of both options follows.
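
Here is a minimal sketch of both plug-in variants without OMF; the paths and PDB names are only illustrative:

sqlplus -s / as sysdba <<'SQL'
-- plug from the .xml manifest when the datafiles have been moved to a new location
create pluggable database PDB3 using '/var/tmp/PDB1.xml'
  source_file_name_convert=('/acfs/oradata/CDB1','/u02/oradata/CDB1')
  nocopy tempfile reuse;
-- plug from the .pdb archive and copy the extracted datafiles to a new path
-- (assumption: the source path to convert is the directory holding the .pdb archive)
create pluggable database PDB4 using '/var/tmp/PDB1.pdb'
  file_name_convert=('/var/tmp','/u02/oradata/CDB1/PDB4');
SQL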

If you use Oracle Managed Files, you don't care about the file names and then you don't need all those file name converts.

 

Cet article 12cR2 PDB archive est apparu en premier sur Blog dbi services.

JAN18: Database 11gR2 PSU, 12cR1 ProactiveBP, 12cR2 RU

Tue, 2018-02-06 15:32

If you want to apply the latest patches (and you should), you can go to the My Oracle Support Recommended Patch Advisor. But sometimes it is not up-to-date. For example, for 12.1.0.2 only the PSU is displayed and not the Proactive Bundle Patch, which is highly recommended. And across releases, the names have changed and can be misleading: PSU for 11.2.0.4 (no Proactive Bundle Patch except for Engineered Systems); 12.1.0.2 can have SPU, PSU, or Proactive BP, but the latest is highly recommended, especially now that it includes the adaptive statistics patches; 12.2.0.1 introduces the new RUR and RU, the latter being the recommended one.

To get things clear, there’s also the Master Note for Database Proactive Patch Program, with reference to one note per release. This blog post is my master note to link directly to the recommended updates for Oracle Database.

Master Note for Database Proactive Patch Program (Doc ID 756671.1)
https://support.oracle.com/epmos/faces/DocContentDisplay?id=756671.1

11.2.0.4 – PSU

Database 11.2.0.4 Proactive Patch Information (Doc ID 2285559.1)
https://support.oracle.com/epmos/faces/DocContentDisplay?id=2285559.1
Paragraph -> 11.2.0.4 Database Patch Set Update

Latest as of Q1 2018 -> 16-Jan-2018 11.2.0.4.180116 (Jan 2018) Database Patch Set Update (DB PSU) 26925576 (Windows: 26925576)

12.1.0.2  – ProactiveBP

Database 12.1.0.2 Proactive Patch Information (Doc ID 2285558.1)
https://support.oracle.com/epmos/faces/DocContentDisplay?id=2285558.1
Paragraph -> 12.1.0.2 Database Proactive Bundle Patches (DBBP)

Latest as of Q1 2018 -> 16-Jan-2018 12.1.0.2.180116 Database Proactive Bundle Patch (Jan 2018) 12.1.0.2.180116 27010930

12.2.0.1 – RU

Database 12.2.0.1 Proactive Patch Information (Doc ID 2285557.1)
https://support.oracle.com/epmos/faces/DocContentDisplay?id=2285557.1
Paragraph -> 12.2.0.1 Database Release Update (Update)

Latest as of Q1 2018 -> 16-Jan-2018 12.2.0.1.180116 (Jan 2018) Database Release Update 27105253 (Windows: 12.2.0.1.180116 WIN DB BP 27162931)
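
As a reminder, here is a minimal sketch of the apply flow for such a patch; the zip names follow the usual p<patch>_<release>_<platform>.zip convention and are assumptions, so always follow the README shipped with the patch:

# update OPatch first (patch 6880880), then apply the update with the instances and listener stopped
unzip -o -d $ORACLE_HOME p6880880_122010_Linux-x86-64.zip
unzip p27105253_122010_Linux-x86-64.zip && cd 27105253
$ORACLE_HOME/OPatch/opatch apply
# restart the database, then run datapatch to apply the SQL changes and check the result
$ORACLE_HOME/OPatch/datapatch -verbose
$ORACLE_HOME/OPatch/opatch lspatches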
 

Don’t forget SQL Developer

In the 12c Oracle Home SQL Developer is installed, but you should update it to the latest version.
Download the following from http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
– The SQL Developer zip for ‘Other Platforms’, currently 17.4
– The SQLcl zip for ‘All Platforms’, currently 17.4

On the server, remove, or rename the original directory:
mv $ORACLE_HOME/sqldeveloper $ORACLE_HOME/sqldeveloper.orig

Unzip what you have downloaded:
unzip -d $ORACLE_HOME/ sqldeveloper-*-no-jre.zip
unzip -d $ORACLE_HOME/sqldeveloper sqlcl-*-no-jre.zip

I suggest having a login.sql that sets the beautiful ansiconsole format for SQLcl:

echo "set sqlformat ansiconsole" > $ORACLE_HOME/sqldeveloper/sqlcl/login.sql

On 12.2 you can run SQLcl just with 'sql' (and the same arguments as sqlplus: / as sysdba or /nolog,…) because this is what is defined in $ORACLE_HOME/bin.
However, it sets the current working directory and I prefer to keep the current one, as it is probably where I want to run scripts from.

Then I add the following aliases in .bashrc

alias sqlcl='JAVA_HOME=$ORACLE_HOME/jdk SQLPATH=$ORACLE_HOME/sqldeveloper/sqlcl bash $ORACLE_HOME/sqldeveloper/sqlcl/bin/sql'
alias sqldev='$ORACLE_HOME/sqldeveloper/sqldeveloper.sh'

When running SQL Developer for the first time you can automatically create a '/ as sysdba' connection (but remember that connecting like this is not a good practice) and a connection for each user declared in the database: right-click on Connections and choose Create Local Connections.

 

Cet article JAN18: Database 11gR2 PSU, 12cR1 ProactiveBP, 12cR2 RU est apparu en premier sur Blog dbi services.

Multitenant, PDB, ‘save state’, services and standby databases

Mon, 2018-02-05 15:12

Creating – and using – your own services has always been the recommendation. You can connect to a database without a service name, through the instance SID, but this is not what you should do. Each database registers its db_unique_name as a service, and you can use it to connect, but it is always better to create your own application service(s). In multitenant, each PDB registers its name as a service, but the recommendation is still there: create your own services, and connect with your services.
I'll show in this blog post what happens if you use the PDB name as a service and the standby database registers to the same listener as the primary database. Of course, you can work around the non-unique service names by registering to different listeners. But this just hides the problem. The main reason to use services is to be independent from physical attributes, so being forced to assign a specific TCP/IP port is not better than using an instance SID.

I have the primary (CDB1) and standby (CDB2) databases registered to the default local listener:

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 03-FEB-2018 23:11:23
 
Copyright (c) 1991, 2016, Oracle. All rights reserved.
 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 02-FEB-2018 09:32:30
Uptime 1 days 13 hr. 38 min. 52 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/VM122/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM122)(PORT=5501))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "627f7512a0452fd4e0537a38a8c055c0" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1_CFG" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB1_DGB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1_DGMGRL" has 1 instance(s).
Instance "CDB1", status UNKNOWN, has 1 handler(s) for this service...
Service "CDB2" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2XDB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2_DGB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2_DGMGRL" has 1 instance(s).
Instance "CDB2", status UNKNOWN, has 1 handler(s) for this service...
Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
The command completed successfully

Look at service 'pdb1', which is the name for my PDB. Connecting to //localhost:1521/PDB1 can connect you randomly to CDB1 (the primary database) or CDB2 (the standby database).

Here is an example, connecting several times to the PDB1 service:

[oracle@VM122 ~]$ for i in {1..5} ; do sqlplus -L -s sys/oracle@//localhost/pdb1 as sysdba <<< 'select name,open_mode,instance_name from v$instance , v$database;'; done
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1

I was connected at random to CDB1 or CDB2.

As an administrator, you know the instance names and you can connect to the one you want with: //localhost:1521/PDB1/CDB1 or //localhost:1521/PDB1/CDB2:

[oracle@VM122 ~]$ for i in {1..3} ; do sqlplus -L -s sys/oracle@//localhost/pdb1/CDB1 as sysdba <<< 'select name,open_mode,instance_name from v$instance , v$database;'; done
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
[oracle@VM122 ~]$ for i in {1..3} ; do sqlplus -L -s sys/oracle@//localhost/pdb1/CDB2 as sysdba <<< 'select name,open_mode,instance_name from v$instance , v$database;'; done
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2

Of course, this is not what you want. And we must not start or stop the default services. For the application, the best you can do is to create your own service. And if you want to be able to connect to the Active Data Guard standby, which is opened read-only, you can create a 'read-write' service and a 'read-only' service that you start depending on the role.

Create and Start a read-write service on the primary

This example supposes that you have only Oracle Database software installed. If you are in RAC, with the resources managed by Grid Infrastructure, or simply with Oracle Restart, creating a service is easy with srvctl, and you add it to a PDB with ‘-pdb’ and also with a role to start it automatically in the primary or in the standby. But without it, you use dbms_service:

SQL> connect /@CDB1 as sysdba
Connected.
 
SQL> alter session set container=pdb1;
Session altered.
 
SQL> exec dbms_service.create_service(service_name=>'pdb1_RW',network_name=>'pdb1_RW');
PL/SQL procedure successfully completed.
 
SQL> exec dbms_service.start_service(service_name=>'pdb1_RW');
PL/SQL procedure successfully completed.
 
SQL> alter session set container=cdb$root;
Session altered.

The service is created, stored in SERVICE$ visible with DBA_SERVICES:

SQL> select name,name_hash,network_name,creation_date,pdb from cdb_services order by con_id,service_id;
NAME NAME_HASH NETWORK_NAME CREATION_DATE PDB
---- --------- ------------ ------------- ---
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 PDB1
pdb1 1888881990 pdb1 11-JAN-18 PDB1

Save state

I have created and started the PDB1_RW service. However, if I restart the database, the service will not start automatically. How do you ensure that the PDB1 pluggable database starts automatically when you open the CDB? You ‘save state’ when it is opened. It is the same for the services you create. You need to ‘save state’ when they are opened.


SQL> alter pluggable database all save state;
Pluggable database ALL altered.

The information is stored in PDB_SVC_STATE$, and I’m not aware of a dictionary view on it:

SQL> select name,name_hash,network_name,creation_date,con_id from v$active_services order by con_id,service_id;
 
NAME NAME_HASH NETWORK_NAME CREATION_DATE CON_ID
---- --------- ------------ ------------- ------
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 4
pdb1 1888881990 pdb1 11-JAN-18 4
 
SQL> select * from containers(pdb_svc_state$);
 
INST_ID INST_NAME PDB_GUID PDB_UID SVC_HASH SPARE1 SPARE2 SPARE3 SPARE4 SPARE5 SPARE6 CON_ID
------- --------- -------- ------- -------- ------ ------ ------ ------ ------ ------ ------
1 CDB1 627F7512A0452FD4E0537A38A8C055C0 2872139986 3128030313 1

The name is not in this table, you have to join with v$services using(name_hash):

SQL> select name,name_hash,network_name,creation_date,con_id from v$active_services order by con_id,service_id;
 
NAME NAME_HASH NETWORK_NAME CREATION_DATE CON_ID
---- --------- ------------ ------------- ------
SYS$BACKGROUND 165959219 26-JAN-17 1
SYS$USERS 3427055676 26-JAN-17 1
CDB1_CFG 1053205690 CDB1_CFG 24-JAN-18 1
CDB1_DGB 184049617 CDB1_DGB 24-JAN-18 1
CDB1XDB 1202503288 CDB1XDB 11-JAN-18 1
CDB1 1837598021 CDB1 11-JAN-18 1
pdb1 1888881990 pdb1 11-JAN-18 4
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 4

So, in addition to storing the PDB state in PDBSTATE$, visible with dba_pdb_saved_states, the service state is also stored. Note that they are at different levels: PDBSTATE$ is a data link, stored on CDB$ROOT only (because the data must be read before opening the PDB), but PDB_SVC_STATE$ is a local table in the PDB, as the services can be started only when the PDB is opened.

This new service is immediately registered on CDB1:

Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

Create and Start a read-only service for the standby

If you try to do the same on the standby for a PDB1_RO service, you cannot because service information has to be stored in the dictionary:

SQL> exec dbms_service.create_service(service_name=>'pdb1_RO',network_name=>'pdb1_RO');
 
Error starting at line : 56 File @ /media/sf_share/122/blogs/pdb_svc_standby.sql
In command -
BEGIN dbms_service.create_service(service_name=>'pdb1_RO',network_name=>'pdb1_RO'); END;
Error report -
ORA-16000: database or pluggable database open for read-only access

So, the read-only service has to be created on the primary:

SQL> connect /@CDB1 as sysdba
Connected.
SQL> alter session set container=pdb1;
Session altered.
 
SQL> exec dbms_service.create_service(service_name=>'pdb1_RO',network_name=>'pdb1_RO');
 
SQL> select name,name_hash,network_name,creation_date,pdb from cdb_services order by con_id,service_id;
NAME NAME_HASH NETWORK_NAME CREATION_DATE PDB
---- --------- ------------ ------------- ---
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 PDB1
pdb1_RO 1562179816 pdb1_RO 03-FEB-18 PDB1
pdb1 1888881990 pdb1 11-JAN-18 PDB1

The SERVICE$ dictionary table is replicated to the standby, so I can start it on the standby:

SQL> connect /@CDB2 as sysdba
Connected.
SQL> alter session set container=pdb1;
Session altered.
 
SQL> exec dbms_service.start_service(service_name=>'pdb1_RO');
PL/SQL procedure successfully completed.

Here is what is registered to the listener:

Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RO" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

Now, the PDB_RO connects to the standby and PDB_RW to the primary. Perfect.

No ‘save state’ on the standby

At this point, you would like to have the PDB_RO started when PDB1 is opened on the standby, but ‘save state’ is impossible on a read-only database:

SQL> alter session set container=cdb$root;
Session altered.
 
SQL> alter pluggable database all save state;
 
Error starting at line : 84 File @ /media/sf_share/122/blogs/pdb_svc_standby.sql
In command -
alter pluggable database all save state
Error report -
ORA-16000: database or pluggable database open for read-only access

You can’t manage the state (open the PDB, start the services) in the standby database.

The primary ‘save state’ is replicated in standby

For the moment, everything is ok with my services:

Service "pdb1_RO" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...

If I restart the primary CDB1, everything is ok again because I saved the state of the PDB and the service. But what happens when the standby CDB2 restarts?


SQL> connect /@CDB2 as sysdba
Connected.
SQL> startup force;
...
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 MOUNTED

The PDB is not opened: the ‘saved state’ for PDB is not read in the standby.
However, when I open the PDB, it seems that the ‘saved state’ for service is applied, and this one is replicated from the primary:

SQL> alter pluggable database PDB1 open;
Pluggable database altered.
SQL> host lsnrctl status
...
Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
The command completed successfully

My PDB1_RW is registered for both, connections will connect at random to the primary or the standby, and then the transactions will fail half of the time. It will be the same in case of switchover. This is not correct.

Save state instances=()

What I would like is the possibility to save state for a specific DB_UNIQUE_NAME, like with pluggable 'spfile' parameters, but this is not possible. What is possible is to mention an instance, but you can use it only for the primary instance where you save the state (or you get ORA-65110: Invalid instance name specified), and anyway this will not be correct after a switchover.
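
For reference, here is a minimal sketch of that syntax, run on the primary with the primary instance name, which is exactly the limitation described above:

sqlplus -s /@CDB1 as sysdba <<'SQL'
-- persist the open state of PDB1 (and its started services) for instance CDB1 only
alter pluggable database PDB1 save state instances=('CDB1');
SQL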

So what?

Be careful with services: ensure that the services used by the application are registered only for the correct instance, and be sure that this persists when the instances are restarted. For this, you must link a service name to a database role, which cannot be done correctly with 'save state'. You can use startup triggers, or better, Grid Infrastructure service resources, as sketched below.
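
With Grid Infrastructure or Oracle Restart, a minimal sketch of role-based services would look like this (run on each site, with -db set to the local DB_UNIQUE_NAME, here CDB1 on the primary):

# read-write service, started only when the database runs the PRIMARY role
srvctl add service -db CDB1 -service pdb1_RW -pdb PDB1 -role PRIMARY
# read-only service, started only on the (Active Data Guard) physical standby role
srvctl add service -db CDB1 -service pdb1_RO -pdb PDB1 -role PHYSICAL_STANDBY
srvctl start service -db CDB1 -service pdb1_RW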

Do not connect to the default service with the PDB name: you cannot remove it and cannot stop it, so you may have the same name for different instances in a Data Guard configuration. You can register the standby instances to different local listeners to avoid the confusion, but you may still register to the same SCAN listener.

Create your own services, start them depending on the database role, and do not use ‘save state’ in a physical standby configuration.

 

Cet article Multitenant, PDB, ‘save state’, services and standby databases est apparu en premier sur Blog dbi services.

Introducing SQL Server on Docker Swarm orchestration

Fri, 2018-02-02 07:48

SQL Server 2017 is available on multiple platforms: Windows, Linux and Docker. The latter provides containerization features with no lengthy setup and no special prerequisites before running your SQL Server databases, which is probably the key to its adoption by developers.

It was my case as a developer of our DMK management kit, which provides our customers with a SQL Server database maintenance solution on all editions from SQL Server 2005 to SQL Server 2017 (including Linux). In the context of our DMK, we have to develop for different versions of SQL Server, including cumulative updates and service packs that may provide new database maintenance features, and it can be challenging when we often have to release a new development or fix and to perform unit tests on different SQL Server versions or platforms. At this stage you may certainly claim that virtualization already addresses those requirements, and you're right, because we have used a lot of provisioned virtual machines on Hyper-V so far.

The obvious question is: why switch to Docker container technologies? Well, for different reasons. Firstly, sharing my SQL Server containers and test databases with my team is pretty straightforward: we may use a Docker registry and the docker push / pull commands. Then, provisioning a new SQL Server instance is quicker with containers than with virtual machines and generally leads to a lower CPU / memory / disk footprint on my laptop. I talked a little bit about it at the last SQL Pass meetup in Geneva, by the way.

But in this blog post I would like to take another step with Docker and go beyond the development area. As DBAs we may have to deal with container management in production in the near future (unless it is already done for you :) ) and we need to get a more global picture of the Docker ecosystem. I remember a discussion with an attendee during my SQL Server Docker and Microservices session at TugaIT 2017 Lisbon who told me Docker and containers are only for developers and not suitable for production. At the time of this discussion, I had to admit he was not entirely wrong. Firstly, let's say that, as with virtualization before, container-based application adoption will probably take time. This is at least what I can conclude from my experience and from what I notice around me, even if DevOps and microservices architectures seem to contribute to improving the situation. This is probably because production environments introduce other challenges and key factors than those we may have in the development area, such as service availability, patching or upgrading, monitoring and alerting, performance… At the same time, Docker and, more generally speaking, container technologies are constantly maturing, as well as the tools to manage such infrastructures, and in the production area, as you know, we prefer to be safe and there is no room for unstable and non-established products that may compromise the core business.

So, we may wonder what the part of the DBA is in all of this. Well, regardless of the underlying infrastructure, we still have the same responsibilities: providing configuration, ensuring databases are backed up, managing and maintaining data including performance, preventing security threats and finally guaranteeing data availability. In fact, looking back at the last decade, we already faced exactly the same situation with the emergence of the virtualization paradigm, when we had to install our SQL Server instances in such infrastructures. I still remember some reluctance and heated discussions from DBAs.

From my side, I always keep high availability and performance in mind because these are the main concerns of my customers when it comes to production environments. So, I was curious to dig further into container technologies in this area, starting with how to deal with the different orchestration tools. The main leaders on the market are probably Docker Swarm, Kubernetes, Mesosphere, CoreOS fleet (recently acquired by RedHat), RedHat OpenShift, Amazon ECS and Azure Container Services.

In this first blog, I decided to write about the Docker Swarm orchestrator, probably because I was already comfortable with native Docker commands and Docker Swarm offers an additional set of docker commands. When going into the details, the interesting point is that I discovered plenty of other concepts which led me to realize I was reaching another world… a production world. This time it is not just about pulling / pushing containers, for sure :) Before you keep reading this blog post, it is important to state that it is not intended to teach how to implement Docker Swarm; the Docker web site is well designed for that. My intention is just to highlight some important key features I think DBAs should be aware of before starting to manage container infrastructures.

Firstly, implementing a Docker Swarm requires being familiar with some key architecture concepts. Fortunately, most of them are easy to understand if you are already comfortable with SQL Server cluster-based architectures, including SQL FCIs or availability groups.

Let’s have a look at the main components:

  • Nodes: A node is just an instance of the Docker engine participating in the swarm
  • Manager nodes: They are primarily designed to dispatch units of work (called tasks, tied to containers) to worker nodes according to your service definition
  • Worker nodes: They receive and execute the tasks dispatched by the manager nodes

Here was my first implementation of my Docker lab infrastructure:

[Figure: swarm architecture lab]

It was composed of 3 Docker nodes, one of them acting as both worker and manager. Obviously, this is not an ideal scenario to implement in production because this architecture lacks a fault-tolerant design. But anyway, that was enough to start with my basic container labs.

I use Docker Server version 17.12.0 CE and, as shown below, swarm mode is enabled.

$sudo docker info
Containers: 13
 Running: 4
 Paused: 0
 Stopped: 9
Images: 37
Server Version: 17.12.0-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: 7a9v6uv6jur5cf8x3bi6ggpiz
 Is Manager: true
 ClusterID: 63pdntf40nzav9barmsnk91hb
 Managers: 1
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
…

The IP address of the manager is reachable from the host operating system.

$ sudo docker node ls
ID                            HOSTNAME                      STATUS              AVAILABILITY        MANAGER STATUS
s6pu7x3htoxjqvg9vilkoffj1     sqllinux2.dbi-services.test   Ready               Active
ptcay2nq4uprqb8732u8k451a     sqllinux3.dbi-services.test   Ready               Active
7a9v6uv6jur5cf8x3bi6ggpiz *   sqllinux.dbi-services.test    Ready               Active              Leader

Here is the IP address of the manager (the sqllinux node) used during the swarm initialization with the --advertise-addr parameter. To put it simply, this is the address used by the other nodes to connect to this node during the joining phase.
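
For reference, a minimal sketch of how such a swarm can be initialized and joined (the advertise address is the manager IP shown above; the join token is deliberately truncated):

# on the first manager node (sqllinux)
$ sudo docker swarm init --advertise-addr 192.168.40.20
# display the full join command (including the token) that worker nodes must run
$ sudo docker swarm join-token worker
# on each worker node (sqllinux2, sqllinux3)
$ sudo docker swarm join --token SWMTKN-1-... 192.168.40.20:2377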

There are plenty of options to configure and change the behavior of the swarm. Some of them concern resource and placement management, and as a DBA it makes sense to know how such infrastructures behave with your database environments regarding these settings. Maybe in a next blog post.

$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:00:13:4b brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.20/24 brd 192.168.40.255 scope global eth0

 

I also opened the required ports on each node (a sketch of the corresponding firewalld commands follows the output below):

  • TCP port 2376 for secure docker client communication (Docker machine)
  • TCP port 2377 for cluster management communications
  • TCP and UDP port 7946 for communication among nodes
  • UDP port 4789 for overlay network traffic (container ingress networking)
$ sudo firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: dhcpv6-client ssh nfs mountd rpc-bind
  ports: 2376/tcp 2377/tcp 7946/tcp 7946/udp 4789/udp 80/tcp 1433/tcp 8080/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
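
For reference, a minimal sketch of how these ports can be opened with firewalld on each node (the default public zone is an assumption):

$ sudo firewall-cmd --permanent --add-port=2376/tcp --add-port=2377/tcp --add-port=7946/tcp --add-port=7946/udp --add-port=4789/udp
$ sudo firewall-cmd --reload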

 

After initializing the swarm, the next step consists in deploying containers on it. The point here is that swarm mode changes the game a little bit, because you deal with service or stack (collection of services) deployment rather than plain container deployment. It doesn't mean you cannot deploy containers directly, but you won't benefit from the swarm features in this case.

Let's continue with the stack / service model on Docker. Because a picture is often worth a thousand words, here is an overview of the relationship between stacks, services and tasks.

[Figure: swarm stack / service / task relationship]

You may find the definition of each component in the Docker documentation, but let's make a brief summary of the important things: services are the primary location of interactions with the swarm and include the definition of the tasks to execute on the manager or worker nodes. Then, tasks carry Docker containers and the commands to run inside them. Maybe the most important thing to keep in mind here: /!\ Once a task is assigned to a node, it cannot move to another node. It can only run on the assigned node or fail /!\. This is a different behavior from virtualized architectures, where you may go through different features to manually move one virtual machine from one host to another (vMotion, DRS for VMware…). Finally, a stack is just a collection of services (1-N) that make up an application on a specific environment.

From a user perspective, you may deploy a service directly or go through a stack definition if you have to deal with an application composed of several services and the relationships between them. For the latter, you can probably guess that this model is pretty suitable for microservices architectures. These concepts may seem obscure, but with practice they become clearer. A short sketch of the stack commands follows.
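
For the stack model, here is a minimal sketch of the deployment commands; the compose file name and the stack name app1 are assumptions:

# deploy (or update) all the services described in a compose file as a single stack
$ sudo docker stack deploy --compose-file docker-compose.yml app1
# list the services and the tasks belonging to that stack
$ sudo docker stack services app1
$ sudo docker stack ps app1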

But just before introducing the service deployment models, one aspect we did not cover so far is the storage layout. Docker has long been considered as designed for stateless applications, and storage persistence as a weakness in the database world. Furthermore, from a container perspective, it is always recommended to isolate the data from the container to retain the benefits of adopting containerization: data management should be separate from the container lifecycle. Docker has managed to overcome this issue by providing different ways to persist data outside containers since version 1.9, including the capability to share volumes between containers on the same host (aka data volumes). But thinking about production environments, customers will certainly deploy Docker clusters, rendering these options useless and the containers non-portable. In my context, I want to be able to share data containers between different hosts, and the good news is that Docker provides distributed filesystem capabilities. I picked NFS for convenience, but other solutions exist, like Ceph or GlusterFS for instance. A direct mapping between my host directory and the directory inside my SQL Server container, over a distributed storage based on an NFS share, seems to work well in my case. From a SQL Server perspective this is not an issue as long as you deploy the service with a maximum of one replica at a time, to avoid data corruption. My updated architecture is as follows:

[Figure: swarm architecture lab with NFS]

Here is the configuration from one node concerning the mount point based on the NFS share. Database files will be stored in /u01/sql2 in my case.

$ cat /etc/fstab
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=eccbc689-88c6-4e5a-ad91-6b47b60557f6 /boot                   xfs     defaults        0 0
/dev/mapper/cl-swap     swap                    swap    defaults        0 0
/dev/sdb1       /u99    xfs     defaults        0 0
192.168.40.14:/u01      /u01    nfs     nfsvers=4.2,timeo=14,intr       0 0

$ sudo showmount -e 192.168.40.14
Export list for 192.168.40.14:
/u01 192.168.40.22,192.168.40.21,192.168.40.20

 

My storage is in place, so let's continue with network considerations. As with virtualization products, you have different options to configure the network:

  • Bridge: Allows internal communication between containers on the same host
  • docker_gwbridge : Network created when the swarm is installed and it is dedicated for the communication between nodes
  • Ingress: All nodes by default participate to ingress routing mesh. I will probably introduce this feature in my next blog post but let’s say that the routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node
  • Overlay: The manager node automatically extends the overlay network to nodes that run service tasks to allow communication between host containers. Not available if you deploy containers directly

Let’s add that Docker includes an embedded DNS server which provides DNS resolution among containers connected to the same user defined network. Pretty useful feature when you deploy applications with dependent services!

So, I created 2 isolated networks: one dedicated to the back-end communication (backend-server) and the other one to the front-end communication (frontend-server).

$sudo docker network create \
  --driver overlay \
  --subnet 172.20.0.0/16 \
  --gateway 172.20.0.1 \
  backend-server

$sudo docker network create \
  --driver overlay \
  --subnet 172.19.0.0/16 \
  --gateway 172.19.0.1 \
  frontend-server

$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
oab2ck3lsj2o        backend-server      overlay             swarm
1372e2d1c92f        bridge              bridge              local
aeb179876301        docker_gwbridge     bridge              local
qmlsfg6vjdsb        frontend-server     overlay             swarm
8f834d49873e        host                host                local
2dz9wi4npgjw        ingress             overlay             swarm

We are finally able to deploy our first service based on the SQL Server on Linux image:

$ sudo docker service create \
   --name "sql2" \
   --mount 'type=bind,src=/u01/sql2,dst=/var/opt/mssql' \
   --replicas 1 \
   --network backend-server \
   --env "ACCEPT_EULA=Y" --env "MSSQL_SA_PASSWORD=P@$$w0rd1" \
   --publish published=1500,target=1433 \
   microsoft/mssql-server-linux:2017-latest

Important settings are:

  • --name "sql2" = Name of the service to deploy
  • --replicas 1 = We tell the manager to deploy only one replica of the SQL Server container at a time on the docker workers
  • --mount 'type=bind,src=…,dst=…' = Here we define the data persistence strategy. It maps the /u01/sql2 directory on the host to the /var/opt/mssql directory within the container. If we shut down or remove the container, the data is persisted. If the container moves to another docker node, the data is still available thanks to the distributed storage over NFS.
  • --network backend-server = We attach the sql2 service to the backend-server user-defined network
  • microsoft/mssql-server-linux:2017-latest = The base image used in this case (the latest image available for SQL Server 2017 on Linux)

After deploying the sql2 service, let's have a look at the services installed, from the manager. We get interesting output including the service name, the replication mode and the listening port as well. You may notice the replication mode is set to replicated. In this service model, the swarm distributes a specific number of replicas among the nodes. In my context I capped the maximum number of tasks to 1 as discussed previously.

$ sudo docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                      PORTS
ih4e2acqm2dm        registry            replicated          1/1                 registry:2                                 *:5000->5000/tcp
bqx1u9lc8dni        sql2                replicated          1/1                 microsoft/mssql-server-linux:2017-latest   *:1433->1433/tcp

Maybe you have noticed the additional registry service. Depending on the context, when you deploy services the corresponding base images must be available from all the nodes where tasks may be scheduled. You may use images stored in a public Docker registry or a private one if you deploy internal images.

Let’s dig further by looking at the tasks associated with the sql2 service. We get other useful information such as the desired state, the current state and the node where the task is running.

$ sudo docker service ps sql2
ID                  NAME                IMAGE                                      NODE                          DESIRED STATE       CURRENT STATE            ERROR               PORTS
zybtgztgavsd        sql2.1              microsoft/mssql-server-linux:2017-latest   sqllinux3.dbi-services.test   Running             Running 11 minutes ago
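
To illustrate the data persistence point made earlier, a minimal sketch of a failover test could look like the following; the node name is taken from the output above, and the exact rescheduling behaviour will depend on your swarm:

# temporarily drain the worker that currently runs the sql2 task
$ sudo docker node update --availability drain sqllinux3.dbi-services.test

# the swarm should reschedule the task on another worker; the NFS-backed /u01/sql2 data remains reachable
$ sudo docker service ps sql2

# put the node back into the pool once the test is done
$ sudo docker node update --availability active sqllinux3.dbi-services.test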

In the previous example I deployed a service that concerned only my SQL Server instance. For some scenarios this is OK, but generally speaking a back-end service doesn't come alone in the container world; it is often part of a more global application service architecture. This is where stack deployment comes into play.

As stated in the Docker documentation, stacks are a convenient way to automatically deploy multiple services that are linked to each other, without needing to define each one separately. Stack files include environment variables, deployment tags, the number of services and their dependencies, the number of tasks to deploy, related environment-specific configuration, etc. If you have already dealt with docker-compose files to deploy containers and their dependencies, you will be comfortable with stack files: a stack file is nothing more than a docker-compose file adjusted for stack deployments. I used one to deploy the app-voting application here. This application is composed of 5 services including Python, NodeJS, a Java worker, a Redis cache and of course SQL Server.
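
Just to give an idea, here is a minimal sketch of what the SQL Server section of such a stack file could look like (compose file format version 3; the service name, password placeholder and paths are assumptions based on the previous example):

version: '3.1'
services:
  db:
    image: microsoft/mssql-server-linux:2017-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "<StrongPassword>"
    volumes:
      - /u01/sql2:/var/opt/mssql    # bind mount backed by the NFS share
    ports:
      - "1433:1433"
    networks:
      - backend-server
    deploy:
      replicas: 1                   # never more than one SQL Server task at a time
networks:
  backend-server:
    external: true

Such a file would then be deployed with a command like docker stack deploy -c docker-stack.yml myapp.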

Here is the result in my lab environment. My SQL Server instance is just one of the services that compose the stack related to my application. Once again you may use docker commands to get a picture of the stack hierarchy.

$ sudo docker stack ls
NAME                SERVICES
myapp               5

$ sudo docker stack services myapp
ID                  NAME                MODE                REPLICAS            IMAGE                                               PORTS
fo2g822czblu        myapp_worker        replicated          1/1                 127.0.0.1:5000/examplevotingapp_worker:latest
o4wj3gn5sqd2        myapp_result-app    replicated          1/1                 127.0.0.1:5000/examplevotingapp_result-app:latest   *:8081->80/tcp
q13e25byovdr        myapp_db            replicated          1/1                 microsoft/mssql-server-linux:2017-latest            *:1433->1433/tcp
rugcve5o6i7g        myapp_redis         replicated          1/1                 redis:alpine                                        *:30000->6379/tcp
tybmrowq258s        myapp_voting-app    replicated          1/1                 127.0.0.1:5000/examplevotingapp_voting-app:latest   *:8080->80/tcp

 

So, let’s finish with the following question: what is the role of DBAs in such infrastructure as code? I don’t pretend to hold the truth, but here is my opinion:

From an installation and configuration perspective, database images (from official editors) are often released without any standard or best practices applied. I believe very strongly (and it seems I’m aligned with the dbi services philosophy on this point) that the responsibility of the DBA team here is to prepare, build and provide well-configured images as well as the related deployment files (at least the database service section(s)) for their context – simple containers as well as more complex environments with built-in high availability for instance.

In addition, from a management perspective, containers will not really change the game concerning the DBA daily job. They are still responsible for the core data business regardless of the underlying infrastructure the database systems are running on.

In this blog post we just scratched the surface of the Docker Swarm principles, and over time I will try to cover other important aspects DBAs may have to be aware of with such infrastructures.

See you!

Cet article Introducing SQL Server on Docker Swarm orchestration est apparu en premier sur Blog dbi services.

Migrate Windows VM with more than 4 Disks from VMware to OVM

Fri, 2018-02-02 06:24

Suppose you have an OVA image created on VMware, the VM contains more than 4 disks,
and you have to migrate this machine from VMware to OVM.

As a first step, you import the OVA into OVM in the usual way:

Bildschirmfoto 2018-02-02 um 09.43.12

You can see that the appliance was imported successfully; we have 5 disks:

Bildschirmfoto 2018-02-02 um 09.45.25

Now you create your VM from the imported appliance:

Bildschirmfoto 2018-02-02 um 09.47.34

So far so good, let's have a look at our newly created VM:

Bildschirmfoto 2018-02-02 um 09.51.22

All seems good, but if you edit the machine (we want to add a network and give a boot order for the system) you will be surprised:

Bildschirmfoto 2018-02-02 um 10.11.35

Oops, we lost a disk. What happened? You didn't make a mistake, it's a restriction of OVM:
http://www.oracle.com/us/technologies/virtualization/ovm3-supported-config-max-459314.pdf states that a VM can have a maximum of 4 IDE disks, and if you import your Windows VM it is considered a Xen HVM domain type, so you can only attach 4 disks to the VM.

And now, how can we solve the problem? Let's first check if we can boot the system:

Bildschirmfoto 2018-02-02 um 10.25.32

OK, the system is up, what next? We uninstall all the VMware utilities:

Bildschirmfoto 2018-02-02 um 10.35.40

For the next step we download the Oracle VM Server for x86 Windows PV Drivers – 3.4.2.0.0 for Microsoft Windows x64 (64-bit) from https://edelivery.oracle.com and install them on our Windows Box:

Bildschirmfoto 2018-02-02 um 11.04.28

After a system restart, all disks except the C: drive are gone:

Bildschirmfoto 2018-02-02 um 11.08.51

We shut down the Windows Box and put the VM into the Xen HVM PV Drivers domain type:

Bildschirmfoto 2018-02-02 um 12.07.39

After that we can add our lost disk without any problems:

Bildschirmfoto 2018-02-02 um 12.09.00

OK, let's restart the system and look what happens:

Bildschirmfoto 2018-02-02 um 12.16.48

OK, all disks are there; we can now bring them online:

Bildschirmfoto 2018-02-02 um 12.59.29

After a reboot we can see that our drives are used correctly:

ARC3: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 opened at log sequence 4743
Current log# 3 seq# 4743 mem# 0: I:\ORADATA\TESTIDM1\REDO03.LOG
Successful open of redo thread 1

Cet article Migrate Windows VM with more than 4 Disks from VMware to OVM est apparu en premier sur Blog dbi services.

Result Cache: when *not* to use it

Mon, 2018-01-29 15:39

I recently encountered a case where the result cache was used incorrectly, leading to high contention when the application encountered a peak of load. It was not a surprise when I saw that the function was called with an ‘ID’ as argument, which may have thousands of values in this system. I mentioned to the software vendor that the result cache must be used only for functions frequently called with the same arguments, not for random values, even if each value has 2 or 3 identical calls. And, to detail this, I looked at the Oracle Documentation to link to the part which explains when the result cache can be used and when it should be avoided.

But I’ve found nothing relevant. This is another(*) case where the Oracle Documentation is completely useless. Without explaining how a feature works, you completely fail to get this feature used correctly. Most people will not take the risk to use it, and a few will use it in the wrong place, before definitively blacklisting this feature.

(*) By another case, I’m thinking about Kamil Stawiarski presentation about Pragma UDF and the lack of useful documentation about it.

Oracle documentation

So this is what I’ve found in the Database Performance Tuning Guide about the Benefits of Using the Server Result Cache:

  1. The benefits of using the server result cache depend on the application
  2. OLAP applications can benefit significantly from its use.
  3. Good candidates for caching are queries that access a high number of rows but return a small number, such as those in a data warehouse.

So, this is vague (‘depends’, ‘can benefit’, ‘good candidates’) and doesn’t help to decide when it can be used.
The ‘access a high number of rows but return a small number’ is an indication of why cache hits can be beneficial. However, there is no mention of the most important things, which are:

  • The cached result is invalidated by any DML on the tables the result relies on.
  • A cache miss, when the result has been invalidated, is expensive
  • A cache miss, when the result is not in the result cache yet, is expensive
  • The ‘expensive’ here is a scalability issue: not detected in unit tests, but big contention when the load increases

Real things to know

The first thing to know is that the Result Cache memory is protected by a latch:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch            2          0          0          0          0
0000000060047870 Result Cache: SO Latch            0          0          0          0          0
0000000060047910 Result Cache: MB Latch            0          0          0          0          0

This latch has no children:

SQL> select * from v$latch_children where name like '%Result Cache%';
 
no rows selected

Only one latch to protect the whole result cache: concurrent sessions – even for different functions – have to serialize their access on the same latch.

This latch is acquired in exclusive mode when the session has to write to the result cache (cache miss, invalidation,…) or in shared mode – since 11gR2 – when reading only. This has been explained by Alex Fatkulin http://afatkulin.blogspot.ch/2012/05/result-cache-latch-in-11gr2-shared-mode.html.

This means that, whatever the Oracle Documentation says, the benefit of result cache comes only at cache hit: when the result of the function is already there, and has not been invalidated. If you call the same function with always the same parameter, frequently, and with no changes in the related tables, then we are in the good case.

But if there was a modification of one of the tables, even some rows that have nothing to do with the result, then you will have an overhead: exclusive latch get. And if you call the function with new values for the arguments, that’s also a cache miss which has to get this exclusive latch. And if you have multiple sessions experiencing a cache miss, then they will spin on CPU to get the exclusive latch. This can be disastrous with a large number of sessions. I have seen this kind of contention for hours with connection pools set to 100 sessions when the call to the function is frequent with different values.

To show it, I create a demo table (just to have a dependency) and a result_cache function:

SQL> create table DEMO as select rownum n from xmltable('1 to 1000');
Table created.
 
SQL> create or replace function F(n number) return number result_cache as begin for i in (select * from DEMO where DEMO.n=F.n) loop return i.n; end loop; end;
2 /
Function created.

I have just restarted the instance and my latch statistics are reset:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch            2          0          0          0          0
0000000060047870 Result Cache: SO Latch            0          0          0          0          0
0000000060047910 Result Cache: MB Latch            0          0          0          0          0

Result Cache Hit

This will call the function 1000 times, always with the same argument, and with no change in the table it relies on:
SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(1); end loop; end;
2 /
PL/SQL procedure successfully completed.

So, the first call is a cache miss and the next 999 calls are cache hits. This is the perfect case for Result Cache.

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch         1009          0          0          0          0
0000000060047870 Result Cache: SO Latch            1          0          0          0          0
0000000060047910 Result Cache: MB Latch            0          0          0          0          0

So, that’s about 1000 latch gets. With cache hits you get the latch once per execution, and this is a shared latch, so no contention here.
You want to check that it is a shared latch? Just set a breakpoint with gdb on the ksl_get_shared_latch function (up to 12.1, because 12.2 uses ksl_get_shared_latch_int) and print the arguments (as explained by Stefan Koehler and Frits Hoogland):

As my RC latch is at address 00000000600477D0, I set a breakpoint on ksl_get_shared_latch where the first argument is 0x600477d0 and display the other arguments:

break ksl_get_shared_latch
condition 1 $rdi == 0x600477d0
commands
silent
printf "ksl_get_shared_latch laddr:%x, willing:%d, where:%d, why:%d, mode:%d\n", $rdi, $rsi, $rdx, $rcx, $r8
c
end

Then one call with cache hit displays:

ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:8

Mode 8 is shared: many concurrent sessions can do the same without waiting. Shared is scalable: cache hits are scalable.

Cache miss – result not in cache

Here each call will have a different value for the argument, so that they are all cache misses (except the first one):

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

Now the ‘RC latch’ statistics have increased further:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch         6005          0          0          0          0
0000000060047870 Result Cache: SO Latch            1          0          0          0          0
0000000060047910 Result Cache: MB Latch            0          0          0          0          0

This is about 5000 additional latch gets, which means 5 per execution. And, because it writes, you can expect them to be exclusive.

Here is my gdb script output when I call the function with a value that is not already in cache:

ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:8
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5347, mode:16
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:16
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5374, mode:16

Mode 16 is exclusive. And we have 3 of them in addition to the shared one. You can imagine what happens when several sessions are running this: spin and wait, all sessions on the same resource.

Cache miss – result in cache but invalid

I run the same again, where all values are in cache now:

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

So this is only 1000 additional gets:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch         7005          0          0          0          0
0000000060047870 Result Cache: SO Latch            1          0          0          0          0
0000000060047910 Result Cache: MB Latch            0          0          0          0          0

The function depends on the DEMO table, and I do some modifications on it:

SQL> insert into DEMO values (0)
1 row created.
SQL> commit;
Commit complete.

This has invalidated all the previous results. A new run will have only cache misses:

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

And this is 5000 additional gets:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch        12007          0          0          0          0
0000000060047870 Result Cache: SO Latch            1          0          0          0          0
0000000060047910 Result Cache: MB Latch            0          0          0          0          0

So what?

The important thing to know is that each cache miss requires exclusive access to the Result Cache, multiple times. Those must be avoided. The Result Cache is good for a static set of results. It is not a short-term cache to work around an application design where the function is called two or three times with the same values. This is, unfortunately, not explained in the Oracle Documentation. But it becomes obvious when we look at the implementation, or when we load test it with multiple sessions. The consequence can be this kind of high contention during minutes or hours:

Top 5 Timed Events                                                       Avg %Total
~~~~~~~~~~~~~~~~~~                                                      wait   Call
Event                                          Waits      Time (s)      (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
latch free                                      858,094     1,598,387   1863   78.8
enq: RC - Result Cache: Contention              192,855       259,563   1346   12.8

Without either knowledge of the implementation or relevant load tests, the risk is that a developer stays on his good results in unit testing, and implements the Result Cache in each function. The consequence will be seen too late, in production, at peak load. If this happens to you, you can disable the result cache (DBMS_RESULT_CACHE.BYPASS(TRUE);) but the risk is a performance degradation in the ‘good cases’. Or you can recompile the procedures with RESULT_CACHE removed, but then you may bring a new contention on the library cache.
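
For reference, here is a minimal sketch of the kind of commands involved; DBMS_RESULT_CACHE and V$RESULT_CACHE_STATISTICS are standard, but measure the impact on your ‘good cases’ before doing this in production:

SQL> exec DBMS_RESULT_CACHE.FLUSH;
SQL> exec DBMS_RESULT_CACHE.BYPASS(TRUE);
SQL> select name,value from v$result_cache_statistics
  2  where name in ('Create Count Success','Find Count','Invalidation Count');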

 

Cet article Result Cache: when *not* to use it est apparu en premier sur Blog dbi services.

Testing Oracle SQL online

Sat, 2018-01-27 16:38

Want to test some DDL, a query, check an execution plan? You need only a browser. And you can copy-paste, or simply link, your test case in a forum, an e-mail, a tweet. Here is a small list (expecting to grow from your comments) of free online services which can run with an Oracle Database: SQL Fiddle, Rextester, db<>fiddle and Oracle Live SQL.

SQL Fiddle

SQL Fiddle lets you build a schema and run DDL on the following databases:

  • Oracle 11gR2
  • Microsoft SQL Server 2014
  • MySQL 5.6
  • Postgres 9.6 and 9.3
  • SQLite (WebSQL and SQL.js)

For an Oracle user, Oracle 11gR2 is not very useful as it is a version from 2010. But there's a simple reason for that: it is the latest free version – the Oracle XE Edition – and a free online service can run only free software. Now that Oracle plans to release an XE version every year, this should get better soon.

Example: http://sqlfiddle.com/#!4/42960/1/0

CaptureSQLfiddle

Rextester

Rextester is a service to compile code online, in a lot of languages, and it also supports the following databases:

  • Oracle 11gR2
  • Microsoft SQL Server 2014
  • MySQL 5.7
  • PostgreSQL 9.6

Example: http://rextester.com/QCYJF41984

Rextester also has an API where you can run a query and get a JSON answer:

$ curl -s --request POST --data 'LanguageChoice=35 Program=select * from dual' http://rextester.com/rundotnet/api
{"Warnings":null,"Errors":null,"Result":"\u003ctable class=\"sqloutput\"\u003e\u003ctbody\u003e\u003ctr\u003e\u003cth\u003e\u0026nbsp;\u0026nbsp;\u003c/th\u003e\r\n\u003cth\u003eDUMMY\u003c/th\u003e\r\n\u003c/tr\u003e\r\n\u003ctr\u003e\u003ctd\u003e1\u003c/td\u003e\r\n\u003ctd\u003eX\u003c/td\u003e\r\n\u003c/tr\u003e\r\n\u003c/tbody\u003e\u003c/table\u003e\r\n","Stats":"absolute service time: 1,37 sec","Files":null}

The answer has the result as an HTML table:

$ curl -s --request POST --data 'LanguageChoice=35 Program=select * from dual' http://rextester.com/rundotnet/api | jq -r .Result
<table class="sqloutput"><tbody><tr><th>&nbsp;&nbsp;</th>
<th>DUMMY</th>
</tr>
<tr><td>1</td>
<td>X</td>
</tr>
</tbody></table>

Here is my SELECT * FROM DUAL:

$ curl -s --request POST --data 'LanguageChoice=35 Program=select * from dual' http://rextester.com/rundotnet/api | jq -r .Result | lynx -dump -stdin
DUMMY
1 X

Capturerextester

db<>fiddle

db<>fiddle has a very nice interface, easy to link and easy to paste to StackOverflow (click on ‘markdown’)

  • Oracle 11gR2
  • SQL Server 2014, 2016, 2017, and even the 2017 Linux version.
  • MariaDB 10.2
  • SQLite 3.8
  • PostgreSQL 8.4, 9.4, 9.6 and 10

Example: http://dbfiddle.uk/?rdbms=oracle_11.2&fiddle=948a067dd17780ca65b01243751c2cb0

Capturedbfiddle

Oracle Live SQL

Finally, you can also run on the latest release of Oracle, with a service provided by Oracle itself: Live SQL.

  • Oracle 12cR2 (an early build from October 2016)

Example: https://livesql.oracle.com/apex/livesql/s/f6ydueahcslf66dlynagw9s3w

CaptureLiveSQL

 

Cet article Testing Oracle SQL online est apparu en premier sur Blog dbi services.

Explain Plan format

Wed, 2018-01-24 10:57

The DBMS_XPLAN format accepts a lot of options, which are not all documented. Here is a small recap of available information.

The minimum that is displayed is the Plan Line Id, the Operation, and the Object Name. You can add columns and/or sections with options, such as ‘rows’, optionally starting with a ‘+’ like ‘+rows’. Some options group several additional pieces of information, such as ‘typical’, which is also the default, or ‘basic’, ‘all’, ‘advanced’. You can choose one of them and remove some columns with ‘-‘, such as ‘typical -rows -bytes -cost -plan_hash -predicate -remote -parallel -partition -note’. Finally, from a cursor executed with plan statistics, you can show all execution statistics with ‘allstats’, and the last execution statistics with ‘allstats last’. Subsets of ‘allstats’ are ‘rowstats’, ‘memstats’, ‘iostats’, ‘buffstats’.
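
To put these options in context, here is a minimal usage sketch (the statements are just examples); the format string is simply passed to the DBMS_XPLAN table functions:

SQL> explain plan for select * from dual;
SQL> select * from table(dbms_xplan.display(format=>'basic +rows +cost'));

SQL> select /*+ gather_plan_statistics */ count(*) from dual;
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last +outline'));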

Of course, the column/section is displayed only if the information is present.

This blog post shows what is displayed by which option, as of 12cR2, and probably with some missing combinations.

+plan_hash, or BASIC


PLAN_TABLE_OUTPUT
-----------------
Plan hash value: 1338588353

Plan hash value: is displayed by ‘basic +plan_hash’ or ‘typical’ or ‘all’ or ‘advanced’

+rows +bytes +cost +partition +parallel, or TYPICAL


-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ/Ins |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 287 | 19516 | 5 (20)| 00:00:01 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (ORDER) | :TQ10002 | 287 | 19516 | 5 (20)| 00:00:01 | | | Q1,02 | P->S | QC (ORDER) |
| 3 | SORT ORDER BY | | 287 | 19516 | 5 (20)| 00:00:01 | | | Q1,02 | PCWP | |
| 4 | PX RECEIVE | | 287 | 19516 | 4 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 5 | PX SEND RANGE | :TQ10001 | 287 | 19516 | 4 (0)| 00:00:01 | | | Q1,01 | P->P | RANGE |
|* 6 | HASH JOIN | | 287 | 19516 | 4 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 7 | PX BLOCK ITERATOR | | 14 | 532 | 2 (0)| 00:00:01 | 1 | 1 | Q1,01 | PCWC | |
| 8 | TABLE ACCESS FULL | EMP | 14 | 532 | 2 (0)| 00:00:01 | 1 | 1 | Q1,01 | PCWP | |
| 9 | BUFFER SORT | | | | | | | | Q1,01 | PCWC | |
| 10 | PX RECEIVE | | 82 | 2460 | 2 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 11 | PX SEND BROADCAST| :TQ10000 | 82 | 2460 | 2 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 12 | REMOTE | DEPT | 82 | 2460 | 2 (0)| 00:00:01 | | | LOOPB~ | R->S | |
-----------------------------------------------------------------------------------------------------------------------------------

Rows or E-Rows: is displayed by ‘basic +rows’ or ‘typical’ or ‘all’ or ‘advanced’
Bytes or E-Bytes: is displayed by ‘basic +bytes’ or ‘typical’ or ‘all’ or ‘advanced’
Cost: is displayed by ‘basic +cost’ or ‘typical’ or ‘all’ or ‘advanced’
TmpSpc or E-Temp: is displayed by ‘basic +bytes’ or ‘typical’ or ‘all’ or ‘advanced’
Time or E-Time: is displayed by ‘typical’ or ‘all’ or ‘advanced’
Pstart/Pstop: is displayed by ‘basic +partition’ or ‘typical’ or ‘all’ or ‘advanced’
TQ/Ins, IN-OUT, PQ Distrib: is displayed by ‘basic +parallel’ or ‘typical’ or ‘all’ or ‘advanced’

The ‘A-‘ and ‘E-‘ prefixes are used when displaying execution statistics, to differentiate estimations from actual numbers.

+alias


Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
 
1 - SEL$58A6D7F6
8 - SEL$58A6D7F6 / EMP@SEL$1
12 - SEL$58A6D7F6 / DEPT@SEL$1

Query Block Name / Object Alias: is displayed by ‘basic +alias’ or ‘typical +alias’ or ‘all’ or ‘advanced’

+outline


Outline Data
-------------
 
/*+
BEGIN_OUTLINE_DATA
PQ_DISTRIBUTE(@"SEL$58A6D7F6" "DEPT"@"SEL$1" NONE BROADCAST)
USE_HASH(@"SEL$58A6D7F6" "DEPT"@"SEL$1")
LEADING(@"SEL$58A6D7F6" "EMP"@"SEL$1" "DEPT"@"SEL$1")
FULL(@"SEL$58A6D7F6" "DEPT"@"SEL$1")
FULL(@"SEL$58A6D7F6" "EMP"@"SEL$1")
OUTLINE(@"SEL$1")
OUTLINE(@"SEL$2")
MERGE(@"SEL$1" >"SEL$2")
OUTLINE_LEAF(@"SEL$58A6D7F6")
ALL_ROWS
DB_VERSION('12.2.0.1')
OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
*/

Outline Data: is displayed by ‘basic +outline’ or ‘typical +outline’ or ‘all +outline’ or ‘advanced’

+peeked_binds


Peeked Binds (identified by position):
--------------------------------------
 
1 - :X (VARCHAR2(30), CSID=873): 'x'

Peeked Binds: is displayed by ‘basic +peeked_binds’ or ‘typical +peeked_binds’ or ‘all +peeked_binds’ or ‘advanced’

+predicate


Predicate Information (identified by operation id):
---------------------------------------------------
 
6 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")

Predicate Information: is displayed by ‘basic +predicate’ or ‘typical’ or ‘all’ or ‘advanced’

+column


Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
 1 - INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
     "EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
     "EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]
 2 - (#keys=0) INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
     "EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
     "EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]
 3 - (#keys=1) INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
     "EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
     "EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]
 4 - INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
     "EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
     "EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]

Column Projection Information: is displayed by ‘basic +projection’ or ‘typical +projection’ or ‘all’ or ‘advanced’

+remote


Remote SQL Information (identified by operation id):
----------------------------------------------------
 
12 - SELECT "DEPTNO","DNAME","LOC" FROM "DEPT" "DEPT" (accessing 'LOOPBACK' )

Remote SQL Information: is displayed by ‘basic +remote’ or ‘typical’ or ‘all’ or ‘advanced’

+metrics


Sql Plan Directive information:
-------------------------------
 
Used directive ids:
9695481911885124390

Sql Plan Directive information: is displayed by ‘+metrics’

+note

The Note section can show information about SQL Profiles, SQL Patch, SQL Plan Baseline, Outlines, Dynamic Sampling, Degree of Parallelism, Parallel Query, Parallel DML, Create Index Size, Cardinality Feedback, Rely Constraints used for transformation, Sub-Optimal XML, Adaptive Plan, GTT private statistics,…


Note
-----
- Degree of Parallelism is 2 because of table property
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
- this is an adaptive plan (rows marked '-' are inactive)

Note: is displayed by ‘basic +note’ or ‘typical’ or ‘all’ or ‘advanced’

+adaptive


---------------------------------------------------------------------------------------
| Id | Operation | Name |Starts|E-Rows| A-Rows|
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |
| 1 | HASH UNIQUE | | 1 | 1 | 0 |
| * 2 | HASH JOIN SEMI | | 1 | 1 | 0 |
|- 3 | NESTED LOOPS SEMI | | 1 | 1 | 7 |
|- 4 | STATISTICS COLLECTOR | | 1 | | 7 |
| * 5 | TABLE ACCESS FULL | DEPARTMENTS | 1 | 1 | 7 |
|- * 6 | TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES | 0 | 1 | 0 |
|- * 7 | INDEX RANGE SCAN | EMP_DEP_IX | 0 | 10 | 0 |
| * 8 | TABLE ACCESS FULL | EMPLOYEES | 1 | 1 | 1 |
---------------------------------------------------------------------------------------

Inactive branches of adaptive plan: is displayed by ‘+adaptive’

+report


Reoptimized plan:
-----------------
This cursor is marked for automatic reoptimization. The plan that is
expected to be chosen on the next execution is displayed below.

Reoptimized plan: is displayed by ‘+report’

ALLSTATS


---------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | O/1/M |
---------------------------------------------------------------------------------------------------------------------------

Starts: is displayed by ‘basic +rowstats’, ‘basic +allstats’
A-Rows: is displayed by ‘basic +rowstats’, ‘basic +allstats’
A-Time: is displayed by ‘typical +rowstats’, ‘basic +allstats’
Buffers, Reads, Writes: is displayed by ‘basic +buffstats’, ‘basic +iostats’, ‘basic +allstats’
OMem, 1Mem, O/1/M, Used-Mem: is displayed by ‘basic +memstats’, ‘basic +allstats’
Max-Tmp, Used-Tmp: is displayed by ‘basic +memstats’, ‘typical +allstats’

With summed stats, O/1/M and Max-Tmp are used for the headers. With last stats, Used-Mem and Used-Tmp.

 

Cet article Explain Plan format est apparu en premier sur Blog dbi services.

Unplug an Encrypted PDB (ORA-46680: master keys of the container database must be exported)

Sat, 2018-01-20 16:16

In the Oracle Database Cloud DBaaS you provision a multitenant database where tablespaces are encrypted. This means that when you unplug/plug the pluggable databases, you also need to export/import the encryption keys. You cannot just copy the wallet because the wallet contains all the CDB keys. Usually, you can be guided by the error messages, but this one needs a little explanation and an example.

Here I’ll unplug PDB6 from CDB1 and plug it into CDB2

[oracle@VM122 blogs]$ connect /@CDB1 as sysdba
SQLcl: Release 17.4.0 Production on Fri Jan 19 22:22:44 2018
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
22:22:46 SQL> show pdbs
 
CON_ID CON_NAME   OPEN MODE    RESTRICTED
------ ---------- ------------ ----------
     2 PDB$SEED   READ ONLY    NO
     3 PDB1       READ WRITE   NO
     5 PDB6       READ WRITE   NO

Here are the master keys:

SQL> select con_id,tag,substr(key_id,1,6)||'...' "KEY_ID...",creator,key_use,keystore_type,origin,creator_pdbname,activating_pdbname from v$encryption_keys;
 
CON_ID TAG  KEY_ID... CREATOR KEY_USE    KEYSTORE_TYPE     ORIGIN CREATOR_PDBNAME ACTIVATING_PDBNAME
------ ---- --------- ------- ---------- ----------------- ------ --------------- ------------------
     1 cdb1 AcyH+Z... SYS     TDE IN PDB SOFTWARE KEYSTORE LOCAL  CDB$ROOT        CDB$ROOT
     3 pdb6 Adnhnu... SYS     TDE IN PDB SOFTWARE KEYSTORE LOCAL  PDB6            PDB6

Export keys and Unplug PDB

Let’s try to unplug PDB6:
22:22:51 SQL> alter pluggable database PDB6 close immediate;
Pluggable database PDB6 altered.
 
22:23:06 SQL> alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml';
 
Error starting at line : 1 in command -
alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml'
Error report -
ORA-46680: master keys of the container database must be exported

This message is not clear. You don’t export the container database (CDB) key. You have to export the PDB ones.

Then, I have to open the PDB, switch to it, and export the key:

SQL> alter session set container=PDB6;
Session altered.
 
SQL> administer key management set keystore open identified by "k3yCDB1";
Key MANAGEMENT succeeded.
 
SQL> administer key management
2 export encryption keys with secret "this is my secret password for the export"
3 to '/var/tmp/PDB6.p12'
4 identified by "k3yCDB1"
5 /
 
Key MANAGEMENT succeeded.

Note that I opened the keystore with a password. If you use an autologin wallet, you have to close it, in the CDB$ROOT, and open it with password.

Now I can unplug the database:

SQL> alter pluggable database PDB6 close immediate;
Pluggable database PDB6 altered.
 
SQL> alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml';
Pluggable database PDB6 altered.

Plug PDB and Import keys

I’ll plug it in CDB2:

SQL> connect /@CDB2 as sysdba
Connected.
SQL> create pluggable database PDB6 using '/var/tmp/PDB6.xml' file_name_convert=('/CDB1/PDB6/','/CDB2/PDB6/');
Pluggable database PDB6 created.

When I open it, I get a warning:

18:05:45 SQL> alter pluggable database PDB6 open;
ORA-24344: success with compilation error
24344. 00000 - "success with compilation error"
*Cause: A sql/plsql compilation error occurred.
*Action: Return OCI_SUCCESS_WITH_INFO along with the error code
 
Pluggable database PDB6 altered.

The PDB is opened in restricted mode and then I have to import the wallet:

SQL> show pdbs
 
CON_ID CON_NAME   OPEN MODE    RESTRICTED
------ ---------- ------------ ----------
     2 PDB$SEED   READ ONLY    NO
     6 PDB6       READ WRITE   YES
 
SQL> select name,cause,type,status,message,action from pdb_plug_in_violations;
 
NAME CAUSE             TYPE  STATUS  MESSAGE                               ACTION
---- ----------------- ----- ------- ------------------------------------- ------------------------
PDB6 Wallet Key Needed ERROR PENDING PDB needs to import keys from source. Import keys from source.

Then I open the destination CDB wallet and import the PDB keys into it:

SQL> alter session set container=PDB6;
Session altered.
 
SQL> administer key management set keystore open identified by "k3yCDB2";
Key MANAGEMENT succeeded.
 
SQL> administer key management
2 import encryption keys with secret "this is my secret password for the export"
3 from '/var/tmp/PDB6.p12'
4 identified by "k3yCDB2"
5 with backup
6 /
 
Key MANAGEMENT succeeded.

Now the PDB can be opened for all sessions

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> alter pluggable database PDB6 close;
Pluggable database PDB6 altered.
 
SQL> alter pluggable database PDB6 open;
Pluggable database PDB6 altered.

Here is a confirmation that the PDB has the same key as in the origin CDB:

SQL> select con_id,tag,substr(key_id,1,6)||'...' "KEY_ID...",creator,key_use,keystore_type,origin,creator_pdbname,activating_pdbname from v$encryption_keys;
 
CON_ID TAG  KEY_ID... CREATOR KEY_USE    KEYSTORE_TYPE     ORIGIN CREATOR_PDBNAME ACTIVATING_PDBNAME
------ ---- --------- ------- ---------- ----------------- ------ --------------- ------------------
     1 cdb2 AdTdo9... SYS     TDE IN PDB SOFTWARE KEYSTORE LOCAL  CDB$ROOT        CDB$ROOT
     4 pdb1 Adnhnu... SYS     TDE IN PDB SOFTWARE KEYSTORE LOCAL  PDB6            PDB6

 

Cet article Unplug an Encrypted PDB (ORA-46680: master keys of the container database must be exported) est apparu en premier sur Blog dbi services.

SQL Server on Linux and logging

Fri, 2018-01-19 01:39

In the Windows world, SQL Server logs information both into the SQL Server error log and the Application log. Both automatically timestamp all recorded events. Unlike the SQL Server error log, the Windows application log provides an overall picture of events that occur globally on the Windows operating system. Thus, when issues are encountered, taking a look at such event logs – by using either the Windows Event Viewer or the Get-EventLog PowerShell cmdlet – may be very helpful to figure out whether they are only SQL Server-scoped or whether you have to correlate them with other operating system issues.

But what about SQL Server on Linux? Obviously, we may use similar logging technologies. As on Windows, SQL Server logs information both in the SQL Server error log located in /var/opt/mssql/log/ and in the Linux logs. Because SQL Server is only supported on Linux distributions that all include systemd (RHEL 7.3+, SLES 12 SP2+ or Ubuntu 16.04+), we have to go through the journalctl command to browse the messages related to the SQL Server instance.

systemd-journald is a system service that collects and stores logging data based on logging information that is received from a variety of sources – Kernel and user log messages. All can be viewed through the journalctl command.

Let’s say that the journalctl command is very powerful and I don’t aim to cover all the possibilities. My intention is only to dig into some examples in the context of SQL Server. Conceptually this is not so different from what we usually do on a Windows system for basic stuff.

Firstly, let’s say we may use a couple of options to filter the records we want to display. Probably the most intuitive way to start with the journalctl command is to use the time interval parameters --since and --until as follows:

[root@sqllinux ~] journalctl --since "2018-01-16 12:00:00" --until "2018-01-16 23:30:00"

Here is a sample of the corresponding output:

blog 126 - 1 - journalctl with time interval filter

All log messages are displayed, including the kernel ones. But rather than using time interval filters, we may prefer to use the -b parameter to show all log messages since the last system boot, for instance:

[root@sqllinux ~] journalctl -b

The corresponding output:

blog 126 - 2 - journalctl with last boot filter

You may use different commands to get the last system boot time, such as uptime or who -b. I'm in favour of last reboot because it provides the last reboot dates rather than only the uptime of the system.

Furthermore, one interesting point is that if you want to get log messages from older system boots (and not only the last one) you have to set up systemd-journald accordingly to enable log persistence. By default, storage is volatile and logs are cleared after each system reboot. You may get this information directly from the systemd-journald configuration file (#Storage=auto by default):

[root@sqllinux ~] cat /etc/systemd/journald.conf
…
[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
…

I remember a customer case where I had to diagnose a database integrity check job scheduled every Sunday that failed randomly. We finally figured out that the root cause was a system reboot after an automatic update. The tricky part was that not all system reboots led to a failure of the DBCC CHECKDB command, and according to the information from the Windows log we understood it depended mostly on the DBCC CHECKDB execution time, which sometimes exceeded the time scheduled for the system reboot. So, in this case going back to the previous reboots (before the last one) was helpful for us. Let's say that for some Linux distributions persistence is not the default option, and my colleague Daniel Westermann from the dbi services open source team explained well in his blog post how to change the default behavior.
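
For the record, a minimal sketch of enabling persistence could be one of the following (to be validated against your distribution's documentation):

# either create the persistent journal directory (used when Storage=auto) and restart the daemon...
$ sudo mkdir -p /var/log/journal
$ sudo systemctl restart systemd-journald
# ...or set Storage=persistent explicitly in /etc/systemd/journald.conf and restart systemd-journald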

So, after applying the correct setup, if you want to display log messages after a pre-defined boot time you may first identify the different system boot times logged into the journal as follows:

[root@sqllinux ~] journalctl --list-boots
-1 576f0fb259f4433083c05329614d749e Tue 2018-01-16 15:41:15 CET—Wed 2018-01-17 20:30:41 CET
 0 ea3ec7019f8446959cfad0bba517a47e Wed 2018-01-17 20:33:30 CET—Wed 2018-01-17 20:37:05 CET

Then you may rewind the journal until the corresponding offset:

[root@sqllinux ~] journalctl -b -1 
-- Logs begin at Tue 2018-01-16 15:41:15 CET, end at Wed 2018-01-17 20:37:40 CET. --
Jan 16 15:41:15 localhost.localdomain systemd-journal[105]: Runtime journal is using 8.0M (max allowed 188.7M, trying to leave 283.1M free
 of 1.8G available → current limit 188.7M).
….

Let’s go ahead with filtering by unit (the mssql-server unit). This is likely the most useful way for DBAs to display only SQL Server related records, in combination with the aforementioned options (time interval or last boot(s) parameters). In the following example, I want to display the SQL Server related records since the system boot that occurred on 18 January 2018 at 20:39 (I may also combine this with time interval filters):

[root@sqllinux ~] journalctl -b -1 -u mssql-server.service 
-- Logs begin at Tue 2018-01-16 15:41:15 CET, end at Wed 2018-01-17 20:39:55 CET. --
Jan 16 15:41:17 sqllinux.dbi-services.test systemd[1]: [/usr/lib/systemd/system/mssql-server.service:21] Unknown lvalue 'TasksMax' in sect
ion 'Service'
Jan 16 20:47:15 sqllinux.dbi-services.test systemd[1]: Started Microsoft SQL Server Database Engine.
Jan 16 20:47:15 sqllinux.dbi-services.test systemd[1]: Starting Microsoft SQL Server Database Engine
...
Jan 16 20:47:22 sqllinux.dbi-services.test sqlservr[1119]: 2018-01-16 20:47:22.35 Server      Microsoft SQL Server 2017 (RTM-CU2) (KB40525
74) - 14.0.3008.27 (X64)
…

You may also want to get only the errors concerning your SQL Server instance. If you have already used syslog in the past, you will be comfortable with systemd-journald, which implements the standard syslog message levels and priorities. Indeed, each message has its own priority as shown below. The counterpart in the Windows event log are the event types (warning, error, critical, etc.). On Linux, priorities are identified by a number – 6 corresponds to info messages and 3 to error messages. Here is a log message's anatomy with its priority value:

[root@sqllinux ~] journalctl -b -1 -u mssql-server.service -n 1 -o verbose
-- Logs begin at Tue 2018-01-16 15:41:15 CET, end at Wed 2018-01-17 20:48:36 CET. --
Wed 2018-01-17 20:30:38.937388 CET [s=5903eef6a5fd45e584ce03a4ae329ac3;i=88d;b=576f0fb259f4433083c05329614d749e;m=13e34d5fcf;t=562fde1d9a1
    PRIORITY=6
    _UID=0
    _GID=0
    _BOOT_ID=576f0fb259f4433083c05329614d749e
    _MACHINE_ID=70f4e4633f754037916dfb35844b4b16
    SYSLOG_FACILITY=3
    SYSLOG_IDENTIFIER=systemd
    CODE_FILE=src/core/job.c
    CODE_FUNCTION=job_log_status_message
    RESULT=done
    _TRANSPORT=journal
    _PID=1
    _COMM=systemd
    _EXE=/usr/lib/systemd/systemd
    _CAP_EFFECTIVE=1fffffffff
    _SYSTEMD_CGROUP=/
    CODE_LINE=784
    MESSAGE_ID=9d1aaa27d60140bd96365438aad20286
    _HOSTNAME=sqllinux.dbi-services.test
    _CMDLINE=/usr/lib/systemd/systemd --switched-root --system --deserialize 21
    _SELINUX_CONTEXT=system_u:system_r:init_t:s0
    UNIT=mssql-server.service
    MESSAGE=Stopped Microsoft SQL Server Database Engine.
    _SOURCE_REALTIME_TIMESTAMP=1516217438937388

 

So, if you want to restrict the output further to only warning, error or critical messages (from a daemon point of view), you may add the -p option with a range of priorities from 2 (critical) to 4 (warning), as shown below:

[root@sqllinux ~] journalctl -p 2..4 -u mssql-server.service
-- Logs begin at Tue 2018-01-16 15:41:15 CET, end at Wed 2018-01-17 21:44:04 CET. --
Jan 16 15:41:17 sqllinux.dbi-services.test systemd[1]: [/usr/lib/systemd/system/mssql-server.service:21] Unknown lvalue 'TasksMax' in sect
-- Reboot --
Jan 17 15:27:42 sqllinux.dbi-services.test systemd[1]: [/usr/lib/systemd/system/mssql-server.service:21] Unknown lvalue 'TasksMax' in sect
lines 1-4/4 (END)

Ultimately, filtering by message content will probably be the most natural way to find log messages. Let's say that in this case there are no built-in parameters or options provided by the journalctl command, and grep will be your friend for sure. The following example is a classic customer case where we want to count the number of failed logins during a specific period. So, I have to use a combination of the journalctl, grep and wc commands:

[root@sqllinux ~] journalctl -u mssql-server.service --since "2018-01-17 12:00:00" --until "2018-01-17 23:00:00"  | grep "Login failed" | wc -l
31

Finally, the journalctl command offers real-time capabilities to follow log messages through the -f option. For very specific cases it might be useful. In the example below I can use it to follow SQL Server related log messages:

[root@sqllinux ~] journalctl -u mssql-server.service -f
-- Logs begin at Tue 2018-01-16 15:41:15 CET. --
Jan 17 21:52:57 sqllinux.dbi-services.test sqlservr[1121]: 2018-01-17 21:52:57.97 Logon       Error: 18456, Severity: 14, State: 8.
Jan 17 21:52:57 sqllinux.dbi-services.test sqlservr[1121]: 2018-01-17 21:52:57.97 Logon       Login failed for user 'sa'. Reason: Password did not match that for the login provided. [CLIENT: 192.168.40.30]

 

Another topic I wanted to introduce is centralized log management. Nowadays, plenty of third-party tools like Splunk – or built-in Microsoft tools such as SCOM – may address this need in both the Windows and Linux worlds. I also remember a special customer case where we went through the built-in Windows event forwarding mechanism. In the Linux world, you may benefit from plenty of open source tools, and you may also rely on built-in tools such as systemd-journal-remote, systemd-journal-upload and systemd-journal-gatewayd. I will probably go further into these tools in the future, but this time let's use an older tool, rsyslog, which implements the basic syslog protocol and extends it with additional features. In this blog post I used a CentOS 7 distro that comes with rsyslog. The good news is that it also includes by default the imjournal module (that provides access to the systemd journal). This module reads logs from /run/log/journal and then writes to /var/log/messages, /var/log/maillog, /var/log/secure or others depending on the record type. Log records may be sent over TCP or UDP, and securing capabilities are also provided (by using TLS and certificates for instance).

Just out of curiosity, I decided to implement a very simple log message forwarding scenario to centralize only SQL Server log messages. Basically, I only had to set up some parameters in /etc/rsyslog.conf on both sides (sender and receiver servers) as well as apply some firewall rules to allow the traffic on port 514. In addition, I used the TCP protocol because this is probably the simplest way to send log messages (the corresponding modules are already loaded). Here is an illustration of my scenario:

blog 126 - 3 - rsyslog architecture
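
For reference, the firewall part on the receiver side could look like this minimal sketch on CentOS 7 with firewalld (adapt the zone and source restrictions to your environment):

$ sudo firewall-cmd --permanent --add-port=514/tcp
$ sudo firewall-cmd --reload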

Here are the configuration settings of my log message sender. You may notice that I used expression-based filters to send only the messages related to my SQL Server instance:

[root@sqllinux ~] cat /etc/rsyslog.conf
#### MODULES ####

# The imjournal module bellow is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark  # provides --MARK-- message capability
…
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* @@remote-host:514
if $programname == 'sqlservr' then @@192.168.40.21:514 
…

On the receiver side I configured the rsyslog daemon to accept messages over TCP on port 514. Here is a sample (only the interesting part) of the configuration file:

[root@sqllinux2 ~] cat /etc/rsyslog.conf
…
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

 

Finally, I ran a simple test to check if the log message forwarding process works correctly by using the following T-SQL command from my SQLLINUX instance …

RAISERROR('test syslog from SQLLINUX instance', 10, 1) WITH LOG

 

… and after jumping to the receiver side (SQLLINUX2) I used the tail command to check if my message was sent correctly:

[root@sqllinux2 ~] tail -f /var/log/messages
…
Jan 18 21:50:40 sqllinux sqlservr: 2018-01-18 21:50:40.66 spid57      test syslog
Jan 18 21:51:03 sqllinux sqlservr: 2018-01-18 21:51:03.75 spid57      test syslog 1 2 3
Jan 18 21:52:08 sqllinux sqlservr: 2018-01-18 21:52:08.74 spid52      Using 'dbghelp.dll' version '4.0.5'
Jan 18 21:56:31 sqllinux sqlservr: 2018-01-18 21:56:31.13 spid57      test syslog from SQLLINUX instance

Well done!
In this blog post we've scratched the surface of how SQL Server deals with the Linux logging system and how we may use the journalctl command to find information for troubleshooting. Moving from Windows to Linux in this field remains straightforward, with ultimately the same basics. Obviously, Linux is a command-line oriented operating system, so you will not escape using the commands :-)

 

Cet article SQL Server on Linux and logging est apparu en premier sur Blog dbi services.

Alfresco DevCon 2018 – Day 2 – Big files, Solr Sharding and Minecraft, again!

Thu, 2018-01-18 13:46

Today is the second day of the Alfresco DevCon 2018 and therefore yes, it is already over, unfortunately. In this blog, I will be continuing my previous one with sessions I attended on the afternoon of the day-1 as well as day-2. There were too many interesting sessions and I don’t really have the time to talk about all of them… But if you are interested, all the sessions were recorded (as always) so wait a little bit and check out the DevCon website, the Alfresco Community or the Alfresco Youtube channel and I’m sure you will find all the recordings as soon as they are available.

 

So on the afternoon of day-1, I started with a presentation by Jeff Potts, you all know him, where he was talking about how to move gigantic files (several gigabytes) in (upload) and out (download) of Alfresco. He basically presented a use case where the users had to manage big files and put them all in Alfresco with the least headache possible. On paper, Alfresco can handle any file no matter the size because the only limit is what the File System of the Alfresco Server supports. However, when you start working with 10 or 20 GB files, you can sometimes have issues like exceptions, timeouts, network outages, aso… It might not be frequent but it can happen for a variety of reasons (not always linked to Alfresco). The use case here was to simplify the import into Alfresco and make it faster. Jeff tested a lot of possible solutions like using the Desktop Sync, CMIS, FTP, the resumable upload Share add-on, aso…

In the end, a pure simple (1 stream) upload/download will always be limited by the network. So he tried to work on improving this part and used the Resilio Sync software (formerly BitTorrent Sync). This tool can be used to stream a file to the Alfresco Server, BitTorrent style (P2P). But the main problem of this solution is that P2P is only as good as the number of users having this specific file available on their workstation… Depending on the use case, it might increase the performance but it wasn’t ideal.

In the end, Jeff came across the protocol “GridFTP”. This is an extension of the FTP for grid computing whose purpose is to make the file transfer more reliable and faster using multiple simultaneous TCP streams. There are several implementations of the GridFTP like the Globus Toolkit. Basically, the solution in this case was to use Globus to transfer the big files from the user’s workstation to a dedicated File System which is mounted on the Alfresco Server. Then using the Alfresco Bulk FileSystem Import Tool (BFSIT), it is really fast to import documents into Alfresco, as soon as they are on the File System of the Alfresco Server. For the download, it is just the opposite (using the BFSET)…

For files smaller than 512Mb, this solution is probably slower than the default Alfresco upload/download actions but for bigger files (or group of files), then it becomes very interesting. Jeff did some tests and basically for one or several files with a total size of 3 or 4GB, then the transfer using Globus and then the import into Alfresco was 50 to 60% faster than the Alfresco default upload/download.

 

Later, Jose Portillo shared Solr Sharding best practices. Sharding is the action of splitting your indexes into Shards (parts of an index) to speed up searches and indexing (horizontal scaling). The Shards can be stored on a single Solr Server or they can be distributed across several. Doing this basically increases the search speed because the search is executed on all Shards in parallel. For the indexing of a single node, there is no big difference, but for a full reindex it does increase the performance a lot because the nodes are indexed at the same time on each Shard…

A single Shard can work well (according to the Alfresco Benchmark) with up to 50M documents. Therefore, using Shards is mainly for big repositories but it doesn’t mean that there are no use cases where it would be interesting for smaller repositories, there are! If you want to increase your search/index performance, then start creating Shards much earlier.

For the Solr Sharding, there are two registration options:

  • Manual Sharding => You need to manually configure the IPs/hosts of the Shards in the Alfresco properties files
  • Dynamic Sharding => Easier to set up; Alfresco automatically provides information about the Shards in the Admin interface for easy management

There are several Sharding methods, which are summarized here:

  • MOD_ACL_ID (ACL v1) => Sharding based on ACL. If all documents have the same ACL (same site for example), then they will all be on the same Shard, which might not be very useful…
  • ACL_ID (ACL v2) => Same as v1 except that it uses the murmur hash of the ACL ID and not its modulus
  • DB_ID (DB ID) => Default in Solr6. Nodes are evenly distributed on the Shards based on their DB ID
  • DB_ID_RANGE (DB ID Range) => You can define the DB ID range for which nodes will go to which Shard (E.g.: 1 to 10M => Shard-0 / 10M to 20M => Shard-1 / aso…)
  • DATE (Date and Time) => Assigns dates to the Shards based on the month. It is possible to group some months together and assign a group per Shard
  • PROPERTY (Metadata) => The value of some property is hashed and this hash is used for the assignment to a Shard so all nodes with the same value are in the same Shard
  • EXPLICIT (?) => This is an all-new method that isn’t in the documentation yet… Since there isn’t any information about it except in the source code, I asked Jose to provide me some information about what it does. He’ll look at the source code and I will update this blog post as soon as I receive some information!

Unfortunately, Solr Sharding only became available with Alfresco Content Services 5.1 (Solr 4) and only using the ACL v1 method. New methods were then added with the Alfresco Search Services (Solr 6). The availability of methods versus Alfresco/Solr versions has been summarized in Jose’s presentation:

DevCon2018_ShardingMethodsAvailability

Jose also shared a comparison matrix of the different methods to choose the right one for each use case:

DevCon2018_ShardingMethodsFeatures

Some other best practices regarding the Solr Sharding:

  • Replicate the Shards to improve response time; it also provides High Availability so… no reason not to!
  • Back up the Shards using the provided Web Service so Alfresco can do it for you for one or several Shards
  • Use DB_ID_RANGE if you want to be able to add Shards without having to perform a full reindex, this is the only way
  • If you need another method than DB_ID_RANGE, then plan carefully the number of Shards to be created. You might want to overshard to take into account the future growth
  • Keep in mind that each Shard will pull the changes from Alfresco every 15s and it all goes to the DB… It might create some load there, so make sure that your DB can handle it
  • As far as I know, Sharding does not currently support Solr in SSL. Solr should anyway be protected from external access because it is only used internally by Alfresco, so it is an ugly point so far but it’s not too bad. Sharding is pretty new so it will probably support SSL at some point in the future
  • Tune Solr properly and don’t forget the Application Server request header size:
    • Solr4 => Tomcat => maxHttpHeaderSize=…
    • Solr6 => Jetty => solr.jetty.request.header.size=…

 

The day-2 started with a session from John Newton who presented the impact of emerging technologies on content. As usual, John’s presentation had a funny theme incorporated in the slides and this time it was Star Wars.

DevCon2018_StarWars

 

After that, I attended the Hack-a-thon showcase, presented/introduced by Axel Faust. In the Alfresco world, Hack-a-thons are:

  • Around since 2012
  • Open-minded and all about collaboration. Therefore, the output of any project is open source and available for the community. It’s not about money!
  • Always the source of great add-ons and ideas
  • Held 2 times per year
    • During conferences (day-0)
    • Virtual Hack-a-thon (36h ‘follow-the-sun’ principle)

A few of the 16 teams that participated in the Hack-a-thon presented the results of their Hack-a-thon day and there were really interesting results for ACS, ACS on AWS, APS, aso…

Apart from that, I also attended all the lightning talks on this day-2 as well as presentations on PostgreSQL and Solr HA/Backup solutions and best practices. The presentations about PostgreSQL and Solr were especially interesting for newcomers because they really explained what should be done to have a highly available and resilient Alfresco environment.

 

There were too many lightning talks to mention them all but, as always, some were quite interesting and I just have to mention the talk about the ContentCraft plugin (from Roy Wetherall). There cannot be an Alfresco event (be it a Virtual Hack-a-thon, BeeCon or DevCon now) without an Alfresco integration into Minecraft. Every year, Roy keeps adding new stuff to his plugin… I remember, years ago, Roy was already able to create a building in Minecraft where the height represented the number of folders stored in Alfresco and the depth was the number of documents inside, if my memory is correct (this has changed now, it represents the number of sub-folders). This year, Roy presented the new version and it’s even more incredible! Now if you are in front of one of the building’s doors, you can see the name and creator of the folder on a ‘Minecraft sign’. Then you can walk into the building and there is a corridor. On both sides, there are rooms which represent the sub-folders. Again, there are ‘Minecraft signs’ there with the name and creator of the sub-folders. So far, it’s more of the same, which is already cool, but it gets even better!

If you walk into a room, you will see ‘Minecraft bookshelves’ and ‘Minecraft chests’. Bookshelves are just there for decoration but if you open the chests, then you will see, represented by ‘Minecraft books’, all your Alfresco documents stored in this sub-folder! Then if you open a book, you will see the content of this Alfresco document! And even crazier, if you update the content of the book in Minecraft and save it, the document stored in Alfresco will reflect this change! This is way too funny :D.

It’s all done using CMIS so there is nothing magical… Yet it really makes you wonder if there are any limits to what Alfresco can do ;).

 

If I dare to say: long live Alfresco! And see you around again for the next DevCon.

 

 

Cet article Alfresco DevCon 2018 – Day 2 – Big files, Solr Sharding and Minecraft, again! est apparu en premier sur Blog dbi services.

Alfresco DevCon 2018 – Day 1 – ADF, ADF and… ADF!

Wed, 2018-01-17 13:30

Here we are, the Alfresco DevCon 2018 day-1 is over (well, except for the social party)! It has already been 2 years since I attended my last Alfresco event (BeeCon 2016, first of its name (organized by the Order of the Bee)) because I wasn’t able to attend the second BeeCon (2017) since it happened on the exact dates of our internal dbi xChange event. Yesterday was the DevCon 2018 day-0 with the Hackathon, the full day training and the ACSCE/APSCE Certification preparation but today was really the first day of sessions.

DevCon2018_Logo

 

The day-1 started, as usual, with a Keynote from Thomas DeMeo who presented interesting information regarding the global direction of Alfresco products, the Roadmap (of course) for the coming year as well as some use cases where Alfresco was successfully used in very interesting projects, including on AWS.

DevCon2018_Roadmap

 

The second part of the keynote was presented by Brian Remmington, who explained the future of the Alfresco Digital Platform. In the coming months/years, Alfresco will include/refactor/work on the following points for its Digital Platform:

  • Improve SSO solutions. Kerberos is already working very well with Alfresco but they intend to also add SAML2, OAuth, aso… This is a very good thing!
  • Merging the Identity management for the ACS and APS into one single unit
  • Adding an API Gateway in front of ACS and APS to always talk to the same component while targeting both the ACS and APS in the background. It will also allow Alfresco to change the backend APIs, if needed (to align them for example), without the consumers of the API Gateway noticing it. This is a very good thing too from a developer perspective since you will be sure that your code will not break if Alfresco renames something for example!
  • Merging the search on the ACS and APS into one single search index
  • We already knew it but it was confirmed that Alfresco [will probably drop the default installer and instead] will provide docker/kubernetes means for you to deploy Alfresco easily and quickly using these new technologies
  • Finishing the merge/refactor of other ACS/APS services into common units for both products so that work done once isn’t duplicated. This will concern the Search (Insight?) Service, the Transformation Service, the Form Service and a new Function Service (basically code functions shared between ACS and APS!).

All this looks promising, like really.

 

Then, starting at 10am, there were four streams running in parallel so there was always something interesting to attend, that’s for sure. I didn’t mention it but DevCon isn’t just a name… It means that the sessions are really technical; we are far from the (boring) business presentations that you can find at all other competitors’ events… I did a full morning on ADF. Mario Romano and Ole Hejlskov were presenting ADF Basics and Beyond.

For those of you who don’t know yet, ADF (Alfresco Application Development Framework) is the last piece of the Digital Platform that Alfresco has been bringing recently. It is a very interesting new framework that allows you to create your own UI to use in front of the ACS/APS. There are at the moment more than 100 Angular components that you can use, extend, compose and configure to build the UI that will match your use case. Alfresco Share still provides way more features than ADF but I must say that I’m pretty impressed by what you can achieve in ADF with very little: it looks like it is going in the right direction.

ADF 2.0 has been released recently (November 2017) and it is based on three main pillars: the latest version of Angular (Angular 5), a powerful JavaScript API (that talks in the background with the ACS/APS/AGS APIs) and the Yeoman generator + Angular CLI for fast deployments.

ADF provides 3 extension points for you to customize a component:

  • html extension points => adding html to customize the look&feel or the behavior
  • event listeners => adding behaviors on events for example
  • config properties => each component has properties that will customize it

One of the goals of ADF is the ability to upgrade your application without any effort. Angular components will be updated in the future but ADF was designed (and Alfresco’s effort is going) in a way that, even if you use these components in your ADF application, an upgrade of your application won’t hurt at all. If you want to learn more about ADF, I suggest the Alfresco Tech Talk Live that took place in December as well as the Alfresco Office Hours.

 

After this first introduction session to ADF, Eugenio Romano went deeper and showed how to play with ADF 2.0: installing it, deploying a first application and then customizing the main pieces like the theme, the document list, the actions, the search, aso… There were some really interesting examples and I’m really looking forward to seeing the tutorials and documentation about these new ADF 2.0 features and components popping up on the Alfresco website.

 

To conclude the morning, Denys Vuika presented a session about how to use and extend the Alfresco Content App (ACA). The ACA is the new ADF 2.0 application provided by Alfresco. It is a demo/sample application whose purpose is to be lightweight so it is as fast as possible. You can then customize it as you want and play with the ADF so that this sample application matches your needs. One of the demos Denys presented was how you can change the default previewer for certain types of files (.txt, .js, .xml for example). In ADF, that’s like 5 lines of code (of course you need to have another previewer of your own but that’s not ADF stuff) and then he had an awesome preview for .js files where there was syntax highlighting right inside the Alfresco preview as well as tooltips on names giving descriptions of variables and functions, apparently. This kind of small feature, achieved so easily, looks quite promising.

 

I already wrote a lot on ADF today so I will stop my blog here but I did attend a lot of other very interesting sessions in the afternoon. I might talk about that tomorrow.

 

 

 

Cet article Alfresco DevCon 2018 – Day 1 – ADF, ADF and… ADF! est apparu en premier sur Blog dbi services.

Oracle Application Container: a Swiss Use case

Tue, 2018-01-16 17:11

Here we want to start a business in Switzerland in 3 different areas and make it easy to open a new market area as soon as the business requires it. We are going to use the Application Container feature in order to:

  • Have a dedicated PDB for each market with shared and local metadata and data
  • Frequently roll out data model and code changes to add features, in a central manner, with one command

We first have to create a master application container; you can have many master application containers within a CDB. We will also create a seed PDB from that master application container. It is not mandatory, but its role is to keep a synchronized copy of the master and to speed up the provisioning of new pluggable databases within the master container.

SQL> create pluggable database B2C_WEB_CON as application container admin user pdbadmin identified by secret roles = (DBA) ;

Pluggable database B2C_WEB_CON created.

SQL> alter pluggable database B2C_WEB_CON open;

Pluggable database B2C_WEB_CON altered.

SQL> alter session set container = B2C_WEB_CON ;

Session altered.

SQL> create pluggable database as seed admin user pdbadmin identified by oracle roles=(DBA)  ;

Pluggable database AS created.

SQL> alter pluggable database B2C_WEB_CON$SEED open;

Pluggable database B2C_WEB_CON$SEED altered.

SQL> select PDB_ID, PDB_NAME, STATUS, IS_PROXY_PDB, APPLICATION_ROOT, APPLICATION_PDB, APPLICATION_SEED, APPLICATION_ROOT_CON_ID from dba_pdbs order by 1;
  PDB_ID PDB_NAME           STATUS   IS_PROXY_PDB   APPLICATION_ROOT   APPLICATION_PDB   APPLICATION_SEED     APPLICATION_ROOT_CON_ID
       4 B2C_WEB_CON        NORMAL   NO             YES                NO                NO
       5 B2C_WEB_CON$SEED   NORMAL   NO             NO                 YES               YES                                        4

 

I will now create an application from scratch. An application is a set of commands executed in the master application container on which a version tag is applied. In other words, Oracle records what happens in the master container and applies a version flag to those commands so that they can be replayed in future PDBs. Note that the capture starts with an “alter pluggable database application B2C_WEB_APP begin install '1.0'” statement (not shown in the transcript below) and ends with the corresponding “end install” statement.

SQL> connect sys/oracle@//localhost:1521/B2C_WEB_CON as sysdba
Connected.

SQL> create user USR_B2C_WEB identified by secret quota unlimited on TBS_B2C_WEB ;

User USR_B2C_WEB created.

SQL> alter user USR_B2C_WEB default tablespace TBS_B2C_WEB ;

User USR_B2C_WEB altered.

SQL> grant create session, resource to USR_B2C_WEB ;

Grant succeeded.

SQL> alter session set current_schema = USR_B2C_WEB ;

Session altered.

SQL> create table customers ( customer_id number, name varchar2(50), address varchar2(50) ) ;

Table CUSTOMERS created.

SQL> create table orders ( order_id number, customer_id number, order_date date ) ;

Table ORDERS created.

SQL> create table order_details ( order_detail_id number, order_id number, product_id number, quantity number ) ;

Table ORDER_DETAILS created.

SQL> create table products ( product_id number, name varchar2(50) ) ;

Table PRODUCTS created.

SQL> alter pluggable database application B2C_WEB_APP end install '1.0';

Pluggable database APPLICATION altered.

SQL> select * from dba_applications;
APP_NAME                                 APP_ID APP_VERSION   APP_STATUS   APP_IMPLICIT   APP_CAPTURE_SERVICE   APP_CAPTURE_MODULE
APP$62EA42BE47360FA8E0537A38A8C0A0F3          2 1.0           NORMAL       Y              SYS$USERS             java@VM122 (TNS V1-V3)
B2C_WEB_APP                                   3 1.0           NORMAL       N              b2c_web_con           java@VM122 (TNS V1-V3)
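
Since everything executed between the “begin install” and “end install” markers is captured, you can review the recorded statements directly from the application root. Here is a minimal sketch of such a check, assuming the DBA_APP_STATEMENTS dictionary view introduced in 12.2 (output not shown):

SQL> -- lists the statements captured for the applications of this application root
SQL> select * from dba_app_statements ;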

 

We can now synchronize the application tables of our B2C_WEB_APP application from the MASTER to the SEED in order to speed up the next pluggable database creations

SQL> connect sys/oracle@//localhost:1521/B2C_WEB_CON$SEED as sysdba

Session altered.

SQL> alter pluggable database application B2C_WEB_APP sync ;

 

Then, go back to the master and create a pluggable database for each market with the latest B2C_WEB_APP application release, which is currently 1.0

SQL> connect pdbadmin/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> create pluggable database B2C_WEB_APP_VD admin user pdbadmin identified by secret roles=(DBA) ;

Pluggable database B2C_WEB_APP_VD created.

SQL> create pluggable database B2C_WEB_APP_GE admin user pdbadmin identified by secret roles=(DBA) ;

Pluggable database B2C_WEB_APP_GE created.

SQL> create pluggable database B2C_WEB_APP_ZH admin user pdbadmin identified by secret roles=(DBA) ;

Pluggable database B2C_WEB_APP_ZH created.

 

We open them and save their state so that they are opened automatically at the next CDB restart

SQL> connect sys/oracle@//localhost:1521/B2C_WEB_CON as sysdba
Connected.

SQL> alter pluggable database all open ;

Pluggable database ALL altered.

SQL> alter pluggable database save state ;

Pluggable database SAVE altered.
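
As a quick sanity check, you can verify that the market PDBs are indeed open before moving on (output not shown):

SQL> -- shows the application root and its application PDBs with their open mode
SQL> select name, open_mode from v$pdbs order by name ;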

 

Let’s generate some business activity on each pluggable database corresponding to different Swiss markets

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_VD
Connected.
SQL> insert into usr_b2c_web.products select rownum, 'product_VD_00'||rownum from dual connect by level <= 5 ;

5 rows inserted.

SQL> commit ;

Commit complete.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> insert into usr_b2c_web.products select rownum, 'product_GE_00'||rownum from dual connect by level <= 5 ;

5 rows inserted.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> insert into usr_b2c_web.products select rownum, 'product_ZH_00'||rownum from dual connect by level <= 5 ;

5 rows inserted.

SQL> commit;

Commit complete.

 

Now we can check from the master container whether the data is located correctly according to its market

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> select con$name, p.*
  2  from containers ( PRODUCTS ) p ;
CON$NAME           PRODUCT_ID NAME               CON_ID
B2C_WEB_APP_ZH              1 product_ZH_001          7
B2C_WEB_APP_ZH              2 product_ZH_002          7
B2C_WEB_APP_ZH              3 product_ZH_003          7
B2C_WEB_APP_ZH              4 product_ZH_004          7
B2C_WEB_APP_ZH              5 product_ZH_005          7
B2C_WEB_APP_GE              1 product_GE_001          6
B2C_WEB_APP_GE              2 product_GE_002          6
B2C_WEB_APP_GE              3 product_GE_003          6
B2C_WEB_APP_GE              4 product_GE_004          6
B2C_WEB_APP_GE              5 product_GE_005          6
B2C_WEB_APP_VD              1 product_VD_001          3
B2C_WEB_APP_VD              2 product_VD_002          3
B2C_WEB_APP_VD              3 product_VD_003          3
B2C_WEB_APP_VD              4 product_VD_004          3
B2C_WEB_APP_VD              5 product_VD_005          3

 

We have different products for each of our markets. Now, we would like to upgrade the data model and add some code (a procedure) for a basic feature: adding a customer

SQL> connect pdbadmin/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> alter pluggable database application B2C_WEB_APP begin upgrade '1.0' to '1.1';

Pluggable database APPLICATION altered.

SQL> alter session set current_schema = USR_B2C_WEB ;

Session altered.

SQL> alter table customers drop ( address ) ;

Table CUSTOMERS altered.

SQL> alter table customers add ( email varchar2(35) ) ;

Table CUSTOMERS altered.

SQL> alter table products add ( price number (8, 2) ) ;

Table PRODUCTS altered.

SQL> create sequence customer_seq ;

Sequence CUSTOMER_SEQ created.

SQL> create procedure customer_add ( name in varchar2, email in varchar2 ) as
  2  begin
  3    insert into customers values ( customer_seq.nextval, name, email ) ;
  4    commit ;
  5  end;
  6  /

Procedure CUSTOMER_ADD compiled

SQL> alter pluggable database application B2C_WEB_APP end upgrade to '1.1';

Pluggable database APPLICATION altered.
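
At this point the application root knows both releases. A minimal way to verify it, assuming the DBA_APP_VERSIONS dictionary view queried from the application root (output not shown):

SQL> -- one row per registered version of each application
SQL> select app_name, app_version from dba_app_versions order by app_name, app_version ;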

 

Let’s push release 1.1 to production, one market after the other

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_VD
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync ;

Pluggable database APPLICATION altered.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync ;

Pluggable database APPLICATION altered.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync ;

Pluggable database APPLICATION altered.
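
Before trusting that every market is really on release 1.1, it is worth checking the synchronization status of each application PDB from the application root. A minimal sketch, assuming the DBA_APP_PDB_STATUS dictionary view (output not shown):

SQL> connect sys/oracle@//localhost:1521/B2C_WEB_CON as sysdba
SQL> -- shows, for each application PDB, the application version it is currently synchronized to
SQL> select * from dba_app_pdb_status ;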

 

Some business activity happens and new customers appear thanks to the new feature we deployed with release 1.1 of our B2C_WEB_APP application

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_VD
Connected.

SQL> exec customer_add ('Scotty Hertz', 'SCOTTY@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Scotty Hertz', 'SCOTTY@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Smith Watson', 'SMITH@YAHOO.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('John Curt', 'JOHN@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Dalton X', 'DALTON@AOL.COM') ;

PL/SQL procedure successfully completed.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> exec customer_add ('Scotty Hertz', 'SCOTTY@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Sandeep John', 'SANDEEP@YAHOO.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Smith Curt', 'SMITH@YAHOO.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Orlondo Watson', 'ORLONDO@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> exec customer_add ('Maria Smith', 'MARIA@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Smith Scotty', 'SMITH@YAHOO.COM') ;

PL/SQL procedure successfully completed.

 

Now, let’s have a look at the new customer data from the master container

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> select con$name, c.*
  2  from containers ( CUSTOMERS ) c ;
CON$NAME           CUSTOMER_ID NAME             EMAIL                 CON_ID
B2C_WEB_APP_VD               1 Scotty Hertz     SCOTTY@GMAIL.COM           3
B2C_WEB_APP_VD               2 Scotty Hertz     SCOTTY@GMAIL.COM           3
B2C_WEB_APP_VD               3 Smith Watson     SMITH@YAHOO.COM            3
B2C_WEB_APP_VD               4 John Curt        JOHN@GMAIL.COM             3
B2C_WEB_APP_VD               5 Dalton X         DALTON@AOL.COM             3
B2C_WEB_APP_GE               1 Scotty Hertz     SCOTTY@GMAIL.COM           6
B2C_WEB_APP_GE               2 Sandeep John     SANDEEP@YAHOO.COM          6
B2C_WEB_APP_GE               3 Smith Curt       SMITH@YAHOO.COM            6
B2C_WEB_APP_GE               4 Orlondo Watson   ORLONDO@GMAIL.COM          6
B2C_WEB_APP_ZH               1 Maria Smith      MARIA@GMAIL.COM            7
B2C_WEB_APP_ZH               2 Smith Scotty     SMITH@YAHOO.COM            7


11 rows selected.

 

As we don’t like the email format (it’s ugly in the web interface), we are now going to release a “data” patch on top of release 1.1 in order to format the customers’ emails in a proper manner

SQL> connect pdbadmin/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> alter pluggable database application B2C_WEB_APP begin patch 1 minimum version '1.1' ;

Pluggable database APPLICATION altered.

SQL> update usr_b2c_web.customers set email = trim(lower(email)) ;

0 rows updated.

SQL> alter pluggable database application B2C_WEB_APP end patch 1 ;

Pluggable database APPLICATION altered.

SQL> select * from dba_app_patches;
APP_NAME        PATCH_NUMBER PATCH_MIN_VERSION   PATCH_STATUS   PATCH_COMMENT
B2C_WEB_APP                1 1.1                 INSTALLED

 

We now have a patch ready to clean up the email format, so we can deploy it on each market. Note that the update above reported “0 rows updated” because it ran in the application root, where the CUSTOMERS table holds no data; the statement is captured and will be replayed in each application PDB during the sync

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_VD
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

 

Let’s check if the data patch has been applied successfully and the email format is now OK for all markets

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> select con$name, c.*
  2  from containers ( CUSTOMERS ) c ;
CON$NAME           CUSTOMER_ID NAME             EMAIL                 CON_ID
B2C_WEB_APP_VD               1 Scotty Hertz     scotty@gmail.com           3
B2C_WEB_APP_VD               2 Scotty Hertz     scotty@gmail.com           3
B2C_WEB_APP_VD               3 Smith Watson     smith@yahoo.com            3
B2C_WEB_APP_VD               4 John Curt        john@gmail.com             3
B2C_WEB_APP_VD               5 Dalton X         dalton@aol.com             3
B2C_WEB_APP_ZH               1 Maria Smith      maria@gmail.com            7
B2C_WEB_APP_ZH               2 Smith Scotty     smith@yahoo.com            7
B2C_WEB_APP_GE               1 Scotty Hertz     scotty@gmail.com           6
B2C_WEB_APP_GE               2 Sandeep John     sandeep@yahoo.com          6
B2C_WEB_APP_GE               3 Smith Curt       smith@yahoo.com            6
B2C_WEB_APP_GE               4 Orlondo Watson   orlondo@gmail.com          6


11 rows selected.

 

A new feature has now been requested by the business. We need to upgrade the application to add a parameters table that will contain the USR_B2C_WEB application’s parameters, which must be shared across all application PDBs. In addition, each market wants to be able to add its own parameters without impacting the existing ones or the other markets.
We are going to use the “SHARING” attribute set to “EXTENDED DATA” for that table, which makes it possible to mix shared data stored in the master and local data stored in each PDB within the same table (deeper explanation and other sharing modes here).
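
Before running the upgrade, here is a minimal sketch of the other SHARING modes for comparison. The table names are hypothetical and these statements would be issued inside a begin/end install or upgrade block, just like the statements below; METADATA is usually the default, controlled by the DEFAULT_SHARING parameter:

SQL> -- METADATA: only the table definition is shared, each application PDB stores its own rows
SQL> create table usr_b2c_web.demo_metadata sharing = metadata ( id number ) ;
SQL> -- DATA: the definition and the rows live in the application root and are common to all application PDBs
SQL> create table usr_b2c_web.demo_data sharing = data ( id number ) ;
SQL> -- NONE: nothing is shared, the object only exists in the container where it is created
SQL> create table usr_b2c_web.demo_none sharing = none ( id number ) ;

Now, the actual 1.2 upgrade using EXTENDED DATA: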

SQL> connect pdbadmin/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> alter pluggable database application B2C_WEB_APP begin upgrade '1.1' to '1.2';

Pluggable database APPLICATION altered.

SQL> create table usr_b2c_web.settings sharing = extended data ( name varchar2(50), value varchar2(50) );

Table USR_B2C_WEB.SETTINGS created.

SQL> insert into usr_b2c_web.settings values ( 'compagny_name', 'wisdom IT' ) ;

1 row inserted.

SQL> insert into usr_b2c_web.settings values ( 'head_quarter_address', 'street village 34, 3819 Happiness, Switzerland' ) ;

1 row inserted.

SQL> commit ;

Commit complete.

SQL> alter pluggable database application B2C_WEB_APP end upgrade to '1.2';

Pluggable database APPLICATION altered.

 

Upgrade to 1.2 for all markets and addition of a local “market_name” parameter for each market

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_VD
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> insert into usr_b2c_web.settings values ( 'market_name', 'VAUD' ) ;

1 row inserted.

SQL> commit ;

Commit complete.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> insert into usr_b2c_web.settings values ( 'market_name', 'GENEVA' ) ;

1 row inserted.

SQL> commit ;

Commit complete.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> insert into usr_b2c_web.settings values ( 'market_name', 'ZURICH' ) ;

1 row inserted.

SQL> commit ;

Commit complete.

 

Now we check whether the shared parameters are available for all markets and whether each of them has a dedicated “market_name” value

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> select con$name, c.*
  2  from containers ( SETTINGS ) c ;

CON$NAME         NAME                   VALUE                                              CON_ID
B2C_WEB_CON      compagny_name          wisdom IT                                               4
B2C_WEB_CON      head_quarter_address   street village 34, 3819 Happiness, Switzerland          4
B2C_WEB_APP_GE   compagny_name          wisdom IT                                               6
B2C_WEB_APP_GE   head_quarter_address   street village 34, 3819 Happiness, Switzerland          6
B2C_WEB_APP_GE   market_name            GENEVA                                                  6
B2C_WEB_APP_VD   compagny_name          wisdom IT                                               3
B2C_WEB_APP_VD   head_quarter_address   street village 34, 3819 Happiness, Switzerland          3
B2C_WEB_APP_VD   market_name            VAUD                                                    3
B2C_WEB_APP_ZH   compagny_name          wisdom IT                                               7
B2C_WEB_APP_ZH   head_quarter_address   street village 34, 3819 Happiness, Switzerland          7
B2C_WEB_APP_ZH   market_name            ZURICH                                                  7

 

Looks all good.

Now the business needs to extend the startup activity to a new market area of Switzerland, so we are going to add a new pluggable database for that market. This market will immediately benefit from the latest application release.

SQL> alter session set container = B2C_WEB_CON ;

Session altered.

SQL> create pluggable database B2C_WEB_APP_ZG admin user pdbadmin identified by secret roles=(DBA) ;

Pluggable database B2C_WEB_APP_ZG created.

SQL> alter pluggable database B2C_WEB_APP_ZG open;

Pluggable database B2C_WEB_APP_ZG altered.

 

Let’s check with the parameters table whether all the data has been synchronized

SQL> select con$name, c.*
  2  from containers ( SETTINGS ) c ;
CON$NAME         NAME                   VALUE                                              CON_ID
B2C_WEB_APP_VD   compagny_name          wisdom IT                                               3
B2C_WEB_APP_VD   head_quarter_address   street village 34, 3819 Happiness, Switzerland          3
B2C_WEB_APP_VD   market_name            VAUD                                                    3
B2C_WEB_CON      compagny_name          wisdom IT                                               4
B2C_WEB_CON      head_quarter_address   street village 34, 3819 Happiness, Switzerland          4
B2C_WEB_APP_GE   compagny_name          wisdom IT                                               6
B2C_WEB_APP_GE   head_quarter_address   street village 34, 3819 Happiness, Switzerland          6
B2C_WEB_APP_GE   market_name            GENEVA                                                  6
B2C_WEB_APP_ZH   compagny_name          wisdom IT                                               7
B2C_WEB_APP_ZH   head_quarter_address   street village 34, 3819 Happiness, Switzerland          7
B2C_WEB_APP_ZH   market_name            ZURICH                                                  7
B2C_WEB_APP_ZG   compagny_name          wisdom IT                                               5
B2C_WEB_APP_ZG   head_quarter_address   street village 34, 3819 Happiness, Switzerland          5
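
The shared rows are already there; only the local “market_name” row is missing, since that is per-PDB data and not part of the application itself. It can be added exactly like for the other markets (a sketch, with ‘ZUG’ as an assumed value for the new market):

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_ZG
SQL> insert into usr_b2c_web.settings values ( 'market_name', 'ZUG' ) ;
SQL> commit ;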

 

I hope this post will help you understand how to implement Application Containers in real life. Please do not hesitate to contact us if you have any questions or require further information.

 

Cet article Oracle Application Container: a Swiss Use case est apparu en premier sur Blog dbi services.
