Feed aggregator

Updated Whats New whitepaper - 4.3.0.4.0

Anthony Shorten - Sun, 2017-08-06 17:40

The Whats New in FW4 whitepaper has been updated for the latest service pack release. This whitepaper is designed to summarize the major technology and functional changes implemented in the Oracle Utilities Application Framework from V2.2 up to the latest service pack. It is primarily of interest to customers upgrading from those earlier versions who want to understand what has changed and what is new in the framework since that early release.

The whitepaper is only a summary of selected enhancements, and it is still recommended to review the release notes of each release if you are interested in the details of everything that has changed. This whitepaper does not cover the changes to any of the products that use the Oracle Utilities Application Framework; refer to the release notes of the individual products for details of their new functionality.

The whitepaper is available as Whats New in FW4 (Doc Id: 1177265.1) from My Oracle Support.

Inserting data into a table

Tom Kyte - Sun, 2017-08-06 00:06
Hello, I am trying to insert data into a table, The only thing is it is of 20 years. I have already created a query. The query is in a good shape but the only thing missing in my query is the dates. Below is my query. I want LV_START_DATE as 201...
Categories: DBA Blogs

orcl

Tom Kyte - Sun, 2017-08-06 00:06
I have table employee...which has two columns like....Name and Id... create table employee(name varchar2(10),id number); Insert into employee values('A',1); Insert into employee values('B',2); Insert into employee values('C',3); Name...
Categories: DBA Blogs

Postgres vs. Oracle access paths IV – Order By and Index

Yann Neuhaus - Sat, 2017-08-05 15:00

I realize that I’m talking about indexes in Oracle and Postgres, and haven’t yet mentioned the best website you can find about indexes, with concepts and examples for all RDBMS: http://use-the-index-luke.com. You will probably learn a lot about SQL design there. Now let’s continue on execution plans with indexes.

As we have seen two posts ago, an index can be used even with 100% selectivity (all rows), when we don’t filter any rows. Oracle has INDEX FAST FULL SCAN, which is the fastest, reading blocks sequentially as they come. But this doesn’t follow the B*Tree leaf chain and does not return the rows in the order of the index. However, there is also the possibility to read the leaf blocks in index order, with INDEX FULL SCAN, using random reads instead of multiblock reads.
It is similar to the Index Only Scan of Postgres except that there is no need to get to the table to filter out uncommitted changes. Oracle reads the transaction table to get the visibility information, and goes to undo records if needed.

The previous post had a query with a ‘where n is not null’ predicate to be sure that all rows have an entry in the Oracle index (single-column B*Tree indexes do not contain entries for NULL values), and we will continue on this by adding an ORDER BY.

For this post, I’ve increased the size of the column N in the Oracle table by adding 1/3 to each number. I did this for this post only, and for the Oracle table only. The index on N is now 45 blocks instead of 20. The reason is to show what happens when the cost of the ORDER BY is high. I didn’t change the Postgres table because there is only one way to scan the index there, and the result is always sorted.
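
For reference, the change was presumably done with something along these lines (the exact statement is an assumption, it is not shown in the post); storing a fractional NUMBER takes more bytes per value, which is what inflates the index:

update demo1 set n = n + 1/3;
commit;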

Oracle Index Fast Full Scan vs. Index Full Scan


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID dbck3rgnqbakg, child number 0
-------------------------------------
select /*+ */ n from demo1 where n is not null order by n
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 46 (100)| 10000 |00:00:00.01 | 48 |
| 1 | INDEX FULL SCAN | DEMO1_N | 1 | 10000 | 46 (0)| 10000 |00:00:00.01 | 48 |
---------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "N"[NUMBER,22]

Index Full Scan, the random-read version of index access, is chosen here by the Oracle optimizer because we want the result ordered by the column N and the index can provide this without an additional sort.

We can force the optimizer to do multiblock reads with the INDEX_FFS hint:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID anqfbf5caat2a, child number 0
-------------------------------------
select /*+ index_ffs(demo1) */ n from demo1 where n is not null order
by n
-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 82 (100)| 10000 |00:00:00.01 | 51 | | | |
| 1 | SORT ORDER BY | | 1 | 10000 | 82 (2)| 10000 |00:00:00.01 | 51 | 478K| 448K| 424K (0)|
| 2 | INDEX FAST FULL SCAN| DEMO1_N | 1 | 10000 | 14 (0)| 10000 |00:00:00.01 | 51 | | | |
-----------------------------------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=1) "N"[NUMBER,22] 2 - "N"[NUMBER,22]

The estimated cost is higher: the index read is cheaper (cost=14 instead of 46) but then the sort operation brings this to 82. We can see additional columns in the execution plan here because the sorting operation needs a workarea in memory (estimated 478K, actually 424K used during the execution). Note that the multiblock read has a few blocks of overhead (reads 51 blocks instead of 48) because it has to read the segment header to identify the extents to scan.

Postgres Index Only Scan

In PostgreSQL there’s only one way to scan indexes: random reads by following the chain of leaf blocks. This returns the rows in the order of the index and does not require an additional sort:


explain (analyze,verbose,costs,buffers) select n from demo1 where n is not null order by n ;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------
Index Only Scan using demo1_n on public.demo1 (cost=0.29..295.29 rows=10000 width=4) (actual time=0.125..1.277 rows=10000 loops=1)
Output: n
Index Cond: (demo1.n IS NOT NULL)
Heap Fetches: 0
Buffers: shared hit=30
Planning time: 0.532 ms
Execution time: 1.852 ms

In the previous posts, we have seen a cost of 0.29..270.29 for the Index Only Scan. Here we have an additional cost of 25 coming from cpu_operator_cost because I’ve added the ‘where n is not null’ predicate. As the default constant is 0.0025, this is the query planner estimating the evaluation of the predicate for the 10000 rows.
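
To make the arithmetic explicit, here is a minimal sketch (assuming the default planner constant, which you can check with SHOW):

show cpu_operator_cost;                      -- 0.0025 by default
-- Index Only Scan without the predicate (previous post):  0.29..270.29
-- evaluating "n is not null" on 10000 rows:  10000 * 0.0025 = 25
-- 270.29 + 25 = 295.29, the total cost reported above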

First Rows

The Postgres cost always shows two values. The first one is the startup cost: the cost just before being able to return the first row. Some operations have a very small startup cost; others have blocking operations that must finish before sending their first result rows. Here, as we have no sort operation, the first row retrieved from the index can be returned immediately and the startup cost is small: 0.29.
In Oracle you can see the initial cost by optimizing the plan to retrieve the first row, with the FIRST_ROWS() hint:


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0fjk9vv4g1q1w, child number 0
-------------------------------------
select /*+ first_rows(1) */ n from demo1 where n is not null order by
n
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 (100)| 10000 |00:00:00.01 | 48 |
| 1 | INDEX FULL SCAN | DEMO1_N | 1 | 10000 | 2 (0)| 10000 |00:00:00.01 | 48 |
---------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "N"[NUMBER,22]

The actual number of blocks read (48) is the same as before because I finally fetched all rows, but the cost is small because it was estimated for two rows only. Of course, we can also tell Postgres or Oracle that we want only the first rows. This is for the next post.

Character strings

The previous example is an easy one because the column N is a number, and both Oracle and Postgres store numbers in a binary format that follows the same order as the numbers. But that’s different with character strings. If you are not in America, there is very little chance that the order you want to see follows the ASCII order. Here I’ve run a similar query but using the column X instead of N, which is a text (VARCHAR2 in Oracle):

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID fsqk4fg1t47v5, child number 0
-------------------------------------
select /*+ */ x from demo1 where x is not null order by x
--------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2493 (100)| 10000 |00:00:00.27 | 1644 | 18 | | | |
| 1 | SORT ORDER BY | | 1 | 10000 | 2493 (1)| 10000 |00:00:00.27 | 1644 | 18 | 32M| 2058K| 29M (0)|
|* 2 | INDEX FAST FULL SCAN| DEMO1_X | 1 | 10000 | 389 (0)| 10000 |00:00:00.01 | 1644 | 18 | | | |
--------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("X" IS NOT NULL)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=1) NLSSORT("X",'nls_sort=''FRENCH''')[2000], "X"[VARCHAR2,1000] 2 - "X"[VARCHAR2,1000]

I have created an index on X, and as you can see it can be used to get all X values, but with an Index Fast Full Scan, the multiblock index-only access which is fast but does not return rows in the order of the index. A sort operation is then applied. I can force an Index Full Scan with the INDEX() hint, but the sort will still have to be done.
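
As a sketch of what forcing it would look like (the output is not reproduced here; the index name DEMO1_X is the one visible in the plan above):

select /*+ index(demo1 demo1_x) */ x from demo1 where x is not null order by x;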

The reason can be seen in the Column Projection Information. My Oracle client application is running on a laptop where the OS is in French, and Oracle returns the setting according to what the end-user can expect. This is National Language Support. An Oracle database can be accessed by users all around the world and they will see ordered lists, date formats, decimal separators, etc. according to their country and language.

ORDER BY … COLLATE …

My databases have been created on a system which is in English. In Postgres we can get results sorted in French with the COLLATE option of ORDER BY:


explain (analyze,verbose,costs,buffers) select x from demo1 where x is not null order by x collate "fr_FR" ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=5594.17..5619.17 rows=10000 width=1036) (actual time=36.163..37.254 rows=10000 loops=1)
Output: x, ((x)::text)
Sort Key: demo1.x COLLATE "fr_FR"
Sort Method: quicksort Memory: 1166kB
Buffers: shared hit=59
-> Index Only Scan using demo1_x on public.demo1 (cost=0.29..383.29 rows=10000 width=1036) (actual time=0.156..1.559 rows=10000 loops=1)
Output: x, x
Index Cond: (demo1.x IS NOT NULL)
Heap Fetches: 0
Buffers: shared hit=52
Planning time: 0.792 ms
Execution time: 38.264 ms

Same idea here as in Oracle: there is an additional sort operation, which is a blocking operation that needs to be completed before being able to return the first row.

The detail of the cost is the following (the figures are summed up in the sketch after the list):

  • The index on the column X has 52 blocks, which is estimated at cost=208 (random_page_cost=4)
  • We have 10000 index entries to process, estimated at cost=50 (cpu_index_tuple_cost=0.005)
  • We have 10000 result rows to process, estimated at cost=100 (cpu_tuple_cost=0.01)
  • We have evaluated 10000 ‘is not null’ conditions, estimated at cost=25 (cpu_operator_cost=0.0025)
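
Putting these together with the startup cost reproduces the total reported for the Index Only Scan above:

  52 * 4 = 208, 10000 * 0.005 = 50, 10000 * 0.01 = 100, 10000 * 0.0025 = 25
  0.29 + 208 + 50 + 100 + 25 = 383.29   (the cost=0.29..383.29 shown in the plan)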

In Oracle we can use the same COLLATE syntax, but the name of the language is different: it is consistent across platforms rather than using the OS locale name:


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 82az4syppyndf, child number 0
-------------------------------------
select /*+ */ x from demo1 where x is not null order by x collate "French"
-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2493 (100)| 10000 |00:00:00.28 | 1644 | | | |
| 1 | SORT ORDER BY | | 1 | 10000 | 2493 (1)| 10000 |00:00:00.28 | 1644 | 32M| 2058K| 29M (0)|
|* 2 | INDEX FAST FULL SCAN| DEMO1_X | 1 | 10000 | 389 (0)| 10000 |00:00:00.01 | 1644 | | | |
-----------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("X" IS NOT NULL)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=1) NLSSORT("X" COLLATE "French",'nls_sort=''FRENCH''')[2000], "X"[VARCHAR2,1000] 2 - "X"[VARCHAR2,1000]

In Oracle, we do not need to use the COLLATE option. The language can be set for the session (NLS_LANGUAGE=’French’) or from the environment (NLS_LANG=’=French_.’). Oracle can share cursors across sessions (to avoid wasting resources compiling and optimizing the same statements used by different sessions) but will not share execution plans among different NLS environments because, as we have seen, the plan can be different. Postgres does not have to manage that because each PREPARE statement does a full compilation and optimization. There is no cursor sharing in Postgres.
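
A minimal sketch of the session-level alternative (assuming the same demo1 table; NLS_LANGUAGE implicitly sets NLS_SORT, so the ORDER BY is then sorted with the French collation without a COLLATE clause):

alter session set nls_language='French';
select /*+ */ x from demo1 where x is not null order by x;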

Indexing for different languages

We have seen in the Oracle execution plan Column Projection Information that an NLSSORT operation is applied on the column to get a value that follows the collation order of the language. We have seen in the previous post that we can index a function on a column. So we have the possibility to create an index for different languages. The following index will be used to avoid the sort for French users:

create index demo1_x_fr on demo1(nlssort(x,'NLS_SORT=French'));

Since 12cR2 we can create the same with the COLLATE syntax:

create index demo1_x_fr on demo1(x collate "French");

Both syntaxes create the same index, which can be used by queries with ORDER BY … COLLATE or by sessions that set NLS_LANGUAGE:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 82az4syppyndf, child number 0
-------------------------------------
select /*+ */ x from demo1 where x is not null order by x collate "French"
-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 4770 (100)| 10000 |00:00:00.02 | 4772 |
|* 1 | TABLE ACCESS BY INDEX ROWID| DEMO1 | 1 | 10000 | 4770 (1)| 10000 |00:00:00.02 | 4772 |
| 2 | INDEX FULL SCAN | DEMO1_X_FR | 1 | 10000 | 3341 (1)| 10000 |00:00:00.01 | 3341 |
-----------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("X" IS NOT NULL)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "X"[VARCHAR2,1000] 2 - "DEMO1".ROWID[ROWID,10], "DEMO1"."SYS_NC00004$"[RAW,2000]

There’s no sort operation here as the INDEX FULL SCAN returns the rows in order.

PostgreSQL has the same syntax:

create index demo1_x_fr on demo1(x collate "fr_FR");

and then the query can use this index and bypass the sort operation:

explain (analyze,verbose,costs,buffers) select x from demo1 where x is not null order by x collate "fr_FR" ;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------
Index Only Scan using demo1_x_fr on public.demo1 (cost=0.29..383.29 rows=10000 width=1036) (actual time=0.190..1.654 rows=10000 loops=1)
Output: x, x
Index Cond: (demo1.x IS NOT NULL)
Heap Fetches: 0
Buffers: shared hit=32 read=20
Planning time: 1.049 ms
Execution time: 2.304 ms

Avoiding a sort operation can really improve the performance of queries in two ways: it saves the resources required by the sort (which will have to spill to disk when the workarea does not fit in memory), and it avoids a blocking operation, so the first rows can be returned quickly.

We have seen how indexes can be used to access a subset of columns from a smaller structure, and how they can be used to access a sorted version of the rows. Future posts will show how the index access is used to quickly filter a subset of rows. But for the moment I’ll continue on this blocking operation. We have seen a lot of Postgres costs, and they have two values (startup cost and total cost). More on startup cost in the next post.

 

The post Postgres vs. Oracle access paths IV – Order By and Index appeared first on Blog dbi services.

From idea to app or how I do an Oracle APEX project anno 2017

Dimitri Gielis - Sat, 2017-08-05 11:30
For a long time I had in mind to write in great detail how I do an Oracle APEX project from A to Z. But so far I never took the time to actually do it, until today :)

So here's the idea; I love building projects that help people and I love to share what I know, so I will combine both. I will write down exactly my thoughts and the things I do as I'm moving along with this project, so you have full insight into what's happening behind the scenes.
Background

Way back, in the year 1999, I built an application in Visual Basic to help children study the multiplication tables. My father was a math teacher and taught people who wanted to become primary school teachers. While visiting primary schools, he saw that children had difficulties automating the multiplications from 1 to 10, so together we thought about how we could help them. That is how the Visual Basic application was born. I don't have a working example of the program anymore, but I found some paper prints from that time, which you see here:



We are now almost 20 years later, and last year my son had difficulties memorizing the multiplication tables too. I tried sitting next to him and helping him out, but when things don't go as smoothly as you hope... You have to stay calm and supportive, but I found it hard, especially when there are two other children crying for attention too or you've had a rough day yourself... In a way I felt frustrated because I didn't know how to help further in the time I had. At some point I thought about the program I wrote way back then and decided to quickly build a web app that would allow him to train himself. And to make it more fun for him, I told him I would exercise too, so he saw it was doable :)

At KScope16 I showed this web app during Open Mic Night; it was far from fancy, but it did the job.
Here's a quick demo:



Some people recognized my story and asked if I could put the app online. I just built the app quickly for my son, so it needs some more work to make it accessible for others.
During my holidays, I decided I should really treat this project as a real one, otherwise it would never happen, so here we are, that is what I'm going to do and I'll write about it in detail :)
Idea - our requirement

The application helps children (typically between 7 and 11 years old) to automate multiplications between 1 and 10. It also helps their parents to get insight into the timings and mistakes of their children's multiplications.
Timeline

No project without a deadline, so I've set my go-production date to August 20th, 2017. That gives me about 2 weeks, typically one sprint in our projects.
Following along and feedback

I will tweet, blog and create some videos to show my progress. You can follow along and reach me on any of those channels. If you have any questions, tips or remarks during the development, don't hesitate to add a comment. I always welcome new ideas or insights and am happy to go into more detail if something is not clear.
High level break-down of plan for the following days
  • Create user stories and supporting ERD
  • List of the tools I use and why I use them
  • Set up the development environment
  • Create the Oracle database objects
  • Set up a domain name
  • Set up reverse proxy and https
  • Create a landing page and communicate
  • Build the Oracle APEX application: the framework
  • Refine the APEX app: create custom authentication
  • Refine the APEX app: adding the game
  • Refine the APEX app: improve the flow and navigation
  • Refine the APEX app: add ability to print results to PDF
  • Set up build process
  • Check security
  • Communicate first version of the app to registered people
  • Check performance
  • Refine the APEX app: add more reports and statistics
  • Check and reply to feedback
  • Set up automated testing
  • A word on debugging
  • Refine the APEX app: making final changes
  • Set up backups
  • Verify documentation and lessons learned
  • Close the loop and Celebrate :)
So now, let's get started ...
Categories: Development

12c MultiTenant Posts -- 7 : Adding Custom Service to PDB (nonRAC/GI)

Hemant K Chitale - Sat, 2017-08-05 10:20
Earlier I have already demonstrated adding and managing custom services in a RAC environment in a blog post and a video.

But what if you are running Single Instance and not using Grid Infrastructure?  The srvctl command in Grid Infrastructure is what you'd use to add and manage services in RAC and Oracle Restart environments.  But without Grid Infrastructure, you can fall back on DBMS_SERVICE.

The DBMS_SERVICE API has been available since Oracle 8i -- when Services were introduced.

Here is a quick demo of some facilities with DBMS_SERVICE.

1.  Adding a Custom Service into a PDB :

$sqlplus system/oracle@NEWPDB

SQL*Plus: Release 12.2.0.1.0 Production on Sat Aug 5 22:52:21 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Last Successful login time: Mon Jul 10 2017 22:22:30 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> show con_id

CON_ID
------------------------------
4
SQL>
SQL> execute dbms_service.create_service('HR','HR');

PL/SQL procedure successfully completed.

SQL> execute dbms_service.start_service('HR');

PL/SQL procedure successfully completed.

SQL>


Connecting to the service via tnsnames.
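
The HR alias used here is resolved through the client's tnsnames.ora; an entry along these lines is assumed (the host name and port are placeholders, they are not shown in this post):

HR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db_server_01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = HR)
    )
  )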

SQL> connect hemant/hemant@HR
Connected.
SQL> show con_id

CON_ID
------------------------------
4
SQL>


2.  Disconnecting all connected users on the Service

$sqlplus system/oracle@NEWPDB

SQL*Plus: Release 12.2.0.1.0 Production on Sat Aug 5 23:02:47 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Last Successful login time: Sat Aug 05 2017 23:02:28 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>
SQL> execute dbms_service.disconnect_session(-
> service_name=>'HR',disconnect_option=>DBMS_SERVICE.IMMEDIATE);

PL/SQL procedure successfully completed.

SQL>
In the HEMANT session connected to HR :
SQL> show con_id
ERROR:
ORA-03113: end-of-file on communication channel
Process ID: 5062
Session ID: 67 Serial number: 12744


SP2-1545: This feature requires Database availability.
SQL>


(Instead of DBMS_SERVICE.IMMEDIATE, we could also specify DBMS_SERVICE.POST_TRANSACTION).


3.  Shutting down a Service without closing the PDB :

SQL> execute dbms_service.stop_service('HR');

PL/SQL procedure successfully completed.

SQL>
SQL> connect hemant/hemant@HR
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Warning: You are no longer connected to ORACLE.
SQL>


Does restarting the Database restart this custom service?

SQL> connect / as sysdba
Connected.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 838860800 bytes
Fixed Size 8798312 bytes
Variable Size 343936920 bytes
Database Buffers 478150656 bytes
Redo Buffers 7974912 bytes
Database mounted.
Database opened.
SQL> alter pluggable databas all open;
alter pluggable databas all open
*
ERROR at line 1:
ORA-02000: missing DATABASE keyword


SQL> alter pluggable database all open;

Pluggable database altered.

SQL> connect hemant/hemant@NEWPDB
Connected.
SQL> connect hemant/hemant@HR
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Warning: You are no longer connected to ORACLE.
SQL>
SQL> connect system/oracle@NEWPDB
Connected.
SQL> execute dbms_service.start_service('HR');

PL/SQL procedure successfully completed.

SQL> connect hemant/hemant@HR
Connected.
SQL>


I had to reSTART this custom service ('HR') after the PDB was OPENed.
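
If you want the service to come back automatically when the PDB opens, one possible workaround (a sketch, not tested in this post) is a STARTUP trigger created inside the PDB itself -- database-level startup triggers defined in a PDB fire when that PDB is opened:

-- connected to the PDB (e.g. system@NEWPDB)
-- assumes the trigger owner has a direct EXECUTE grant on DBMS_SERVICE
-- (role grants do not apply inside definer's rights triggers)
create or replace trigger start_hr_service
after startup on database
begin
  dbms_service.start_service('HR');
end;
/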

Services are a facility that has been available since 8i, even outside OPS.  However, Services were apparently only being used by most sites in RAC environments.

Services allow you to run multiple "applications" (each application advertised as a Service) within the same (one) database.

Note that, in a RAC environment, srvctl configuration of Services can configure auto-restart of the Service.
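
For comparison, with Grid Infrastructure the same service would be registered roughly like this (a sketch; the database, PDB and instance names are placeholders):

srvctl add service -db CDB1 -service HR -pdb NEWPDB -preferred CDB11,CDB12
srvctl start service -db CDB1 -service HR
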
.
.
.

Categories: DBA Blogs

Encryption of shell scripts

Yann Neuhaus - Sat, 2017-08-05 07:52

In this blog, I will talk about the encryption of files and in particular the encryption of a shell script, because that was my use case. Before starting, some people may say/think that you shouldn’t encrypt any scripts and I generally agree with that BUT I still think that there might be some exceptions. I will not debate this further but I found the encryption subject very interesting so I thought I would write a small blog with my thoughts.

 

Encryption?

So, when we talk about encryption, what is it exactly? There are actually two not-so-different concepts that people often mix up: encryption and obfuscation. Encryption is a technique to keep information confidential by changing its form so that it becomes unreadable. Obfuscation, on the other hand, refers to protecting something by trying to hide it, converting it into something more difficult to read, but not completely unreadable. The main difference is that if you know what technique was used to encrypt something, you still cannot decrypt it without the key, while you can remove obfuscation if you know how it was done.

The reason why I’m including this small paragraph in this blog is that when I was searching for a way to encrypt a shell script in Linux, I read a LOT of blogs and websites that just got it wrong… The problem with encrypted shell scripts is that at some point, the Operating System will need to know which commands should be executed. So, at some point, the script will need to be decrypted.

 

Shell script

So, let’s start with the creation of a test shell script that I will use for the rest of this blog. I’m creating a small, very simple test script which contains a non-encrypted password that I need to enter correctly in order to get an exit code of 0. If the password is wrong, after 3 tries, I should get an exit code of 1. Please note that if the shell script contains interactions, then you need to use the redirection from tty (“< /dev/tty”) like I did in my example.

Below I’m displaying the content of this script and using it, without encryption, to show you the output. Please note that in my scripts, I included colors (green for INFO and OK, yellow for WARN and red for ERROR messages) which aren’t displayed in the blog… Sorry about that but I can’t add colors to the blog unfortunately!

[morgan@linux_server_01 ~]$ cat test_script.sh
#!/bin/bash
#
# File: test_script.sh
# Purpose: Shell script to test the encryption solutions
# Author: Morgan Patou (dbi services)
# Version: 1.0 29-Jul-2017
#
###################################################

### Defining colors & execution folder
red_c="33[31m"
yellow_c="33[33m"
green_c="33[32m"
end_c="33[m"
script_folder=`which ${0}`
script_folder=`dirname ${script_folder}`

### Verifying password
script_password="TestPassw0rd"
echo
echo -e "${green_c}INFO${end_c} - This file is a test script to test the encryption solutions."
echo -e "${green_c}INFO${end_c} - Entering the correct password will return an exit code of 0."
echo -e "${yellow_c}WARN${end_c} - Entering the wrong password will return an exit code of 1."
echo
retry_count=0
retry_max=3
while [ "${retry_count}" -lt "${retry_max}" ]; do
  echo
  read -p "  ----> Please enter the password to execute this script: " entered_password < /dev/tty
  if [[ "${entered_password}" == "${script_password}" ]]; then
    echo -e "${green_c}OK${end_c} - The password entered is the correct one."
    exit 0
  else
    echo -e "${yellow_c}WARN${end_c} - The password entered isn't the correct one. Please try again."
    retry_count=`expr ${retry_count} + 1`
  fi
done

echo -e "${red_c}ERROR${end_c} - Too many failed attempts. Exiting."
exit 1

[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ chmod 700 test_script.sh
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ ./test_script.sh

INFO - This file is a test script to test the encryption solutions.
INFO - Entering the correct password will return an exit code of 0.
WARN - Entering the wrong password will return an exit code of 1.


  ----> Please enter the password to execute this script: Password1
WARN - The password entered isn't the correct one. Please try again.

  ----> Please enter the password to execute this script: Password2
WARN - The password entered isn't the correct one. Please try again.

  ----> Please enter the password to execute this script: Password3
WARN - The password entered isn't the correct one. Please try again.
ERROR - Too many failed attempts. Exiting.
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ echo $?
1
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ ./test_script.sh

INFO - This file is a test script to test the encryption solutions.
INFO - Entering the correct password will return an exit code of 0.
WARN - Entering the wrong password will return an exit code of 1.


  ----> Please enter the password to execute this script: TestPassw0rd
OK - The password entered is the correct one.
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ echo $?
0
[morgan@linux_server_01 ~]$

 

As you can see above, the script is doing what I expect it to do so that’s fine.

 

SHc?

So, what is SHc? Is it really a way to encrypt your shell scripts?

Simple answer: I would NOT use SHc for that. I don’t have anything against SHc; it is actually a utility that might be useful, but from my point of view it’s clearly not a good solution for encrypting a shell script.

 

SHc is a utility (check its website) that – from a shell script – will create a C source code which represents it, encrypted using an RC4 algorithm. This C source code contains a random structure as well as the decryption method. It is then compiled to create a binary file. The problem with SHc is that the binary file contains the original shell script (encrypted) but also the decryption material, because this is needed to execute it. So, let’s install this utility:

[morgan@linux_server_01 ~]$ wget http://www.datsi.fi.upm.es/~frosal/sources/shc-3.8.9b.tgz
--2017-07-29 14:10:14--  http://www.datsi.fi.upm.es/~frosal/sources/shc-3.8.9b.tgz
Resolving www.datsi.fi.upm.es... 138.100.9.22
Connecting to www.datsi.fi.upm.es|138.100.9.22|:80... connected.
Proxy request sent, awaiting response... 200 OK
Length: 20687 (20K) [application/x-gzip]
Saving to: “shc-3.8.9b.tgz”

100%[===================================================================>] 20,687      --.-K/s   in 0.004s

2017-07-29 14:10:14 (5.37 MB/s) - “shc-3.8.9b.tgz” saved [20687/20687]

[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ tar -xvzf shc-3.8.9b.tgz
shc-3.8.9b/CHANGES
shc-3.8.9b/Copying
shc-3.8.9b/match
shc-3.8.9b/pru.sh
shc-3.8.9b/shc-3.8.9b.c
shc-3.8.9b/shc.c
shc-3.8.9b/shc.1
shc-3.8.9b/shc.README
shc-3.8.9b/shc.html
shc-3.8.9b/test.bash
shc-3.8.9b/test.csh
shc-3.8.9b/test.ksh
shc-3.8.9b/makefile
shc-3.8.9b/testit
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ cd shc-3.8.9b/
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ make
cc -Wall  shc.c -o shc
***     Do you want to probe shc with a test script?
***     Please try...   make test
[morgan@linux_server_01 shc-3.8.9b]$

 

At this point, I only built the utility locally because I will be removing it shortly. Now, let’s “encrypt” the file using shc:

[morgan@linux_server_01 shc-3.8.9b]$ cp ../test_script.sh ./
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ ls test_script*
test_script.sh
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ ./shc -f test_script.sh
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ ls test_script*
test_script.sh  test_script.sh.x  test_script.sh.x.c
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ # Removing the C source code and original script
[morgan@linux_server_01 shc-3.8.9b]$ rm test_script.sh test_script.sh.x.c
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ # Renaming the "encrypted" file to .bin
[morgan@linux_server_01 shc-3.8.9b]$ mv test_script.sh.x test_script.bin
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ ls test_script*
test_script.bin
[morgan@linux_server_01 shc-3.8.9b]$

 

So above, I used shc and it created two files:

  • test_script.sh.x => This is the C compiled file which can then be executed. I renamed it to test_script.bin to really see the differences between the files
  • test_script.sh.x.c => This is the C source code which I removed since I don’t need it

 

At this point, if you try to view the content of the .bin file (previously test_script.sh.x), you will not be able to see the real content; you will see something that looks like a real binary executable. To see its “binary” content, you can use the “strings” command, which will display all readable (printable) words from the file, and you will see that we cannot see the password or any commands from the original shell script. So, at first look, it seems to be a success: the shell script seems to be encrypted:

[morgan@linux_server_01 shc-3.8.9b]$ strings test_script.bin
/lib64/ld-linux-x86-64.so.2
__gmon_start__
libc.so.6
sprintf
perror
__isoc99_sscanf
fork
...
EcNB
,qIB`^
gLSI
U)L&
fX4u
j[5,
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ ./test_script.bin
 
INFO - This file is a test script to test the encryption solutions.
INFO - Entering the correct password will return an exit code of 0.
WARN - Entering the wrong password will return an exit code of 1.
 
 
  ----> Please enter the password to execute this script: Password1
WARN - The password entered isn't the correct one. Please try again.
 
  ----> Please enter the password to execute this script: TestPassw0rd
OK - The password entered is the correct one.
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ echo $?
0
[morgan@linux_server_01 shc-3.8.9b]$

 

So, what is the issue with SHc? Why am I saying that this isn’t a suitable encryption solution? Well, that’s because you can always just strip the text out of the file or substitute another shell for the normal one in order to grab the text when it runs. There are also several projects on GitHub (like UnSHc) which will allow you to retrieve the original content of the shell script and to revert the changes done by SHc. This works because the content of the bin file is predictable and can be analysed in order to decrypt it. So, that’s not really a good solution I would say.

There are a lot of ways to see the original content of a file encrypted by SHc. One of them is just checking the list of processes: you will see that the original shell script is actually passed as a parameter to the binary file in this format: ./test_script.bin -c   <<<a lot of spaces>>>    <<<script_unencrypted_newlines_separated_by_’?’>>>. See my example below:

[morgan@linux_server_01 shc-3.8.9b]$ ./test_script.bin& (ps -ef | grep "test_script.bin" | grep -v grep > test_decrypt_content.sh)
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ # The real file is in 1 line only. For readability on the blog, I split that in several lines 
[morgan@linux_server_01 shc-3.8.9b]$ cat test_decrypt_content.sh
405532   20125  2024  0 16:18 pts/3    00:00:00 ./test_script.bin -c                                                                              
                                                                                                                                                  
                                                                                                                                                  
                                                                                                                                                  
                                                                                                                                                  
                                                                                                                                                  
#!/bin/bash?#?# File: test_script.sh?# Purpose: Shell script to test the encryption solutions?# Author: Morgan Patou (dbi services)?# Version: 1.029-Jul-2017?
#?###################################################??### Defining colors & execution folder?red_c="\033[31m"?yellow_c="\033[33m"?green_c="\033[32m"?end_c="\
033[m"?script_folder=`which ${0}`?script_folder=`dirname ${script_folder}`??### Verifying password?script_password="TestPassw0rd"?echo?echo -e "${green_c}INFO
${end_c} - This file is a test script to test the encryption solutions."?echo -e "${green_c}INFO${end_c} - Entering the correct password will return an exit c
ode of 0."?echo -e "${yellow_c}WARN${end_c} - Entering the wrong password will return an exit code of 1."?echo?retry_count=0?retry_max=3?while [ "${retry_coun
t}" -lt "${retry_max}" ]; do?  echo?  read -p "  ----> Please enter the password to execute this script: " entered_password < /dev/tty?  if [[ "${entered_pass
word}" == "${script_password}" ]]; then?    echo?    echo -e "${green_c}OK${end_c} - The password entered is the correct one."?    exit 0?  else?    echo -e "
${yellow_c}WARN${end_c} - The password entered isn't the correct one. Please try again."?    retry_count=`expr ${retry_count} + 1`?  fi?done??echo -e "${red_c
}ERROR${end_c} - Too many failed attempts. Exiting."?exit 1?? ./test_script.bin
[morgan@linux_server_01 shc-3.8.9b]$

 

As you can see above, the whole content of the original shell script is displayed by the “ps” command. Not very hard to find out what the original content is… With a pretty simple command, we can even reformat the original file:

[morgan@linux_server_01 shc-3.8.9b]$ sed -i -e 's,?,\n,g' -e 's,.*     [[:space:]]*,,' test_decrypt_content.sh
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ cat test_decrypt_content.sh
#!/bin/bash
#
# File: test_script.sh
# Purpose: Shell script to test the encryption solutions
# Author: Morgan Patou (dbi services)
# Version: 1.0 29-Jul-2017
#
###################################################

### Defining colors & execution folder
red_c="33[31m"
yellow_c="33[33m"
green_c="33[32m"
end_c="33[m"
script_folder=`which ${0}`
script_folder=`dirname ${script_folder}`

### Verifying password
script_password="TestPassw0rd"
echo
echo -e "${green_c}INFO${end_c} - This file is a test script to test the encryption solutions."
echo -e "${green_c}INFO${end_c} - Entering the correct password will return an exit code of 0."
echo -e "${yellow_c}WARN${end_c} - Entering the wrong password will return an exit code of 1."
echo
retry_count=0
retry_max=3
while [ "${retry_count}" -lt "${retry_max}" ]; do
  echo
  read -p "  ----> Please enter the password to execute this script: " entered_password < /dev/tty
  if [[ "${entered_password}" == "${script_password}" ]]; then
    echo -e "${green_c}OK${end_c} - The password entered is the correct one."
    exit 0
  else
    echo -e "${yellow_c}WARN${end_c} - The password entered isn't the correct one. Please try again."
    retry_count=`expr ${retry_count} + 1`
  fi
done

echo -e "${red_c}ERROR${end_c} - Too many failed attempts. Exiting."
exit 1

 ./test_script.bin
[morgan@linux_server_01 shc-3.8.9b]$

 

And voila, with two very simple commands, it is possible to retrieve the original file with its original formatting too (just remove the final line, which is the call to the script itself). Please also note that if the original script contains some ‘?’ characters, they will also be replaced with a newline, but that’s spotted pretty easily. With shell options, you can also just ask your shell to print all commands that it executes, so again, without even additional commands, you can see the content of the binary file.

 

What solution then?

For this section, I will re-use the same un-encrypted shell script (test_script.sh). So, what can be done to really protect a shell script? Well, there are no perfect solutions because, as I said previously, at some point the OS will need to know which commands should be executed and for that purpose, it needs to be decrypted. There are a few ways to encrypt a shell script but the simplest would probably be to use openssl because it’s quick, it’s free and it’s portable without having to install anything since openssl is usually already there on Linux. Also, it allows you to choose the encryption algorithm you want to use. To encrypt the base file, I created a small shell script which I named “encrypt_script.sh”. This shell script takes as a first parameter the input file, which is the un-encrypted original file, and as a second parameter the output file which will contain the encrypted version:

[morgan@linux_server_01 shc-3.8.9b]$ cd ..
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ ls
encrypt_script.sh shc-3.8.9b  shc-3.8.9b.tgz  test_script.sh
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ rm -rf shc-3.8.9b*
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ cat encrypt_script.sh
#!/bin/bash
#
# File: encrypt_script.sh
# Purpose: Script to encrypt a shell script and provide the framework around it for execution
# Author: Morgan Patou (dbi services)
# Version: 1.0 26/03/2016
#
###################################################

### Defining colors & execution folder
green_c="33[32m"
end_c="33[m"
script_folder="`which ${0}`"
script_folder="`dirname ${script_folder}`"
encryption="aes-256-cbc"

### Help
if [[ ${#} != 2 ]]; then
  echo -e "`basename ${0}`: usage: ${green_c}`basename ${0}`${end_c} <${green_c}shell_script_to_encrypt${end_c}> <${green_c}encrypted_script${end_c}>"
  echo -e "\t<${green_c}shell_script_to_encrypt${end_c}>  : Name of the shell script to encrypt. Must be placed under '${green_c}${script_folder}${end_c}'"
  echo -e "\t<${green_c}encrypted_script${end_c}>         : Name of the encrypted script to be created. The file will be created under '${green_c}${script_folder}${end_c}'"
  echo
  exit 1
else
  shell_script_to_encrypt="${1}"
  encrypted_script="${2}"
fi

### Encrypting the input file into a temp file
openssl enc -e -${encryption} -a -A -in "${script_folder}/${shell_script_to_encrypt}" > "${script_folder}/${shell_script_to_encrypt}.txt"

### Creating the output script with the requested name and containing the content to decrypt it
echo "#!/bin/bash" > "${script_folder}/${encrypted_script}"
echo "# " >> "${script_folder}/${encrypted_script}"
echo "# File: ${encrypted_script}" >> "${script_folder}/${encrypted_script}"
echo "# Purpose: Script containing the encrypted version of ${shell_script_to_encrypt} (this file has been generated using `basename ${0}`)" >> "${script_folder}/${encrypted_script}"
echo "# Author: Morgan Patou (dbi services)" >> "${script_folder}/${encrypted_script}"
echo "# Version: 1.0 26/03/2016" >> "${script_folder}/${encrypted_script}"
echo "# " >> "${script_folder}/${encrypted_script}"
echo "###################################################" >> "${script_folder}/${encrypted_script}"
echo "" >> "${script_folder}/${encrypted_script}"
echo "#Storing the encrypted script in a variable" >> "${script_folder}/${encrypted_script}"
echo "encrypted_script=\"`cat "${script_folder}/${shell_script_to_encrypt}.txt"`\"" >> "${script_folder}/${encrypted_script}"
echo "" >> "${script_folder}/${encrypted_script}"
echo "#Decrypting the encrypted script and executing it" >> "${script_folder}/${encrypted_script}"
echo "echo \"\${encrypted_script}\" | openssl enc -d -${encryption} -a -A | sh -" >> "${script_folder}/${encrypted_script}"
echo "" >> "${script_folder}/${encrypted_script}"

### Removing the temp file and setting the output file to executable
rm "${script_folder}/${shell_script_to_encrypt}.txt"
chmod 700 "${script_folder}/${encrypted_script}"
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ ./encrypt_script.sh
encrypt_script.sh: usage: encrypt_script.sh <shell_script_to_encrypt> <encrypted_script>
        <shell_script_to_encrypt>  : Name of the shell script to encrypt. Must be placed under '/home/morgan'
        <encrypted_script>         : Name of the encrypted script to be created. The file will be created under '/home/morgan'

[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ ./encrypt_script.sh test_script.sh encrypted_test_script.sh
enter aes-256-cbc encryption password:
Verifying - enter aes-256-cbc encryption password:
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ # The real variable "encrypted_script" below is in 1 line only. For readability, I split that in several lines
[morgan@linux_server_01 ~]$ cat encrypted_test_script.sh
#!/bin/bash
#
# File: encrypted_test_script.sh
# Purpose: Script containing the encrypted version of test_script.sh (this file has been generated using encrypt_script.sh)
# Author: Morgan Patou
# Version: 1.0 26/03/2016
#
###################################################

#Storing the encrypted script in a variable
encrypted_script="U2FsdGVkX18QaIvqrQ27FQE8fNhJi2Izi9zRHwANEEt4WJkA3gQzOkrPOF+JYpIEFuvjweL2Eq02vr0MhkjMXIGXYlLipQ7U8TG912/9LdUOYlEx7YV4/1g9enBfZc2gBRHcGL6XW7oMih3wexGNrrq3J5Ys+mDgrmKDLJ75aU6v87iIPFi2ZfFx2NchAc4tHHDQ8gcZFLMByCkWwPZoicx8ODgUstNLRHKTMA7nj/v0fig1BLygQUQpEFjvNTScK6MT01aby8DvNuka0t0hjavTcP8gBEFVC5GQk3Ds/FVQBDqCdltxIhtnHGgbetloKHVwieSw+OsfKyKj9fuOKJ4RRCb7pNq42FHtiwUHhy2FkpxbkJxLgT3uMJopqJy3dU8tlf3nRqGQbm1eNZsf+uWLxgmd7Eq5rsywZjwjbsq1oIeCGzEq4k6WNCbMi3O1RIkKmJ6eR1q8pZcmLT6sEGJUlO3PfkD7ONcO4Ta48zCi7Rsi1PNJouGyNK8NrD34pbEKwu9MTsYTyNzKHCScDjt8QQne6NB+3ODQM26/6SAUM5gd9WmzZMByW6gFyKmkXhRxHsWDlNN5SJDbdd5w4r7+guqnLo/31hZSC2GZLSbQzrmz5FMKoriSuSxmZITQMV5yMp1IaYzJGxTECyl2V5g89aiOLqhehlM6c4uDfkPYZtZlmPX1JVfTTTy7dUeu08VUQqzvU2qdJV4g2rKJQtMw7py4B4a8E0+ShQgpp/Zi6yvKDxlzx9oZC+Gjtegg7TEsOx4kiefzSr+s3Vy/5puBza1vFBG51ZygyDb+p/ptCrmwUClY9qqR7bm+Wd9uRsG41XxReI5WXyZt1t/GZT0x5EkYQ5tn1DKQMc33G1f11yYTSZinwbbO49qL5xw0ZCSUB5AKTBye+b3rHTNKIhkd16P3+rkUN5fjMgUgEo0ojhh99PmwzszVJYdZQdliyHXbn1PJNMa4BLebmcH8PP6uzz8IDaMLrhHkFGTlkTQY+DoMPCb5FXztth3+FVry/Z2AdFDKogB7rXFfWeGWfQ4F+nZnvcqzasZTL9vWLGiFYCovra29ul5pHU5xLeTxi6FSC5naoT2yj0KY2jaRyPc4MKhb5T6DU/K/Wgj/0TNIS0TL/sbReprFtU0f/Kj6z/tzsIucBb0hN9QFIlOBzDfS0dz5xYoMlJ4Es22iMELiNhvF/zv6+j7IE0QdxhfcnJbYZAA9/ehL2osABkSCOBwUH8dkC1CSAvjgYB/WZSGAWpQhrARWTIJiwEYeMMh1+lRmR9qk4OrWzzJrgLvKOrYTjeAMmXZrRFt8vGQ5I7jiJN2VwET4zqm8pppY4eptK9Uaac2sEunGoxg0eBhuWY6dYgDeW6RMa3kK4wJ3DafJLlhmrhpxULEI8Owo8SzJjHpR+UrhrK3hPBw/Zy30El6MCIJ6pJNgeETpF4naK/EZqqKzrxQ8uSAwLDIucVVtOEdV+4lIcISPV1jza2O4eMu/1W39jSs6sA1ORb8H/taSkYvO80iygERCcYCxNBHZEW3mWRzGGWwojpQjmKaALCHYxprmXdKaL8aDoV+43V+90UO++gfamW8kWxzVeV7R/VoyhQQ1R+tem5eGZSsRpMEL7k1p7YIwyg3Yxt3bha22DEDf0UUzzOwakpnK09gzCnxH3RUSSNnutEkTSw9I22IZXJRkrHydARauj7S0Fd9MDRPgBRloiELVNM2uVNyCdFtMheg8q0wlF+GKLvWyzQ=="

#Decrypting the encrypted script and executing it
echo "${encrypted_script}" | openssl enc -d -aes-256-cbc -a -A | sh -

[morgan@linux_server_01 ~]$

 

As you can see above, when encrypting the shell script, you will have to enter an encryption password. This is NOT the password contained in the original shell script. This is a new password that you define and that you will need to remember because without it, you will NOT be able to execute it properly. Also, you can see that the file “encrypted_test_script.sh” contains the variable “encrypted_script”. This variable is the encrypted string representing the original shell script.
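
Stripped of the wrapper generation, the core of the approach is just two openssl calls, roughly like this (a sketch using the same options as encrypt_script.sh above):

# encrypt: prompts for the encryption password and base64-encodes the result
openssl enc -e -aes-256-cbc -a -A -in test_script.sh > test_script.txt

# decrypt: prompts for the same password and pipes the clear-text script into a shell
cat test_script.txt | openssl enc -d -aes-256-cbc -a -A | sh -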

/!\ Please note that if you replace “sh -” at the end of the file with “cat”, for example, then upon execution you will see the content of the original shell script. That supposes that you know the password to decrypt it, of course, so that’s still secure. However, it would be easy for someone with bad intentions to change the file encrypted_test_script.sh so that when you execute it and provide the right password, it in fact sends it via email or something like that. I will not describe it in detail, but it would be possible to protect against that by using signatures, for example, so that you are sure the content of the shell script is the one you generated and that it hasn’t been tampered with (a rough sketch follows).
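
As a rough idea of what such a signature check could look like (this is an assumption on my side, not something described further in this post; key management is left aside):

# one time: generate a key pair (keep private.pem somewhere safe)
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem

# sign the generated wrapper script
openssl dgst -sha256 -sign private.pem -out encrypted_test_script.sh.sig encrypted_test_script.sh

# before executing it, verify that the wrapper hasn't been modified
openssl dgst -sha256 -verify public.pem -signature encrypted_test_script.sh.sig encrypted_test_script.sh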

So like I said before, no perfect solutions… Or at least no easy solutions.

 

To execute the encrypted script, enter the encryption password and then the script is executed automatically:

[morgan@linux_server_01 ~]$ ./encrypted_test_script.sh
enter aes-256-cbc decryption password:

INFO - This file is a test script to test the encryption solutions.
INFO - Entering the correct password will return an exit code of 0.
WARN - Entering the wrong password will return an exit code of 1.


  ----> Please enter the password to execute this script: Password1
WARN - The password entered isn't the correct one. Please try again.

  ----> Please enter the password to execute this script: TestPassw0rd
OK - The password entered is the correct one.
[morgan@linux_server_01 ~]$

 

Complicated topic, isn’t it? I’m not a security expert but I like these kinds of subjects, so… If you have other ideas or thoughts, don’t hesitate to share!

 

 

The post Encryption of shell scripts appeared first on Blog dbi services.

Database links without specifying password (using Oracle Wallet)

Tom Kyte - Sat, 2017-08-05 05:46
Is it possible to create a database link without specifying the password (say somehow using a oracle wallet)? As of now we use passwords for everything - JDBC connection database connections (languages other than Java) Creating database links ...
Categories: DBA Blogs

listagg gives ORA-01427: single-row subquery returns more than one row

Tom Kyte - Sat, 2017-08-05 05:46
I need to concatenate row field into one field and I'm trying to use LISTAGG, but I need values to be distinct in the list. I was able to do almost everything with regexp_replace as alternative, but when I have too many orders for a customer I would...
Categories: DBA Blogs

How to run a update query without commit at the end , inside my pl/sql block multiple times without waiting for lock ?

Tom Kyte - Sat, 2017-08-05 05:46
I have an update query inside a pl/SQL block. The pl/SQL block is optimised to execute within 800 ms.I have tested the code and it executes fine. However, if my code is put to test on regression it is taking huge time to complete. My code is bein...
Categories: DBA Blogs

Reports - web.show_document userid password contains character .#

Tom Kyte - Sat, 2017-08-05 05:46
We use web.show_document to view reports and there are users who have a # character in their password. For these users comes a message rep-0501, for the other users who do not have that character in their password everything works fine. Here I show t...
Categories: DBA Blogs

Alternative for CLOB data type in oracle 12g

Tom Kyte - Sat, 2017-08-05 05:46
I wanted to know what is the best alternative data type for CLOB? My current database have a few CLOB data type. Does CLOB Data type is going to be deprecated in newer version? I tried to search around and seems like varchar2 will be the alte...
Categories: DBA Blogs

Is there a UTL_MAIL connection limit?

Tom Kyte - Sat, 2017-08-05 05:46
Hi, We recently encountered a connection limit on UTL_SMTP of 16 open connections. This is not because connections are being left open its just that we have reached a threshold of the number of applications utilising the UTL_SMTP package on our o...
Categories: DBA Blogs

LiveSQL: Accepting Input From User

Tom Kyte - Sat, 2017-08-05 05:46
I am not able to accept input from user on Live SQL Platform I have tried & and : both but i am not able to accept the input from user. Please suggest me the syntax for the same. Thanks in Advance
Categories: DBA Blogs

Documentum – Unable to install xCP 2.3 on a CS 7.3

Yann Neuhaus - Sat, 2017-08-05 02:53

At the beginning of this year, we were doing our first silent installations of the new Documentum stack. I have already written a few blogs about some issues with CS 7.3 and xPlore 1.6. This time, I will talk about xCP 2.3 and in particular its installation on a CS 7.3. The patch level of xCP as well as the patch of the CS 7.3 doesn’t matter since all versions are affected. Please just note that the first supported patch on a CS 7.3 is xCP 2.3 P03, so you shouldn’t be installing an earlier patch on 7.3.

So, when installing xCP 2.3 on a Content Server 7.3, you will get a pop-up in the installer with the following error message: “Installation of DARs failed”. You will only have an “OK” button on this pop-up, which will close the installer. OK, so there is an issue with the installation of the DARs, but what is the issue exactly?

 

In the installation log file, we can see the following:

[dmadmin@content_server_01 ProcessEngine]$ cat logs/install.log
13:44:45,356  INFO [Thread-8] com.documentum.install.pe.installanywhere.actions.PEInitializeSharedLibrary - Done InitializeSharedLibrary ...
13:44:45,395  INFO [Thread-10] com.documentum.install.appserver.jboss.JbossApplicationServer - setApplicationServer sharedDfcLibDir is:$DOCUMENTUM_SHARED/dfc
13:44:45,396  INFO [Thread-10] com.documentum.install.appserver.jboss.JbossApplicationServer - getFileFromResource for templates/appserver.properties
13:44:45,532  WARN [Thread-10] com.documentum.install.pe.installanywhere.actions.DiWAPeInitialize - init-param tags found in Method Server webapp:

<init-param>
      <param-name>docbase_install_owner_name</param-name>
      <param-value>dmadmin</param-value>
</init-param>
<init-param>
      <param-name>docbase-GR_DOCBASE</param-name>
      <param-value>GR_DOCBASE</param-value>
</init-param>
<init-param>
      <param-name>docbase-DocBase1</param-name>
      <param-value>DocBase1</param-value>
</init-param>
<init-param>
      <param-name>docbase-DocBase2</param-name>
      <param-value>DocBase2</param-value>
</init-param>
13:44:58,771  INFO [AWT-EventQueue-0] com.documentum.install.pe.ui.panels.DiWPPELicenseAgreementPanel - UserSelection: "I accept the terms of the license agreement."
13:46:13,398  INFO [AWT-EventQueue-0] com.documentum.install.appserver.jboss.JbossApplicationServer - The batch file: $DOCUMENTUM_SHARED/temp/installer/wildfly/dctm_tmpcmd0.sh exist? false
13:46:13,399  INFO [AWT-EventQueue-0] com.documentum.install.appserver.jboss.JbossApplicationServer - The user home is : /home/dmadmin
13:46:13,405  INFO [AWT-EventQueue-0] com.documentum.install.appserver.jboss.JbossApplicationServer - Executing temporary batch file: $DOCUMENTUM_SHARED/temp/installer/wildfly/dctm_tmpcmd0.sh for running: $DOCUMENTUM_SHARED/java64/1.8.0_77/bin/java -cp $DOCUMENTUM_SHARED/wildfly9.0.1/modules/system/layers/base/emc/documentum/security/main/dfc.jar:$DOCUMENTUM_SHARED/wildfly9.0.1/modules/system/layers/base/emc/documentum/security/main/aspectjrt.jar:$DOCUMENTUM_SHARED/wildfly9.0.1/modules/system/layers/base/emc/documentum/security/main/DctmUtils.jar com.documentum.install.appserver.utils.DctmAppServerAuthenticationString $DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer jboss
13:46:42,320  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeInstallActions - starting DctmActions
13:46:42,724  INFO [installer] com.documentum.install.appserver.jboss.JbossApplicationServer - user name = admin
13:46:42,724  INFO [installer] com.documentum.install.appserver.jboss.JbossApplicationServer - Server DctmServer_MethodServer already exists!
13:46:42,725  INFO [installer] com.documentum.install.appserver.jboss.JbossApplicationServer - Deploying to Group MethodServer... bpm (bpm.ear): does not exist!
13:46:42,725  INFO [installer] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer/deployments/bpm.ear/APP-INF/classes/dfc.properties
13:46:42,725  INFO [installer] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer/deployments/bpm.ear/APP-INF/classes/log4j.properties
13:46:42,725  INFO [installer] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer/deployments/bpm.ear/bpm.war/WEB-INF/web.xml
13:46:42,727  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeInstallActions - Finished DctmActions.
13:46:44,885  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars - Start to deploy dars for docbase: DocBase2
13:52:20,931  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars - End to deploy dars for repository: DocBase2
13:52:20,932  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars - Start to deploy dars for docbase: DocBase1
13:57:59,510  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars - End to deploy dars for repository: DocBase1
13:57:59,511  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars - Start to deploy dars for docbase: GR_DOCBASE
14:04:03,231  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars - End to deploy dars for repository: GR_DOCBASE
14:04:03,268 ERROR [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars - Installation of DARs failed
com.documentum.install.shared.common.error.DiException: 3 DAR(s) failed to install.
        at com.documentum.install.shared.common.services.dar.DiDocAppFailureList.report(DiDocAppFailureList.java:39)
        at com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars.deployDars(DiPAPeProcessDars.java:123)
        at com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars.setup(DiPAPeProcessDars.java:71)
        at com.documentum.install.shared.installanywhere.actions.InstallWizardAction.install(InstallWizardAction.java:75)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.GhostDirectory.install(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.Installer.install(Unknown Source)
        at com.zerog.ia.installer.actions.InstallProgressAction.ae(Unknown Source)
        at com.zerog.ia.installer.actions.ProgressPanelAction$1.run(Unknown Source)
14:04:03,269  INFO [installer]  - The INSTALLER_UI value is SWING
14:04:03,269  INFO [installer]  - The env PATH value is: /usr/xpg4/bin:$DOCUMENTUM_SHARED/java64/JAVA_LINK/bin:$DOCUMENTUM/product/7.3/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$DOCUMENTUM_SHARED/java64/JAVA_LINK/bin:$DOCUMENTUM/product/7.3/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$DOCUMENTUM/product/7.3/bin:$ORACLE_HOME/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/dmadmin/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
[dmadmin@content_server_01 ProcessEngine]$

 

Three DARs are reported as failed but, since there are three docbases here, that's actually one DAR per docbase. The only interesting piece of information we can get from the install log file is that some DARs were installed properly, so it's not a generic issue but more likely an issue with one specific DAR. The next step is therefore to check the log file of the DAR installation:

[dmadmin@content_server_01 ProcessEngine]$ grep -i ERROR logs/dar_logs/GR_DOCBASE/peDars.log | grep -v "^\[INFO\].*ERROR"
[INFO]  dmbasic.exe output : dmbasic: Error 35 in line 585: Sub or Function not defined
[ERROR]  Unable to install dar file $DOCUMENTUM/product/7.3/install/DARsInternal/BPM.dar
com.emc.ide.installer.InstallException: Error handling controllable object Status = New; IsInstalled = true; com.emc.ide.artifact.bpm.model.bpm.impl.ActivityImpl@5e020dd1 (objectTypeName: null) (objectName: DB Inbound - Initiate, title: , subject: , authors: [], keywords: [], applicationType: , isHidden: false, compoundArchitecture: , componentLabel: [], resolutionLabel: , contentType: xml, versionLabel: [1.0, CURRENT], specialApp: DB-IN-IN.GIF, languageCode: , creatorName: null, archive: false, category: , controllingApp: , effectiveDate: [], effectiveFlag: [], effectiveLabel: [], expirationDate: [], extendedProperties: [], fullText: true, isSigned: false, isTemplate: false, lastReviewDate: null, linkResolved: false, publishFormats: [], retentionDate: null, status: , rootObject: true) (isPrivate: false, definitionState: installed, triggerThreshold: 0, triggerEvent: , execType: manual, execSubType: inbound_initiate, execMethodName: null, preTimer: 0, preTimerCalendarFlag: notusebusinesscal, preTimerRepeatLast: 0, postTimer: 0, postTimerCalendarFlag: notusebusinesscal, postTimerRepeatLast: 0, repeatableInvoke: true, execSaveResults: false, execTimeOut: 0, execErrHandling: stopAfterFailure, signOffRequired: false, resolveType: normal, resolvePkgName: , controlFlag: taskAssignedtoSupervisor, taskName: null, taskSubject: , performerType: user, performerFlag: noDeligationOrExtention, transitionMaxOutputCnt: 0, transitionEvalCnt: trigAllSelOutputLinks, transitionFlag: trigAllSelOutputLinks, transitionType: prescribed, execRetryMax: 0, execRetryInterval: 0, groupFlag: 0, template: true, artifactVersion: D65SP1);  Object ID = 4c0f123450002b1e;
Caused by: DfException:: THREAD: main; MSG: Error while making activity uneditable: com.emc.ide.artifactmanager.model.artifact.impl.ArtifactImpl@4bbc02ef (urn: urnd:com.emc.ide.artifact.bpm.activity/DB+Inbound+-+Initiate?location=%2FTemp%2FIntegration&name=DB+Inbound+-+Initiate, locale: null, repoLocation: null, categoryId: com.emc.ide.artifact.bpm.activity, implicitlyCreated: false, modifiedByUser: true); ERRORCODE: ff; NEXT: null
Caused by: DfException:: THREAD: main; MSG: [DM_WORKFLOW_E_NAME_NOT_EXIST]error:  "The dm_user object by the name 'dm_bps_inbound_user' specified in attribute performer_name does not exist."; ERRORCODE: 100; NEXT: null
[ERROR]  Failed to install DAR
Caused by: com.emc.ide.installer.InstallException: Error handling controllable object Status = New; IsInstalled = true; com.emc.ide.artifact.bpm.model.bpm.impl.ActivityImpl@5e020dd1 (objectTypeName: null) (objectName: DB Inbound - Initiate, title: , subject: , authors: [], keywords: [], applicationType: , isHidden: false, compoundArchitecture: , componentLabel: [], resolutionLabel: , contentType: xml, versionLabel: [1.0, CURRENT], specialApp: DB-IN-IN.GIF, languageCode: , creatorName: null, archive: false, category: , controllingApp: , effectiveDate: [], effectiveFlag: [], effectiveLabel: [], expirationDate: [], extendedProperties: [], fullText: true, isSigned: false, isTemplate: false, lastReviewDate: null, linkResolved: false, publishFormats: [], retentionDate: null, status: , rootObject: true) (isPrivate: false, definitionState: installed, triggerThreshold: 0, triggerEvent: , execType: manual, execSubType: inbound_initiate, execMethodName: null, preTimer: 0, preTimerCalendarFlag: notusebusinesscal, preTimerRepeatLast: 0, postTimer: 0, postTimerCalendarFlag: notusebusinesscal, postTimerRepeatLast: 0, repeatableInvoke: true, execSaveResults: false, execTimeOut: 0, execErrHandling: stopAfterFailure, signOffRequired: false, resolveType: normal, resolvePkgName: , controlFlag: taskAssignedtoSupervisor, taskName: null, taskSubject: , performerType: user, performerFlag: noDeligationOrExtention, transitionMaxOutputCnt: 0, transitionEvalCnt: trigAllSelOutputLinks, transitionFlag: trigAllSelOutputLinks, transitionType: prescribed, execRetryMax: 0, execRetryInterval: 0, groupFlag: 0, template: true, artifactVersion: D65SP1);  Object ID = 4c0f123450002b1e;
Caused by: DfException:: THREAD: main; MSG: Error while making activity uneditable: com.emc.ide.artifactmanager.model.artifact.impl.ArtifactImpl@4bbc02ef (urn: urnd:com.emc.ide.artifact.bpm.activity/DB+Inbound+-+Initiate?location=%2FTemp%2FIntegration&name=DB+Inbound+-+Initiate, locale: null, repoLocation: null, categoryId: com.emc.ide.artifact.bpm.activity, implicitlyCreated: false, modifiedByUser: true); ERRORCODE: ff; NEXT: null
Caused by: DfException:: THREAD: main; MSG: [DM_WORKFLOW_E_NAME_NOT_EXIST]error:  "The dm_user object by the name 'dm_bps_inbound_user' specified in attribute performer_name does not exist."; ERRORCODE: 100; NEXT: null
[dmadmin@content_server_01 ProcessEngine]$

 

With the above, we know that the only failed DAR is BPM.dar and it looks like we have the reason: the DAR needs a user named "dm_bps_inbound_user" to proceed with the installation, couldn't find it, and therefore the installation failed. Actually, that's not the root cause, only a consequence. The real reason why the DAR installation failed is displayed in the first line above.

[INFO]  dmbasic.exe output : dmbasic: Error 35 in line 585: Sub or Function not defined

 

For some reason, a function couldn't be executed because it is not defined properly. This function is the one that is supposed to create the "dm_bps_inbound_user" user, but on a CS 7.3 it cannot be executed properly. As a result, the user isn't created and the DAR installation fails. For more information, you can refer to BPM-11223.

 

According to EMC, this issue will not be fixed in any patch of xCP 2.3, even though it was spotted quickly after the xCP 2.3 release. Therefore, if you want to avoid this issue, you will either have to wait several months for xCP 2.4 to be released (not really realistic ;)) or you will need to create this user manually before installing xCP 2.3 on a CS 7.3. You don't need special permissions for this user and you don't need to know its password, so it's rather simple to create it for all installed docbases with a few simple commands:

[dmadmin@content_server_01 ProcessEngine]$ echo "?,c,select r_object_id, user_name, user_login_name from dm_user where user_login_name like 'dm_bps%';" > create_user.api
[dmadmin@content_server_01 ProcessEngine]$ echo "create,c,dm_user" >> create_user.api
[dmadmin@content_server_01 ProcessEngine]$ echo "set,c,l,user_name" >> create_user.api
[dmadmin@content_server_01 ProcessEngine]$ echo "dm_bps_inbound_user" >> create_user.api
[dmadmin@content_server_01 ProcessEngine]$ echo "set,c,l,user_login_name" >> create_user.api
[dmadmin@content_server_01 ProcessEngine]$ echo "dm_bps_inbound_user" >> create_user.api
[dmadmin@content_server_01 ProcessEngine]$ echo "save,c,l" >> create_user.api
[dmadmin@content_server_01 ProcessEngine]$ echo "?,c,select r_object_id, user_name, user_login_name from dm_user where user_login_name like 'dm_bps%';" >> create_user.api
[dmadmin@content_server_01 ProcessEngine]$
[dmadmin@content_server_01 ProcessEngine]$ cat create_user.api
?,c,select r_object_id, user_name, user_login_name from dm_user where user_login_name like 'dm_bps%';
create,c,dm_user
set,c,l,user_name
dm_bps_inbound_user
set,c,l,user_login_name
dm_bps_inbound_user
save,c,l
?,c,select r_object_id, user_name, user_login_name from dm_user where user_login_name like 'dm_bps%';
[dmadmin@content_server_01 ProcessEngine]$
[dmadmin@content_server_01 ProcessEngine]$
[dmadmin@content_server_01 ProcessEngine]$ sep="***********************"
[dmadmin@content_server_01 ProcessEngine]$ for docbase in `cd $DOCUMENTUM/dba/config; ls`;do echo;echo "$sep";echo "Create User: ${docbase}";echo "$sep";iapi ${docbase} -Udmadmin -Pxxx -Rcreate_user.api;done

***********************
Create User: GR_DOCBASE
***********************


        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0000.0205


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f12345001c734 started for user dmadmin."


Connected to Documentum Server running Release 7.3.0000.0214  Linux64.Oracle
Session id is s0
API> r_object_id     user_name             user_login_name                                                                                                                                                             
-------------------  --------------------- ---------------------

(0 row affected)

API> ...
110f12345000093c
API> SET> ...
OK
API> SET> ...
OK
API> ...
OK
API> r_object_id     user_name             user_login_name                                                                                                                                                             
-------------------  --------------------- ---------------------
110f12345000093c     dm_bps_inbound_user   dm_bps_inbound_user
(1 row affected)

API> Bye

***********************
Create User: DocBase1
***********************


        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0000.0205


Connecting to Server using docbase DocBase1
[DM_SESSION_I_SESSION_START]info:  "Session 010f234560052632 started for user dmadmin."


Connected to Documentum Server running Release 7.3.0000.0214  Linux64.Oracle
Session id is s0
API> r_object_id     user_name             user_login_name                                                                                                                                                             
-------------------  --------------------- ---------------------

(0 row affected)

API> ...
110f234560001532
API> SET> ...
OK
API> SET> ...
OK
API> ...
OK
API> r_object_id     user_name             user_login_name                                                                                                                                                             
-------------------  --------------------- ---------------------
110f234560001532     dm_bps_inbound_user   dm_bps_inbound_user                                                                                                                                                            
(1 row affected)

API> Bye

***********************
Create User: DocBase2
***********************


        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0000.0205


Connecting to Server using docbase DocBase2
[DM_SESSION_I_SESSION_START]info:  "Session 010f345670052632 started for user dmadmin."


Connected to Documentum Server running Release 7.3.0000.0214  Linux64.Oracle
Session id is s0
API> r_object_id     user_name             user_login_name                                                                                                                                                             
-------------------  --------------------- ---------------------

(0 row affected)

API> ...
110f345670001532
API> SET> ...
OK
API> SET> ...
OK
API> ...
OK
API> r_object_id     user_name             user_login_name                                                                                                                                                             
-------------------  --------------------- ---------------------
110f345670001532     dm_bps_inbound_user   dm_bps_inbound_user                                                                                                                                                            
(1 row affected)

API> Bye
[dmadmin@content_server_01 ProcessEngine]$
[dmadmin@content_server_01 ProcessEngine]$ rm create_user.api
[dmadmin@content_server_01 ProcessEngine]$

 

The users have been created properly in all docbases, so just restart the xCP installer and this time the BPM.dar installation will succeed.
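If you want to double-check before restarting the installer, you can reuse the same iapi approach with a small read-only script (credentials shortened to 'xxx' as above) to confirm that the user is present in each docbase:

echo "?,c,select r_object_id, user_name, user_login_name from dm_user where user_name='dm_bps_inbound_user';" > check_user.api
for docbase in `cd $DOCUMENTUM/dba/config; ls`; do iapi ${docbase} -Udmadmin -Pxxx -Rcheck_user.api; done
rm check_user.api

Each docbase should return exactly one row; if one of them returns no row, re-run the user creation for that docbase before launching the installer again.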

 

 

Cet article Documentum – Unable to install xCP 2.3 on a CS 7.3 est apparu en premier sur Blog dbi services.

Documentum – Using DA with Self-Signed SSL Certificate

Yann Neuhaus - Sat, 2017-08-05 01:58

A few years ago, I was working on a Documentum project and one of the tasks was to set up all components in SSL. I have already published a lot of blogs on this subject, but there is one I wanted to write and never really took the time to publish. In this blog, I will therefore talk about Documentum Administrator in SSL using a Self-Signed SSL Certificate. Recently, a colleague of mine had the same issue at another customer, so I provided him with the full procedure that I will describe below. However, since the process below requires the signature of a jar file, and since this isn't available to all companies, you might want to check out my colleague's blog too.

A lot of companies work with their own SSL Trust Chain, meaning that they provide/create their own SSL Certificate (Self-Signed), including their Root and Intermediate SSL Certificates for the trust. End users will not really notice the difference, but they are actually using a Self-Signed SSL Certificate. This has some repercussions when working with Documentum, since you need to import the SSL Trust Chain on the various Application Servers (JMS, WebLogic, Dsearch, aso…). This is pretty simple, but there is one thing that is a little bit trickier and it is related to Documentum Administrator.

Below, I will use a DA 7.2 P16 (which is therefore pretty recent), but the same applies to all patches of DA 7.2 and 7.3. For information, we didn't face this issue with DA 7.1, so something most probably changed between DA 7.1 and 7.2. If you are seeing the same thing with a DA 7.1, feel free to put a comment below, I would love to know! When you access DA for the first time, you actually download a JRE which is put under C:\Users\<user_name>\Documentum\ucf\<machine_name> by default. This JRE is used for various things including the transfer of files (UCF), the display of DA preferences, aso… DA doesn't take the JRE from the Oracle website; it takes it, in fact, from the da.war file. The DA war file always contains two or three different JRE versions. Now if you want to use DA in HTTPS, these JREs will also need to contain your custom SSL Trust Chain. So how can you do that?

Well, a simple answer would be: just like for the JMS or WebLogic, simply import the custom SSL Trust Chain into the "cacerts" of these JREs. That will actually not work, for a very vicious reason: EMC is now signing all the files provided, and that also includes the JREs inside da.war (well, actually they are signing the checksums of the JREs, not the JREs themselves). Because of this signature, if you edit the cacerts file of the JREs, DA will say something like: "Invalid checksum for the file 'win-jre1.8.0_91.zip'". This checksum ensures that the JREs and all the files you are using on your local workstation that have been downloaded from da.war are the ones provided by EMC. This is good from a security point of view since it prevents intruders from exchanging the files during transfer or directly on your workstation, but it also prevents you from updating the JREs with your custom SSL Trust Chain.
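If you want to see this mechanism for yourself before touching anything, you can peek at the signed checksum files that ship inside ucfinit.jar. This is purely optional and read-only; the file names are the ones that appear later in this post and may differ with your DA patch level:

jar -xf da.war wdk/system/ucfinit.jar
unzip -l wdk/system/ucfinit.jar | grep -E 'checksum|META-INF'
unzip -p wdk/system/ucfinit.jar win-jre1.8.0_91.zip.checksum

The META-INF folder contains the signature (.SF and .RSA files) and each *.checksum file contains the checksum of the corresponding file; those checksum files are themselves covered by the signature in META-INF, which is exactly why the check fails as soon as you modify a JRE.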

 

So what I will do below to update the Java cacerts AND still keep a valid signature is:

  1. Extract the JREs and ucfinit.jar file from da.war
  2. Update the cacerts of each JREs with a custom SSL Trust Chain (Root + Intermediate)
  3. Repackage the JREs
  4. Calculate the checksum of the JREs using the ComputeChecksum java class
  5. Extract the old checksum files from ucfinit.jar
  6. Replace the old checksum files for the JREs with the new one generated on step 4
  7. Remove .RSA and .SF files from the META-INF folder and clean the MANIFEST to remove Documentum’s digital signature
  8. Recreate the file ucfinit.jar with the clean manifest and all other files
  9. Ask the company’s dedicated team to sign the new jar file
  10. Repackage da.war with the updated JREs and the updated/signed ucfinit.jar

 

Below, I will use generic commands that do not specify any JRE or DA version, because there will be two or three different JREs and the versions will change depending on your DA patch level, so it is better to stay generic. I will also use my custom SSL Trust Chain, which I put under /tmp.

In this first part, I will create a working folder to avoid messing with the deployed applications. Then I will extract the needed files and finally remove all the files and folders that I don't need. That's step 1:

[weblogic@weblogic_server_01 ~]$ mkdir /tmp/workspace; cd /tmp/workspace
[weblogic@weblogic_server_01 workspace]$
[weblogic@weblogic_server_01 workspace]$ cp $WLS_APPLICATIONS/da.war .
[weblogic@weblogic_server_01 workspace]$ ls
da.war
[weblogic@weblogic_server_01 workspace]$
[weblogic@weblogic_server_01 workspace]$ jar -xvf da.war wdk/system/ucfinit.jar wdk/contentXfer/
  created: wdk/contentXfer/
 inflated: wdk/contentXfer/All-MB.jar
 ...
 inflated: wdk/contentXfer/Web/Emc.Documentum.Ucf.Client.Impl.application
 inflated: wdk/contentXfer/win-jre1.7.0_71.zip
 inflated: wdk/contentXfer/win-jre1.7.0_72.zip
 inflated: wdk/contentXfer/win-jre1.8.0_91.zip
 inflated: wdk/system/ucfinit.jar
[weblogic@weblogic_server_01 workspace]$
[weblogic@weblogic_server_01 workspace]$ cd ./wdk/contentXfer/
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ ls
All-MB.jar                                    jacob.dll                 libUCFSolarisGNOME.so   ucf-client-installer.zip  win-jre1.8.0_91.zip
Application Files                             jacob.jar                 libUCFSolarisJNI.so     ucf.installer.config.xml
Emc.Documentum.Ucf.Client.Impl.application    libMacOSXForkerIO.jnilib  licenses                UCFWin32JNI.dll
ES1_MRE.msi                                   libUCFLinuxGNOME.so       MacOSXForker.jar        Web
ExJNIAPI.dll                                  libUCFLinuxJNI.so         mac_utilities.jar       win-jre1.7.0_71.zip
ExJNIAPIGateway.jar                           libUCFLinuxKDE.so         ucf-ca-office-auto.jar  win-jre1.7.0_72.zip
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ for i in `ls | grep -v 'win-jre'`; do rm -rf "./${i}"; done
[weblogic@weblogic_server_01 contentXfer]$ rm -rf ./*/
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ ls
win-jre1.7.0_71.zip  win-jre1.7.0_72.zip  win-jre1.8.0_91.zip
[weblogic@weblogic_server_01 contentXfer]$

 

At this point, only the JREs are present in the current folder (wdk/contentXfer) and there is one more file in another folder (wdk/system/ucfinit.jar). Once that is done, I create a list of the available JREs that I will reuse for the rest of this blog, and I also perform steps 2 and 3: extract the cacerts from the JREs, update them and finally repackage them (this is where the custom SSL Trust Chain is used):

[weblogic@weblogic_server_01 contentXfer]$ ls win-jre* | sed -e 's/.*win-//' -e 's/.zip//' > /tmp/list_jre.txt
[weblogic@weblogic_server_01 contentXfer]$ cat /tmp/list_jre.txt
jre1.7.0_71
jre1.7.0_72
jre1.8.0_91
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ while read line; do unzip -x win-${line}.zip ${line}/lib/security/cacerts; done < /tmp/list_jre.txt
Archive:  win-jre1.7.0_71.zip
  inflating: jre1.7.0_71/lib/security/cacerts
Archive:  win-jre1.7.0_72.zip
  inflating: jre1.7.0_72/lib/security/cacerts
Archive:  win-jre1.8.0_91.zip
  inflating: jre1.8.0_91/lib/security/cacerts
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ while read line; do keytool -import -noprompt -trustcacerts -alias custom_root_ca -keystore ${line}/lib/security/cacerts -file /tmp/Company_Root_CA.cer -storepass changeit; done < /tmp/list_jre.txt
Certificate was added to keystore
Certificate was added to keystore
Certificate was added to keystore
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ while read line; do keytool -import -noprompt -trustcacerts -alias custom_int_ca -keystore ${line}/lib/security/cacerts -file /tmp/Company_Intermediate_CA.cer -storepass changeit; done < /tmp/list_jre.txt
Certificate was added to keystore
Certificate was added to keystore
Certificate was added to keystore
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ while read line; do zip -u win-${line}.zip ${line}/lib/security/cacerts; done < /tmp/list_jre.txt
updating: jre1.7.0_71/lib/security/cacerts (deflated 35%)
updating: jre1.7.0_72/lib/security/cacerts (deflated 35%)
updating: jre1.8.0_91/lib/security/cacerts (deflated 33%)
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ while read line; do rm -rf ./${line}; done < /tmp/list_jre.txt
[weblogic@weblogic_server_01 contentXfer]$

 

At this point, the JREs have been updated with a new "cacerts" and therefore their checksums have changed. They no longer match the signed checksums, so if you try to deploy DA now, you will get the error message shown above. So, let's perform steps 4, 5 and 6. For that purpose, I will use the file /tmp/ComputeChecksum.class that was provided by EMC. This class is needed to recalculate the new checksums of the JREs:

[weblogic@weblogic_server_01 contentXfer]$ pwd
/tmp/workspace/wdk/contentXfer
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ cp /tmp/ComputeChecksum.class .
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ ls
ComputeChecksum.class  win-jre1.7.0_71.zip  win-jre1.7.0_72.zip  win-jre1.8.0_91.zip
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ java ComputeChecksum .
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ ls
ComputeChecksum.class           win-jre1.7.0_71.zip           win-jre1.7.0_72.zip           win-jre1.8.0_91.zip
ComputeChecksum.class.checksum  win-jre1.7.0_71.zip.checksum  win-jre1.7.0_72.zip.checksum  win-jre1.8.0_91.zip.checksum
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ rm ComputeChecksum.class*
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ cd /tmp/workspace/wdk/system/
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ pwd
/tmp/workspace/wdk/system
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ ls
ucfinit.jar
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ jar -xvf ucfinit.jar
 inflated: META-INF/MANIFEST.MF
 inflated: META-INF/COMPANY.SF
 inflated: META-INF/COMPANY.RSA
  created: META-INF/
 inflated: All-MB.jar.checksum
  created: com/
  created: com/documentum/
  ...
 inflated: UCFWin32JNI.dll.checksum
 inflated: win-jre1.7.0_71.zip.checksum
 inflated: win-jre1.7.0_72.zip.checksum
 inflated: win-jre1.8.0_91.zip.checksum
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ mv /tmp/workspace/wdk/contentXfer/win-jre*.checksum .
[weblogic@weblogic_server_01 system]$

 

With this last command, the new checksums have replaced the old ones. The next step is to remove the old signatures (the .RSA and .SF files plus the signature entries in the manifest) and then repack the ucfinit.jar file (steps 7 and 8):

[weblogic@weblogic_server_01 system]$ rm ucfinit.jar META-INF/*.SF META-INF/*.RSA
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ sed -i -e '/^Name:/d' -e '/^SHA/d' -e '/^ /d' -e '/^[[:space:]]*$/d' META-INF/MANIFEST.MF
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ cat META-INF/MANIFEST.MF
Manifest-Version: 1.0
Ant-Version: Apache Ant 1.8.4
Title: Documentum Client File Selector Applet
Bundle-Version: 7.2.0160.0058
Application-Name: Documentum
Built-By: dmadmin
Build-Version: 7.2.0160.0058
Permissions: all-permissions
Created-By: 1.6.0_30-b12 (Sun Microsystems Inc.)
Copyright: Documentum Inc. 2001, 2004
Caller-Allowable-Codebase: *
Build-Date: August 16 2016 06:35 AM
Codebase: *
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ vi META-INF/MANIFEST.MF
    => Add a new empty line at the end of this file with vi, vim, nano or whatever... The file must always end with an empty line.
    => Do NOT use the command "echo '' >> META-INF/MANIFEST.MF" because it will change the fileformat of the file, which complicates the signature (usually the fileformat is DOS...)
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ jar -cmvf META-INF/MANIFEST.MF ucfinit.jar *
added manifest
adding: All-MB.jar.checksum(in = 28) (out= 30)(deflated -7%)
adding: com/(in = 0) (out= 0)(stored 0%)
adding: com/documentum/(in = 0) (out= 0)(stored 0%)
adding: com/documentum/ucf/(in = 0) (out= 0)(stored 0%)
...
adding: UCFWin32JNI.dll.checksum(in = 28) (out= 30)(deflated -7%)
adding: win-jre1.7.0_71.zip.checksum(in = 28) (out= 30)(deflated -7%)
adding: win-jre1.7.0_72.zip.checksum(in = 28) (out= 30)(deflated -7%)
adding: win-jre1.8.0_91.zip.checksum(in = 28) (out= 30)(deflated -7%)
[weblogic@weblogic_server_01 system]$

 

At this point, the file ucfinit.jar has been recreated with an "empty" manifest, without a signature but with all the new checksum files. It is therefore time to send this file (ucfinit.jar) to your code signing team (step 9). This is out of scope for this blog, but basically your signing team will create the .RSA and .SF files inside the META-INF folder and repopulate the manifest. The .SF file and the manifest contain more or less the same thing: each file inside ucfinit.jar gets an entry consisting of its name and a digest. At this point, the checksums of the JREs have therefore been re-signed.
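For illustration only: if your company lets you sign the jar yourself, the signing step usually boils down to a jarsigner call similar to the one below. The keystore path, password and alias are placeholders, and the exact procedure (timestamping, certificate chain, aso…) depends on your code-signing policy:

# sign ucfinit.jar with the company code-signing certificate (all values are placeholders)
jarsigner -keystore /path/to/company_codesign.jks -storepass '********' ucfinit.jar company_codesign_alias

# verify that the jar is now properly signed
jarsigner -verify -verbose -certs ucfinit.jar

Whatever the signing method, the important point is that the manifest, .SF and .RSA files must be consistent with the new checksum files, otherwise DA will reject the package again.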

 

The last step is to repack da.war with the new ucfinit.jar file which has been signed. I put the new, signed file under /tmp:

[weblogic@weblogic_server_01 system]$ pwd
/tmp/workspace/wdk/system
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ rm -rf *
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ ll
total 0
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ cp /tmp/ucfinit.jar .
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ cd /tmp/workspace/
[weblogic@weblogic_server_01 workspace]$
[weblogic@weblogic_server_01 workspace]$ ls wdk/*
wdk/contentXfer:
win-jre1.7.0_71.zip  win-jre1.7.0_72.zip  win-jre1.8.0_91.zip

wdk/system:
ucfinit.jar
[weblogic@weblogic_server_01 workspace]$
[weblogic@weblogic_server_01 workspace]$ jar -uvf da.war wdk
adding: wdk/(in = 0) (out= 0)(stored 0%)
adding: wdk/contentXfer/(in = 0) (out= 0)(stored 0%)
adding: wdk/contentXfer/win-jre1.7.0_71.zip(in = 41373620) (out= 41205241)(deflated 0%)
adding: wdk/contentXfer/win-jre1.7.0_72.zip(in = 41318962) (out= 41137924)(deflated 0%)
adding: wdk/contentXfer/win-jre1.8.0_91.zip(in = 62424686) (out= 62229724)(deflated 0%)
adding: wdk/system/(in = 0) (out= 0)(stored 0%)
adding: wdk/system/ucfinit.jar(in = 317133) (out= 273564)(deflated 13%)
[weblogic@weblogic_server_01 workspace]$
[weblogic@weblogic_server_01 workspace]$ mv $WLS_APPLICATIONS/da.war $WLS_APPLICATIONS/da.war_bck_beforeSignature
[weblogic@weblogic_server_01 workspace]$
[weblogic@weblogic_server_01 workspace]$ mv da.war $WLS_APPLICATIONS/
[weblogic@weblogic_server_01 workspace]$

 

Once this has been done, simply redeploy Documentum Administrator and the next time you access it in HTTPS, you will be able to transfer files, view the DA preferences, aso… The JREs are now trusted automatically because their checksums are signed properly again.
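As a quick sanity check (optional, and assuming the same file names and workspace as above), you can verify that the repacked JREs really contain the custom trust chain before handing DA back to the users:

unzip -p wdk/contentXfer/win-jre1.8.0_91.zip jre1.8.0_91/lib/security/cacerts > /tmp/cacerts_check
keytool -list -keystore /tmp/cacerts_check -storepass changeit | grep -i custom_

The two aliases imported earlier (custom_root_ca and custom_int_ca) should show up in the output.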

 

 

Cet article Documentum – Using DA with Self-Signed SSL Certificate est apparu en premier sur Blog dbi services.

Developer GUI tools for PostgreSQL

Yann Neuhaus - Fri, 2017-08-04 13:33

There was a recent thread on the PostgreSQL general mailing list asking for GUI tools for PostgreSQL. This is a question we often get asked at customers, so I thought it might be a good idea to summarize some of them in a blog post. If you know of other promising tools beyond the ones listed here, let me know so I can add them. There is a list of tools in the PostgreSQL Wiki as well.

Name                             Linux  Windows  MacOS  Free
pgAdmin                          Y      Y        Y      Y
DBeaver                          Y      Y        Y      Y
EMS SQL Manager for PostgreSQL   N      Y        N      N
JetBrains DataGrip               Y      Y        Y      N
PostgreSQL Studio                Y      Y        Y      Y
Navicat for PostgreSQL           Y      Y        Y      N
Execute Query                    Y      Y        Y      Y
SQuirreL SQL Client              Y      Y        Y      Y
pgModeler                        Y      Y        Y      Y
DbSchema                         Y      Y        Y      N
Oracle SQL Developer             Y      Y        Y      Y
PostgreSQL Maestro               N      Y        N      N
SQL Workbench                    Y      Y        Y      Y
Nucleon Database Master          N      Y        N      N
RazorSQL                         Y      Y        Y      N
Database Workbench               N      Y        N      N

(The original post includes a screenshot of each tool.)

Cet article Developer GUI tools for PostgreSQL est apparu en premier sur Blog dbi services.

Six Plus One Types of Interviewers

Abhinav Agarwal - Fri, 2017-08-04 12:29

Remember Chuck Noland? The character in the movie Cast Away, who has to use the blade of an ice skate to extract his abscessed tooth, without anesthesia? The scene is painful to watch, yet you can't look away.

Interviews have this habit of turning up a Chuck Noland - in the interviewee or the interviewer. You willingly agree to subject yourself to the wanton abuse by random strangers who you may have to end up working for or with. Apart from the talented few whom companies are more eager to hire than they are to get hired, most are in less enviable positions.

What about interviewers? Not all are cut from the same cloth. But there are at least six types that I think we have all met in our lives, and a seventh one.
1. The Interview As an End In Itself - Hyper-excited newbie
You know this guy. You have been this person, most likely. You have a team now. You expect your team to grow. You have to build a team. You believe that you, and you alone, know what it takes to hire the absolutely best person for the opening you have.
You sit down and explain to the harried hiring HR person what the role is, what qualifications you are looking for, why the job is special, why just ordinary programming skills in ordinary programming languages will simply not cut it, why you as the hiring manager are special, and how you will, with the new hire, change the product, the company, and eventually the whole wide world. The HR executive therefore needs to spend every waking minute of her time in the pursuance of this nobler than noble objective. You badger your hiring rep incessantly, by phone, by IM, by email, in person, several times a day, asking for better resumes if you are getting many, and more if you aren't getting enough.
You read every single resume you get, several times over. You redline the points you don't like. You redline the points you like. You make notes on the resumes. You still talk to every single candidate. You continue interviewing, never selecting, till the economic climate changes and the vacancy is no longer available.
Yes, we all know this person.
2. Knows what he is looking for and knows when he finds it
This person is a somewhat rare commodity. This person does not suffer from buyer's remorse, knows that there is no such thing as a perfect candidate, and that the best he can hope to get is a person who comes off as reasonably intelligent, hard-working, ethical, and is going to be a team player.

This person will however also suffer from blind spots. Specifically, two kinds of blindspots. The first is that he will look for and evaluate a person only on those criteria that he can assess best. The second is that he is more likely to hire candidates that are similar to other successful employees in his team, and will probably become less likely to take chances on a different type of a candidate. On the other hand, this manager also knows that conceptual skills are more important to test than specific knowledge of some arcane syntax in a geeky programming language - if you are talking of the world of software for instance.
This person is a rare commodity.
3. Hire for Empire
Like our previous type of hiring manager, this hiring manager is also very clear-headed. But, here the interviewer is hiring to add headcount to his team. Grow the empire. More people equates to more perceived power. This person understands three things, and understands them perfectly.
First, that if he is slow in hiring, then a hiring freeze may come in, and the headcount may no longer stay open.
Second, he (or she) is also unable and equally unwilling to evaluate a candidate, so just about anyone will do.
Third, and most importantly, this manager knows that every additional person reporting to him on the organization chart elevates him in importance vis-a-vis his peers, and therefore hiring is a goal noble enough to be pursued in its own right.
It's a win-win situation for everyone - except the customers, the company, and the team.
4. I have other work to do. What am I doing here? What is he doing here?
This person has little skin in the game. He has no dog in the fight. Pick your metaphor. He is there to take the interview because of someone's absence, or because in the charade of the interview "process" that exists at many companies, there exists a need to do this interview. The interviewer agrees because it is a tax that needs to be paid. You don't want to be labeled a non-team-player. Who knows when this Scarlet Letter may come to haunt you. So our interviewer sets aside half an hour or more, preferably less, of his time, and comes back wondering where thirty minutes of his life just went. That question remains unanswered.
5. Know-it-all and desperate to show it
This person perceives himself as an overachiever. This is the sort of person who will tell you with casual nonchalance that he had predicted the rise of Google in 1999 - just so you can get to know that he had heard of Google in 1999. This person knows he knows everything that there is to know, that it is his beholden duty to make you know it too, and it is your beholden duty to acknowledge this crushing sacerdotal burden he carries. This is the person who will begin the interview with a smirk, sustain a wry smile, transform into a frown, and end with an exaggerated sense of self-importance.
Do not get fooled.
This person is as desperate, if not more, to interview you as you are to do well on the interview. He will in all likelihood end up talking more than the interviewee.
In every group in every department of every company there exists at least one such person. The successful companies have no more than one.
6. The rubber-stamp
The boss has decided the person who needs to be hired. The charade needs to be completed. The requisite number of people have to interview the candidate so that HR can dot the "I"s and cross the "T"s. Our interviewer here has to speak with this person. With an air of deference. He will ask all the right questions, but the answers do not matter. You sign off with a heartfelt, "Great talking to you. Thanks a ton for your time. Take care, and we really look forward to working with/for you." No, don't belittle this rubber-stamp. He could be you.

These are not mutually exclusive sets. There are overlaps that exist, sometimes in combinations that would warm Stephen King's heart.

Oh, what about the seventh type of interviewer? He is the Interviewer as Saboteur.  I will talk about him in a separate post.

This post appeared on LinkedIn on July 31st, 2017.
This is an edited version of a post I wrote on April 23rd, 2013.

© 2017, Abhinav Agarwal. All rights reserved.

Video: Kubernetes: Finding the Magic

OTN TechBlog - Fri, 2017-08-04 12:18

"It's one thing to say 'I want to use Kubernetes for my orchestration and for my application,'" says TJ Fontaine. "But  you can't just sprinkle some Kubernetes dust on it and get magic out of it. You actually have to do a little bit of work and understand how it all fits together." Fortunately, getting started with that work can be as easy as watching a video. In this case, the video in question is this short interview with TJ, recorded at the Oracle Code event in Atlanta on June 22, 2017.

TJ, a software engineer, leads Oracle's open source efforts for Kubernetes. In this interview he recaps his Oracle Code presentation, "Introduction to Kubernetes," discusses patterns and anti-patterns for using Kubernetes in your development projects, and explains why the abstract for his session contains only 15 words.

Additional Resources
  1. Video: Discover Graal: Open Source Polyglot Runtime Environment
  2. Video: When Apache Spark Meets Hazelcast
  3. Video: Cassandra, Open Source, and Bare Metal Cloud
  4. Video: Basic Help for Docker Noobs
  5. Video: Meet Anzen A Startup Powered by Open Source, Oracle Cloud, Math, and Modern Art
  6. Blog Post: Three New Open Source Container Utilities

SELECT CASE INTO

Tom Kyte - Fri, 2017-08-04 11:26
How can I use a SELECT CASE INTO to store a value in a local variable?
Categories: DBA Blogs
