Feed aggregator

Oracle Named a Leader in the 2017 Gartner Magic Quadrant for Web Content Management

Oracle Press Releases - Mon, 2017-08-07 07:00
Press Release
Oracle Named a Leader in the 2017 Gartner Magic Quadrant for Web Content Management
Oracle positioned as a leader based on completeness of vision and ability to execute

Redwood Shores, Calif.—Aug 7, 2017

Oracle today announced that it has been named a leader in Gartner’s 2017 “Magic Quadrant for Web Content Management*” report. Oracle believes this placement is another proof point of momentum for Oracle’s hybrid cloud strategy with Oracle WebCenter Sites and growth for Oracle Content and Experience Cloud, part of the Oracle Cloud Platform.

“We believe this placement is further validation of Oracle’s continued momentum in the content as a service space and larger PaaS and SaaS market,” said Amit Zavery, senior vice president, product development, Oracle Cloud Platform. “Without proper tools, organizations cannot manage all types of content in a meaningful way. Not only does our solution put content in the hands of its owners, but it also offers the versatility and comprehensiveness to support a broad range of initiatives.”

According to Gartner, “Leaders should drive market transformation. Leaders have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clear vision and a thorough appreciation of the broader context of digital business. They have strong channel partners, a presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technologies or vertical markets. Leaders are aware of the ecosystem in which their offerings need to fit.”

Oracle’s capabilities extend beyond the typical role of content management. Oracle provides low-code development tools for building digital experiences that exploit a service catalog of data connections. Oracle Content and Experience Cloud enables organizations to manage and deliver content to any digital channel to drive effective engagement with customers, partners, and employees. With Oracle Content and Experience Cloud, organizations can enable content collaboration and deliver a consistent omni-channel experience from one central content hub.

Download Gartner’s 2017 “Magic Quadrant for Web Content Management” here.

Oracle WebCenter Sites and Oracle Content and Experience Cloud enable organizations to build rich digital experiences with centralized content management, providing a unified repository to house unstructured content, enabling organizations to deliver content in the proper format to customers, employees and partners, within the context of familiar applications that fit the way they work.

* Gartner, “Magic Quadrant for Web Content Management,” Mick MacComascaigh, Jim Murphy, July 2017

Contact Info
Kristin Reeves
Blanc & Otus
+1.415.856.5145
Sarah Fraser
Oracle
+1.650.743.0660
sarah.fraser@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


Create user stories and supporting ERD

Dimitri Gielis - Mon, 2017-08-07 02:19
This post is part of a series of posts: From idea to app or how I do an Oracle APEX project anno 2017

In the first post we defined the high-level idea for our application. Now what are the real requirements? What does the app have to do? In an Agile software development approach we typically create user stories to define that. We write sentences in the form of:
As a < type of user >, I want < some goal > so that < some reason >

Goal of defining user stories

The only relevant reason to write user stories is to have a discussion with the people you're building the application for. For developers, those stories give an overview of what is expected, before development starts, in a language that all parties understand.

Some people might like to write a big requirements document, but personally I just don't like that (neither to read nor to write). I really want to speak to the people I'm going to build something for, to put myself in their shoes and really understand and feel their pain. If somebody gives me a very detailed requirements document, they don't give me much freedom. Most people don't even know what is technically possible or what could really help them.


I like this quote from Henry Ford, which illustrates the above:


If I had asked people what they wanted, they would have said faster horses.

Now having said that, you have to be really careful with my statement above... it really depends on the developer whether you can give them freedom or not. I know many developers who are the complete opposite of me and just want you to tell them exactly what and how to build. The same applies to people who have a bright idea: they do know what would help them. I guess it comes down to using each other's strengths and being open and supportive during the communication.
User stories for our project

In our multiplication table project we will write user stories for three different types of users: the player (child), the supervisor (parent/teacher) and the administrator of the app.
  • As a player, I want to start a session so that I can practice
  • As a player, I want to practice multiplications so that I get better at multiplying
  • As a player, I want to see how I did so that I know if I improved, stayed the same or did worse
  • As a player, I want to compare myself to other people so that I get a feeling of my level
  • As a supervisor, I want to register players so that they can practice
  • As a supervisor, I want to start the session so that player can practice
  • As a supervisor, I want to choose the difficulty level so that the player gets only exercises he's supposed to know
  • As a supervisor, I want to get an overview of the players' progress and achievements
  • As a supervisor, I want to get an overview of the players' mistakes
  • As a supervisor, I want to print a certificate so the player feels proud of their achievement
  • As an administrator, I want to see the people who registered for the app so that I have an overview how much the app is used
  • As an administrator, I want to add, update and remove users so that the application can be maintained
  • As an administrator, I want to see statistics of the site so that I know if it's being used
The above is not meant to be a static list; on the contrary, whenever we think of something else, we will come back to the list and add more sentences. So far I took the role of administrator and parent, my son that of child and my father that of teacher to come up with this list. I welcome more people's ideas, so feel free to add user stories of things you think of in the comment field. You can find more info on user stories and how to write them here.
More on Agile, Scrum, Kanban, XP

Before we move on with what I typically do after having discussed the requirements with the people, I want to touch on some buzzwords. I guess most companies claim they do Agile software development. The most popular Agile software development frameworks are Scrum, Kanban and XP. I'm far from an expert in any of those, but for me it all comes down to making the team more efficient at delivering what is really needed.

My company and I are not following any of those frameworks to the letter; instead we use a mix of all of them. We have a place where we note all the things we have to do (backlog), we develop iteratively and ship versions frequently (sprints), we have coding standards, we limit the work in progress (WIP), etc.

When we are doing consulting or development we adapt to how the customer likes to work. It also depends a bit on the size of the project and the team that is in place.

So my advice is: do whatever works best for you and your team. The only important thing at the end of the day is that you deliver (on time, on budget and what is needed) :)
Thinking in relational models

So when I really understand the problem, my mind starts to think in an entity relationship diagram, or ERD for short. I don't use any tool just yet; a few pieces of paper are all I need. I start writing down words and drawing circles and relations; in fact those will become my tables, columns and foreign keys later on. For me personally, drawing an ERD really helps me move to the next step of seeing what data I will have and how I should structure it. I read the user stories one by one and check whether I have a table for the data to build the story. I write down the ideas, comments and questions that pop up and put them on a cleaner piece of paper. This paper is again food for discussion with the end users.

Here are the papers for the multiplication table project:


Our ERD is not that complicated, I would say; we basically need a table to store the users who will connect to the site/app. I believe that initially it will most likely be parents or teachers who are interested in this app. Every user has the "user" role, but some will have the administrator role, so the app can be managed. We could also use a flag in the user table to specify who's an admin, but I like to have a separate table for roles as it's more flexible, for example if we wanted to make a difference between a teacher and a parent in the future. Once you are in the app you create some players, most likely your children. Those players will play games and every game consists of some details, for example which multiplication they did.

While reading the user stories, we also want some rankings. In the above ERD I could create the player's own ranking, or the ranking of the players of a user (supervisor), but it's not that flexible. That is why I added the concept of teams. A player can belong to one or more teams, so I could create a specific team that my son and I belong to, so we can see each other's rank in that team, but I can also create a team of friends. The team concept makes it even more flexible for teachers, as they can create their classes and add players to a specific class.
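
To make this a bit more concrete, here is a minimal SQL sketch of what the tables behind this ERD could look like. The table and column names are my own illustration, not the final data model of the app (roles are kept in a separate table as discussed above, but simplified to one role per user):

create table roles (
  id    number primary key,
  name  varchar2(50) not null           -- e.g. 'user', 'administrator'
);

create table app_users (
  id             number primary key,
  username       varchar2(100) not null,
  password_hash  varchar2(200) not null,
  role_id        number not null references roles
);

create table players (
  id       number primary key,
  user_id  number not null references app_users,
  name     varchar2(100) not null
);

create table teams (
  id    number primary key,
  name  varchar2(100) not null          -- e.g. a family, a group of friends, a class
);

create table team_players (
  team_id    number not null references teams,
  player_id  number not null references players,
  primary key (team_id, player_id)
);

create table games (
  id         number primary key,
  player_id  number not null references players,
  played_on  date default sysdate not null
);

create table game_details (
  id            number primary key,
  game_id       number not null references games,
  multiplicand  number not null,        -- e.g. 7 in 7 x 8
  multiplier    number not null,        -- e.g. 8 in 7 x 8
  given_answer  number,
  time_taken    number                  -- time needed to answer
);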

I also added a note that instead of a custom username/password, it might be interesting to add a social login like Facebook, just so the app is even easier to access. As I know social authentication will be included in Oracle APEX 5.2, I will hold off on building it myself for now, but plan to upgrade our authentication scheme once Oracle APEX 5.2 comes out.

So my revised version of the ERD looks like this:


I hope this already gives some insight into the first steps I take when starting a project.

In the above post I didn't really go into the tools that support Agile software development (as I haven't used them yet); that is for another post.

If you have questions, comments or want to share your thoughts, don't hesitate to add a comment to this post.
Categories: Development

Updated Whats New whitepaper - 4.3.0.4.0

Anthony Shorten - Sun, 2017-08-06 17:40

The Whats New in FW4 whitepaper has been updated for the latest service pack release. This whitepaper is designed to summarize the major technology and functional changes implemented in the Oracle Utilities Application Framework from V2.2 through the latest service pack. It is primarily of interest to customers upgrading from those earlier versions who want to understand what has changed and what is new in the framework since that early release.

The whitepaper is only a summary of selected enhancements, and it is still recommended to review the release notes of each release if you are interested in the details of everything that has changed. This whitepaper does not cover the changes to any of the products that use the Oracle Utilities Application Framework; refer to the release notes of the individual products for details of new functionality.

The whitepaper is available as Whats New in FW4 (Doc Id: 1177265.1) from My Oracle Support.

Inserting data into a table

Tom Kyte - Sun, 2017-08-06 00:06
Hello, I am trying to insert data into a table, The only thing is it is of 20 years. I have already created a query. The query is in a good shape but the only thing missing in my query is the dates. Below is my query. I want LV_START_DATE as 201...
Categories: DBA Blogs

orcl

Tom Kyte - Sun, 2017-08-06 00:06
I have table employee...which has two columns like....Name and Id... create table employee(name varchar2(10),id number); Insert into employee values('A',1); Insert into employee values('B',2); Insert into employee values('C',3); Name...
Categories: DBA Blogs

Postgres vs. Oracle access paths IV – Order By and Index

Yann Neuhaus - Sat, 2017-08-05 15:00

I realize that I’m talking about indexes in Oracle and Postgres and haven’t yet mentioned the best website you can find about indexes, with concepts and examples for all RDBMS: http://use-the-index-luke.com. You will probably learn a lot about SQL design there. Now let’s continue on execution plans with indexes.

As we have seen two posts ago, an index can be used even with a 100% selectivity (all rows), when we don’t filter any rows. Oracle has INDEX FAST FULL SCAN which is the fastest, reading blocks sequentially as they come. But this doesn’t follow the B*Tree leaves chain and does not return the rows in the order of the index. However, there is also the possibility to read the leaf blocks in the index order, with INDEX FULL SCAN and random reads instead of multiblock reads.
It is similar to the Index Only Scan of Postgres except that there is no need to get to the table to filter out uncommitted changes. Oracle reads the transaction table to get the visibility information, and goes to undo records if needed.

The previous post had a query with a ‘where n is not null’ predicate, to be sure that all rows have entries in the Oracle index, and we will continue with this query by adding an order by.

For this post, I’ve increased the size of the column N in the Oracle table by adding 1/3 to each number. I did this for this post only, and for the Oracle table only. The index on N is now 45 blocks instead of 20. The reason is to show what happens when the cost of the ‘order by’ is high. I didn’t change the Postgres table because there is only one way to scan the index there, and the result is always sorted.
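
For reference, the change is essentially the following (a sketch of my own, not the exact statements used for the demo setup); adding a fractional part makes each value longer in Oracle’s internal NUMBER representation, so the index entries, and therefore the index itself, get bigger:

update demo1 set n = n + 1/3;
commit;
-- the index DEMO1_N now takes 45 blocks instead of 20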

Oracle Index Fast Full Scan vs. Index Full Scan


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID dbck3rgnqbakg, child number 0
-------------------------------------
select /*+ */ n from demo1 where n is not null order by n
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 46 (100)| 10000 |00:00:00.01 | 48 |
| 1 | INDEX FULL SCAN | DEMO1_N | 1 | 10000 | 46 (0)| 10000 |00:00:00.01 | 48 |
---------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "N"[NUMBER,22]

Index Full Scan, the random-read version of the index read, is chosen here by the Oracle optimizer because we want the result ordered by the column N and the index can provide this without additional sorting.

We can force the optimizer to do multiblock reads with the INDEX_FFS hint:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID anqfbf5caat2a, child number 0
-------------------------------------
select /*+ index_ffs(demo1) */ n from demo1 where n is not null order
by n
-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 82 (100)| 10000 |00:00:00.01 | 51 | | | |
| 1 | SORT ORDER BY | | 1 | 10000 | 82 (2)| 10000 |00:00:00.01 | 51 | 478K| 448K| 424K (0)|
| 2 | INDEX FAST FULL SCAN| DEMO1_N | 1 | 10000 | 14 (0)| 10000 |00:00:00.01 | 51 | | | |
-----------------------------------------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=1) "N"[NUMBER,22] 2 - "N"[NUMBER,22]

The estimated cost is higher: the index read is cheaper (cost=14 instead of 46) but then the sort operation brings this to 82. We can see additional columns in the execution plan here because the sorting operation needs a workarea in memory (estimated 478K, actually 424K used during the execution). Note that the multiblock read has a few blocks of overhead (reads 51 blocks instead of 48) because it has to read the segment header to identify the extents to scan.

Postgres Index Only Scan

In PostgreSQL there’s only one way to scan indexes: random reads by following the chain of leaf blocks. This returns the rows in the order of the index and does not require an additional sort:


explain (analyze,verbose,costs,buffers) select n from demo1 where n is not null order by n ;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------
Index Only Scan using demo1_n on public.demo1 (cost=0.29..295.29 rows=10000 width=4) (actual time=0.125..1.277 rows=10000 loops=1)
Output: n
Index Cond: (demo1.n IS NOT NULL)
Heap Fetches: 0
Buffers: shared hit=30
Planning time: 0.532 ms
Execution time: 1.852 ms

In the previous posts, we have seen a cost of 0.29..270.29 for the Index Only Scan. Here we have an additional cost of 25 for the cpu_operator_cost because I’ve added the ‘where n is not null’. As the default constant is 0.0025, this is the query planner estimating the evaluation of the predicate for 10000 rows.
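
To check the arithmetic: 10000 rows x 0.0025 = 25, and 270.29 + 25 = 295.29, which is exactly the total cost displayed in the plan above.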

First Rows

The Postgres cost always shows two values. The first one is the startup cost: the cost just before being able to return the first row. Some operations have a very small startup cost, others have some blocking operations that must finish before sending their first result rows. Here, as we have no sort operation, the first row retrieved from the index can be returned immediately and the startup cost is small: 0.29
In Oracle you can see the initial cost by optimizing the plan to retrieve the first row, with the FIRST_ROWS() hint:


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0fjk9vv4g1q1w, child number 0
-------------------------------------
select /*+ first_rows(1) */ n from demo1 where n is not null order by
n
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 (100)| 10000 |00:00:00.01 | 48 |
| 1 | INDEX FULL SCAN | DEMO1_N | 1 | 10000 | 2 (0)| 10000 |00:00:00.01 | 48 |
---------------------------------------------------------------------------------------------------
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "N"[NUMBER,22]

The actual number of blocks read (48) is the same as before because I finally fetched all rows, but the cost is small because it was estimated for two rows only. Of course, we can also tell Postgres or Oracle that we want only the first rows. This is for the next post.

Character strings

The previous example is an easy one because the column N is a number, and both Oracle and Postgres store numbers in a binary format that follows the same order as the numbers. But that’s different with character strings. If you are not in America, there is very little chance that the order you want to see follows the ASCII order. Here I’ve run a similar query but using the column X instead of N, which is a text (VARCHAR2 in Oracle):

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID fsqk4fg1t47v5, child number 0
-------------------------------------
select /*+ */ x from demo1 where x is not null order by x
--------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2493 (100)| 10000 |00:00:00.27 | 1644 | 18 | | | |
| 1 | SORT ORDER BY | | 1 | 10000 | 2493 (1)| 10000 |00:00:00.27 | 1644 | 18 | 32M| 2058K| 29M (0)|
|* 2 | INDEX FAST FULL SCAN| DEMO1_X | 1 | 10000 | 389 (0)| 10000 |00:00:00.01 | 1644 | 18 | | | |
--------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("X" IS NOT NULL)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=1) NLSSORT("X",'nls_sort=''FRENCH''')[2000], "X"[VARCHAR2,1000] 2 - "X"[VARCHAR2,1000]

I have created an index on X, and as you can see it can be used to get all X values, but with an Index Fast Full Scan, the multiblock index-only access which is fast but does not return rows in the order of the index. And then a sort operation is applied. I can force an Index Full Scan with the INDEX() hint, but the sort will still have to be done.
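
For example, a hinted query like the following (my own illustration, using the index name from the plan above) would switch to an INDEX FULL SCAN, but the NLSSORT-based SORT ORDER BY would remain in the plan:

select /*+ index(demo1 demo1_x) */ x from demo1 where x is not null order by x;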

The reason can be seen in the Column Projection Information. My Oracle client application is running on a laptop where the OS is in French, and Oracle returns the setting according to what the end user expects. This is National Language Support. An Oracle database can be accessed by users all around the world, and they will see ordered lists, date formats, decimal separators, … according to their country and language.

ORDER BY … COLLATE …

My databases have been created on a system which is in English. In Postgres we can get results sorted in French with the COLLATE option of ORDER BY:


explain (analyze,verbose,costs,buffers) select x from demo1 where x is not null order by x collate "fr_FR" ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------
Sort (cost=5594.17..5619.17 rows=10000 width=1036) (actual time=36.163..37.254 rows=10000 loops=1)
Output: x, ((x)::text)
Sort Key: demo1.x COLLATE "fr_FR"
Sort Method: quicksort Memory: 1166kB
Buffers: shared hit=59
-> Index Only Scan using demo1_x on public.demo1 (cost=0.29..383.29 rows=10000 width=1036) (actual time=0.156..1.559 rows=10000 loops=1)
Output: x, x
Index Cond: (demo1.x IS NOT NULL)
Heap Fetches: 0
Buffers: shared hit=52
Planning time: 0.792 ms
Execution time: 38.264 ms

Same idea here as in Oracle: there is an additional sort operation, which is a blocking operation that needs to be completed before being able to return the first row.

The detail of the cost is the following:

  • The index on the column X has 52 blocks, which is estimated at cost=208 (random_page_cost=4)
  • We have 10000 index entries to process, estimated at cost=50 (cpu_index_tuple_cost=0.005)
  • We have 10000 result rows to process, estimated at cost=100 (cpu_tuple_cost=0.01)
  • We have evaluated 10000 ‘is not null’ conditions, estimated at cost=25 (cpu_operator_cost=0.0025)
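
Adding these up: 208 + 50 + 100 + 25 = 383, which together with the startup cost of 0.29 gives the cost=0.29..383.29 displayed for the Index Only Scan above.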

In Oracle we can use the same COLLATE syntax, but the name of the language is different, consistent across platforms rather than using the OS one:


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 82az4syppyndf, child number 0
-------------------------------------
select /*+ */ x from demo1 where x is not null order by x collate "French"
-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2493 (100)| 10000 |00:00:00.28 | 1644 | | | |
| 1 | SORT ORDER BY | | 1 | 10000 | 2493 (1)| 10000 |00:00:00.28 | 1644 | 32M| 2058K| 29M (0)|
|* 2 | INDEX FAST FULL SCAN| DEMO1_X | 1 | 10000 | 389 (0)| 10000 |00:00:00.01 | 1644 | | | |
-----------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("X" IS NOT NULL)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=1) NLSSORT("X" COLLATE "French",'nls_sort=''FRENCH''')[2000], "X"[VARCHAR2,1000] 2 - "X"[VARCHAR2,1000]

In Oracle, we do not need to use the COLLATE option. The language can be set for the session (NLS_LANGUAGE=’French’) or from the environment (NLS_LANG=’=French_.’). Oracle can share cursors across sessions (to avoid wasting resources compiling and optimizing the same statements used by different sessions) but will not share execution plans among different NLS environments because, as we have seen, the plan can be different. Postgres does not have to manage that because each PREPARE statement does a full compilation and optimization. There is no cursor sharing in Postgres.
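
As a quick illustration of the session-level alternative (standard ALTER SESSION syntax, not something specific to this demo), the following should make the plain ORDER BY use the French linguistic sort without any COLLATE clause, since NLS_SORT defaults from NLS_LANGUAGE:

alter session set NLS_LANGUAGE='FRENCH';
select /*+ */ x from demo1 where x is not null order by x;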

Indexing for different languages

We have seen in the Oracle execution plan Column Projection Information that an NLSSORT operation is applied on the column to get a value that follows the collation order of the language. We have seen in the previous post that we can index a function of a column. So we have the possibility to create an index for different languages. The following index will be used to avoid the sort for French users:

create index demo1_x_fr on demo1(nlssort(x,'NLS_SORT=French'));

Since 12cR2 we can create the same with the COLLATE syntax:

create index demo1_x_fr on demo1(x collate "French");

Both syntaxes create the same index, which can be used by queries with ORDER BY … COLLATE or by sessions that set NLS_LANGUAGE:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 82az4syppyndf, child number 0
-------------------------------------
select /*+ */ x from demo1 where x is not null order by x collate "French"
-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 4770 (100)| 10000 |00:00:00.02 | 4772 |
|* 1 | TABLE ACCESS BY INDEX ROWID| DEMO1 | 1 | 10000 | 4770 (1)| 10000 |00:00:00.02 | 4772 |
| 2 | INDEX FULL SCAN | DEMO1_X_FR | 1 | 10000 | 3341 (1)| 10000 |00:00:00.01 | 3341 |
-----------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("X" IS NOT NULL)
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - "X"[VARCHAR2,1000] 2 - "DEMO1".ROWID[ROWID,10], "DEMO1"."SYS_NC00004$"[RAW,2000]

There’s no sort operation here as the INDEX FULL SCAN returns the rows in order.

PostgreSQL has the same syntax:

create index demo1_x_fr on demo1(x collate "fr_FR");

and then the query can use this index and bypass the sort operation:

explain (analyze,verbose,costs,buffers) select x from demo1 where x is not null order by x collate "fr_FR" ;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------
Index Only Scan using demo1_x_fr on public.demo1 (cost=0.29..383.29 rows=10000 width=1036) (actual time=0.190..1.654 rows=10000 loops=1)
Output: x, x
Index Cond: (demo1.x IS NOT NULL)
Heap Fetches: 0
Buffers: shared hit=32 read=20
Planning time: 1.049 ms
Execution time: 2.304 ms

Avoiding a sort operation can really improve the performance of queries in two ways: it saves the resources required by the sort (which will have to spill to disk when the workarea does not fit in memory) and it avoids a blocking operation, making it possible to return the first rows quickly.

We have seen how indexes can be used to access a subset of columns from a smaller structure, and how they can be used to access a sorted version of the rows. Future posts will show how the index access is used to quickly filter a subset of rows. But for the moment I’ll continue on this blocking operation. We have seen a lot of Postgres costs, and they have two values (startup cost and total cost). More on startup cost in the next post.

 

Cet article Postgres vs. Oracle access paths IV – Order By and Index est apparu en premier sur Blog dbi services.

From idea to app or how I do an Oracle APEX project anno 2017

Dimitri Gielis - Sat, 2017-08-05 11:30
For a long time I had it in mind to write in great detail how I do an Oracle APEX project from A to Z. But so far I never took the time to actually do it, until today :)

So here's the idea: I love building projects that help people and I love to share what I know, so I will combine both. I will write down exactly my thoughts and the things I do as I'm moving along with this project, so you have full insight into what's happening behind the scenes.
Background

Way back, in the year 1999, I built an application in Visual Basic to help children study the multiplication tables. My father was a math teacher and taught people who wanted to become primary school teachers. While visiting the primary schools, he saw that children had difficulties automating the multiplications from 1 to 10, so together we thought about how we could help them. That is how the Visual Basic application was born. I don't have a working example of the program anymore, but I found some paper prints from that time, which you see here:



We are now almost 20 years later and last year my son had difficulties memorizing the multiplication tables too. I tried sitting next to him and helping him out, but when things don't go as smoothly as you hope... You have to stay calm and supportive, but I found it hard, especially when there are two other children crying for attention too or you've had a rough day yourself... In a way I felt frustrated because I didn't know how to help further in the time I had. At some point I thought about the program I wrote way back then and decided to quickly build a web app that would allow him to train himself. And to make it more fun for him, I told him I would exercise too, so he saw it was doable :)

At KScope16 I showed this web app during Open Mic Night; it was far from fancy, but it did the job.
Here's a quick demo:



Some people recognized my story and asked if I could put the app online. I just built the app quickly for my son, so it needs some more work to make it accessible for others.
During my holidays, I decided I should really treat this project as a real one, otherwise it would never happen. So here we are: that is what I'm going to do, and I'll write about it in detail :)
Idea - our requirement

The application helps children (typically between 7 and 11 years old) to automate multiplications between 1 and 10. It also helps their parents get insight into the timings and mistakes of their children's multiplications.
Timeline

No project without a deadline, so I've set my go-production date to August 20th, 2017. So I have about 2 weeks, typically one sprint in our projects.
Following along and feedback

I will tweet, blog and create some videos to show my progress. You can follow along and reach me on any of those channels. If you have any questions, tips or remarks during the development, don't hesitate to add a comment. I always welcome new ideas or insights and am happy to go into more detail if something is not clear.
High-level break-down of the plan for the following days
  • Create user stories and supporting ERD
  • List of the tools I use and why I use them
  • Set up the development environment
  • Create the Oracle database objects
  • Set up a domain name
  • Set up reverse proxy and https
  • Create a landing page and communicate
  • Build the Oracle APEX application: the framework
  • Refine the APEX app: create custom authentication
  • Refine the APEX app: adding the game
  • Refine the APEX app: improve the flow and navigation
  • Refine the APEX app: add ability to print results to PDF
  • Set up build process
  • Check security
  • Communicate first version of the app to registered people
  • Check performance
  • Refine the APEX app: add more reports and statistics
  • Check and reply to feedback
  • Set up automated testing
  • A word on debugging
  • Refine the APEX app: making final changes
  • Set up backups
  • Verify documentation and lessons learned
  • Close the loop and Celebrate :)
So now, let's get started ...
Categories: Development

12c MultiTenant Posts -- 7 : Adding Custom Service to PDB (nonRAC/GI)

Hemant K Chitale - Sat, 2017-08-05 10:20
Earlier I have already demonstrated adding and managing custom services in a RAC environment in a blog post and a video.

But what if you are running Single Instance and not using Grid Infrastructure?  The srvctl command in Grid Infrastructure is what you'd use to add and manage services in RAC and Oracle Restart environments.  But without Grid Infrastructure, you can fall back on DBMS_SERVICE.

The DBMS_SERVICE API has been available since Oracle 8i -- when Services were introduced.

Here is a quick demo of some facilities with DBMS_SERVICE.

1.  Adding a Custom Service into a PDB :

$sqlplus system/oracle@NEWPDB

SQL*Plus: Release 12.2.0.1.0 Production on Sat Aug 5 22:52:21 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Last Successful login time: Mon Jul 10 2017 22:22:30 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> show con_id

CON_ID
------------------------------
4
SQL>
SQL> execute dbms_service.create_service('HR','HR');

PL/SQL procedure successfully completed.

SQL> execute dbms_service.start_service('HR');

PL/SQL procedure successfully completed.

SQL>


Connecting to the service via tnsnames.

SQL> connect hemant/hemant@HR
Connected.
SQL> show con_id

CON_ID
------------------------------
4
SQL>


2.  Disconnecting all connected users on the Service

$sqlplus system/oracle@NEWPDB

SQL*Plus: Release 12.2.0.1.0 Production on Sat Aug 5 23:02:47 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Last Successful login time: Sat Aug 05 2017 23:02:28 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>
SQL> execute dbms_service.disconnect_session(-
> service_name=>'HR',disconnect_option=>DBMS_SERVICE.IMMEDIATE);

PL/SQL procedure successfully completed.

SQL>
In the HEMANT session connected to HR :
SQL> show con_id
ERROR:
ORA-03113: end-of-file on communication channel
Process ID: 5062
Session ID: 67 Serial number: 12744


SP2-1545: This feature requires Database availability.
SQL>


(Instead of DBMS_SERVICE.IMMEDIATE, we could also specify DBMS_SERVICE.POST_TRANSACTION).


3.  Shutting down a Service without closing the PDB :

SQL> execute dbms_service.stop_service('HR');

PL/SQL procedure successfully completed.

SQL>
SQL> connect hemant/hemant@HR
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Warning: You are no longer connected to ORACLE.
SQL>


Does restarting the Database, restart this custom service?

SQL> connect / as sysdba
Connected.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 838860800 bytes
Fixed Size 8798312 bytes
Variable Size 343936920 bytes
Database Buffers 478150656 bytes
Redo Buffers 7974912 bytes
Database mounted.
Database opened.
SQL> alter pluggable databas all open;
alter pluggable databas all open
*
ERROR at line 1:
ORA-02000: missing DATABASE keyword


SQL> alter pluggable database all open;

Pluggable database altered.

SQL> connect hemant/hemant@NEWPDB
Connected.
SQL> connect hemant/hemant@HR
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Warning: You are no longer connected to ORACLE.
SQL>
SQL> connect system/oracle@NEWPDB
Connected.
SQL> execute dbms_service.start_service('HR');

PL/SQL procedure successfully completed.

SQL> connect hemant/hemant@HR
Connected.
SQL>


I had to reSTART this custom service ('HR') after the PDB was OPENed.

Services is a facility that has been available since 8i, even outside OPS.  However, Services were apparently being used by most sites only in RAC environments.

Services allow you to run multiple "applications" (each application advertised as a Service) within the same (one) database.

Note that, in a RAC environment, srvctl configuration of Services can configure auto-restart of the Service.
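
For comparison, in a RAC or Oracle Restart environment the equivalent registration would be done through srvctl, something along these lines (a sketch only; 'CDB1' is a placeholder for the container database name and the options vary by version):

srvctl add service -db CDB1 -service HR -pdb NEWPDB
srvctl start service -db CDB1 -service HR
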
.
.
.

Categories: DBA Blogs

Encryption of shell scripts

Yann Neuhaus - Sat, 2017-08-05 07:52

In this blog, I will talk about the encryption of files and in particular the encryption of a shell script because that was my use case. Before starting, some people may say/think that you shouldn’t encrypt any scripts and I globally agree with that BUT I still think that there might be some exceptions. I will not debate this further but I found the encryption subject very interesting so I thought I would write a small blog with my thoughts.

 

Encryption?

So, when we talk about encryption, what is it exactly? There are actually two not-so-different concepts that people often mix up: encryption and obfuscation. Encryption is a technique to keep information confidential by changing its form so that it becomes unreadable. Obfuscation, on the other hand, refers to protecting something by trying to hide it, converting it into something more difficult to read, but not completely unreadable. The main difference is that if you know which technique was used to encrypt something, you still cannot decrypt it without the key, while you can remove the obfuscation if you know how it was done.
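
A trivial illustration of the difference (my own example, not tied to any specific tool): base64 is obfuscation, since anyone can reverse it without a secret, while openssl with a passphrase is encryption:

echo "MyPassword" | base64                        # obfuscation: reversible by anyone with 'base64 -d'
echo "MyPassword" | openssl enc -aes-256-cbc -a   # encryption: cannot be reversed without the passphrase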

The reason why I'm including this small paragraph in this blog is because, when I was searching for a way to encrypt a shell script on Linux, I read a LOT of blogs and websites that just got it wrong… The problem with encrypted shell scripts is that at some point, the Operating System will need to know which commands should be executed. So, at some point, the script will need to be decrypted.

 

Shell script

So, let's start with the creation of a test shell script that I will use for the rest of this blog. I'm creating a small, very simple test script which contains a non-encrypted password that I need to enter correctly in order to get an exit code of 0. If the password is wrong, after 3 tries, I should get an exit code of 1. Please note that if the shell script contains interactions, then you need to use the redirection from tty ("< /dev/tty") like I did in my example.

Below I’m displaying the content of this script and using it, without encryption, to show you the output. Please note that in my scripts, I included colors (green for INFO and OK, yellow for WARN and red for ERROR messages) which aren’t displayed in the blog… Sorry about that but I can’t add colors to the blog unfortunately!

[morgan@linux_server_01 ~]$ cat test_script.sh
#!/bin/bash
#
# File: test_script.sh
# Purpose: Shell script to test the encryption solutions
# Author: Morgan Patou (dbi services)
# Version: 1.0 29-Jul-2017
#
###################################################

### Defining colors & execution folder
red_c="\033[31m"
yellow_c="\033[33m"
green_c="\033[32m"
end_c="\033[m"
script_folder=`which ${0}`
script_folder=`dirname ${script_folder}`

### Verifying password
script_password="TestPassw0rd"
echo
echo -e "${green_c}INFO${end_c} - This file is a test script to test the encryption solutions."
echo -e "${green_c}INFO${end_c} - Entering the correct password will return an exit code of 0."
echo -e "${yellow_c}WARN${end_c} - Entering the wrong password will return an exit code of 1."
echo
retry_count=0
retry_max=3
while [ "${retry_count}" -lt "${retry_max}" ]; do
  echo
  read -p "  ----> Please enter the password to execute this script: " entered_password < /dev/tty
  if [[ "${entered_password}" == "${script_password}" ]]; then
    echo -e "${green_c}OK${end_c} - The password entered is the correct one."
    exit 0
  else
    echo -e "${yellow_c}WARN${end_c} - The password entered isn't the correct one. Please try again."
    retry_count=`expr ${retry_count} + 1`
  fi
done

echo -e "${red_c}ERROR${end_c} - Too many failed attempts. Exiting."
exit 1

[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ chmod 700 test_script.sh
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ ./test_script.sh

INFO - This file is a test script to test the encryption solutions.
INFO - Entering the correct password will return an exit code of 0.
WARN - Entering the wrong password will return an exit code of 1.


  ----> Please enter the password to execute this script: Password1
WARN - The password entered isn't the correct one. Please try again.

  ----> Please enter the password to execute this script: Password2
WARN - The password entered isn't the correct one. Please try again.

  ----> Please enter the password to execute this script: Password3
WARN - The password entered isn't the correct one. Please try again.
ERROR - Too many failed attempts. Exiting.
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ echo $?
1
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ ./test_script.sh

INFO - This file is a test script to test the encryption solutions.
INFO - Entering the correct password will return an exit code of 0.
WARN - Entering the wrong password will return an exit code of 1.


  ----> Please enter the password to execute this script: TestPassw0rd
OK - The password entered is the correct one.
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ echo $?
0
[morgan@linux_server_01 ~]$

 

As you can see above, the script is doing what I expect it to do so that’s fine.

 

SHc?

So, what is SHc? Is it really a way to encrypt your shell scripts?

Simple answer: I would NOT use SHc for that. I don't have anything against SHc; it is actually a utility that might be useful, but from my point of view it's clearly not a good solution for encrypting a shell script.

 

SHc is a utility (check its website) that – from a shell script – will create C source code which represents it, using an RC4 algorithm. This C source code contains a random structure as well as the decryption method. It is then compiled to create a binary file. The problem with SHc is that the binary file contains the original shell script (encrypted) but also the decryption material, because this is needed to execute it. So, let's install this utility:

[morgan@linux_server_01 ~]$ wget http://www.datsi.fi.upm.es/~frosal/sources/shc-3.8.9b.tgz
--2017-07-29 14:10:14--  http://www.datsi.fi.upm.es/~frosal/sources/shc-3.8.9b.tgz
Resolving www.datsi.fi.upm.es... 138.100.9.22
Connecting to www.datsi.fi.upm.es|138.100.9.22|:80... connected.
Proxy request sent, awaiting response... 200 OK
Length: 20687 (20K) [application/x-gzip]
Saving to: “shc-3.8.9b.tgz”

100%[===================================================================>] 20,687      --.-K/s   in 0.004s

2017-07-29 14:10:14 (5.37 MB/s) - “shc-3.8.9b.tgz” saved [20687/20687]

[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ tar -xvzf shc-3.8.9b.tgz
shc-3.8.9b/CHANGES
shc-3.8.9b/Copying
shc-3.8.9b/match
shc-3.8.9b/pru.sh
shc-3.8.9b/shc-3.8.9b.c
shc-3.8.9b/shc.c
shc-3.8.9b/shc.1
shc-3.8.9b/shc.README
shc-3.8.9b/shc.html
shc-3.8.9b/test.bash
shc-3.8.9b/test.csh
shc-3.8.9b/test.ksh
shc-3.8.9b/makefile
shc-3.8.9b/testit
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ cd shc-3.8.9b/
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ make
cc -Wall  shc.c -o shc
***     Do you want to probe shc with a test script?
***     Please try...   make test
[morgan@linux_server_01 shc-3.8.9b]$

 

At this point, I only built the utility locally because I will be removing it shortly. Now, let’s “encrypt” the file using shc:

[morgan@linux_server_01 shc-3.8.9b]$ cp ../test_script.sh ./
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ ls test_script*
test_script.sh
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ ./shc -f test_script.sh
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ ls test_script*
test_script.sh  test_script.sh.x  test_script.sh.x.c
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ # Removing the C source code and original script
[morgan@linux_server_01 shc-3.8.9b]$ rm test_script.sh test_script.sh.x.c
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ # Renaming the "encrypted" file to .bin
[morgan@linux_server_01 shc-3.8.9b]$ mv test_script.sh.x test_script.bin
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ ls test_script*
test_script.bin
[morgan@linux_server_01 shc-3.8.9b]$

 

So above, I used shc and it created two files:

  • test_script.sh.x => This is the C compiled file which can then be executed. I renamed it to test_script.bin to really see the differences between the files
  • test_script.sh.x.c => This is the C source code which I removed since I don’t need it

 

At this point, if you try to view the content of the .bin file (previously test_script.sh.x), you will not be able to see the real content; you will see something that looks like a real binary executable. To see its "binary" content, you can use the "strings" command, which will display all readable (printable) strings from the file, and you will see that we cannot see the password or any commands from the original shell script. So, at first glance, it seems to be a success: the shell script seems to be encrypted:

[morgan@linux_server_01 shc-3.8.9b]$ strings test_script.bin
/lib64/ld-linux-x86-64.so.2
__gmon_start__
libc.so.6
sprintf
perror
__isoc99_sscanf
fork
...
EcNB
,qIB`^
gLSI
U)L&
fX4u
j[5,
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ ./test_script.bin
 
INFO - This file is a test script to test the encryption solutions.
INFO - Entering the correct password will return an exit code of 0.
WARN - Entering the wrong password will return an exit code of 1.
 
 
  ----> Please enter the password to execute this script: Password1
WARN - The password entered isn't the correct one. Please try again.
 
  ----> Please enter the password to execute this script: TestPassw0rd
OK - The password entered is the correct one.
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ echo $?
0
[morgan@linux_server_01 shc-3.8.9b]$

 

So, what is the issue with SHc? Why am I saying that this isn't a suitable encryption solution? Well, that's because you can always just strip the text out of the file or substitute the normal shell with another one in order to grab the text when it runs. There are also several projects on GitHub (like UnSHc) which will allow you to retrieve the original content of the shell script and revert the changes done by SHc. This works because the content of the bin file is predictable and can be analysed in order to decrypt it. So, that's not really a good solution I would say.

There are a lot of ways to see the original content of a file encrypted by SHc. One of them is just checking the list of processes: you will see that the original shell script is actually passed as a parameter to the binary file in this format: ./test_script.bin -c   <<<a lot of spaces>>>    <<<script_unencrypted_newlines_separated_by_'?'>>>. See my example below:

[morgan@linux_server_01 shc-3.8.9b]$ ./test_script.bin& (ps -ef | grep "test_script.bin" | grep -v grep > test_decrypt_content.sh)
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ # The real file is in 1 line only. For readability on the blog, I split that in several lines 
[morgan@linux_server_01 shc-3.8.9b]$ cat test_decrypt_content.sh
405532   20125  2024  0 16:18 pts/3    00:00:00 ./test_script.bin -c                                                                              
                                                                                                                                                  
                                                                                                                                                  
                                                                                                                                                  
                                                                                                                                                  
                                                                                                                                                  
#!/bin/bash?#?# File: test_script.sh?# Purpose: Shell script to test the encryption solutions?# Author: Morgan Patou (dbi services)?# Version: 1.029-Jul-2017?
#?###################################################??### Defining colors & execution folder?red_c="33[31m"?yellow_c="33[33m"?green_c="33[32m"?end_c="\
033[m"?script_folder=`which ${0}`?script_folder=`dirname ${script_folder}`??### Verifying password?script_password="TestPassw0rd"?echo?echo -e "${green_c}INFO
${end_c} - This file is a test script to test the encryption solutions."?echo -e "${green_c}INFO${end_c} - Entering the correct password will return an exit c
ode of 0."?echo -e "${yellow_c}WARN${end_c} - Entering the wrong password will return an exit code of 1."?echo?retry_count=0?retry_max=3?while [ "${retry_coun
t}" -lt "${retry_max}" ]; do?  echo?  read -p "  ----> Please enter the password to execute this script: " entered_password < /dev/tty?  if [[ "${entered_pass
word}" == "${script_password}" ]]; then?    echo?    echo -e "${green_c}OK${end_c} - The password entered is the correct one."?    exit 0?  else?    echo -e "
${yellow_c}WARN${end_c} - The password entered isn't the correct one. Please try again."?    retry_count=`expr ${retry_count} + 1`?  fi?done??echo -e "${red_c
}ERROR${end_c} - Too many failed attempts. Exiting."?exit 1?? ./test_script.bin
[morgan@linux_server_01 shc-3.8.9b]$

 

As you can see above, the whole content of the original shell script is displayed in the "ps" output. Not very hard to find out what the original content is… With a pretty simple command, we can even reformat the original file:

[morgan@linux_server_01 shc-3.8.9b]$ sed -i -e 's,?,\n,g' -e 's,.*     [[:space:]]*,,' test_decrypt_content.sh
[morgan@linux_server_01 shc-3.8.9b]$
[morgan@linux_server_01 shc-3.8.9b]$ cat test_decrypt_content.sh
#!/bin/bash
#
# File: test_script.sh
# Purpose: Shell script to test the encryption solutions
# Author: Morgan Patou (dbi services)
# Version: 1.0 29-Jul-2017
#
###################################################

### Defining colors & execution folder
red_c="\033[31m"
yellow_c="\033[33m"
green_c="\033[32m"
end_c="\033[m"
script_folder=`which ${0}`
script_folder=`dirname ${script_folder}`

### Verifying password
script_password="TestPassw0rd"
echo
echo -e "${green_c}INFO${end_c} - This file is a test script to test the encryption solutions."
echo -e "${green_c}INFO${end_c} - Entering the correct password will return an exit code of 0."
echo -e "${yellow_c}WARN${end_c} - Entering the wrong password will return an exit code of 1."
echo
retry_count=0
retry_max=3
while [ "${retry_count}" -lt "${retry_max}" ]; do
  echo
  read -p "  ----> Please enter the password to execute this script: " entered_password < /dev/tty
  if [[ "${entered_password}" == "${script_password}" ]]; then
    echo -e "${green_c}OK${end_c} - The password entered is the correct one."
    exit 0
  else
    echo -e "${yellow_c}WARN${end_c} - The password entered isn't the correct one. Please try again."
    retry_count=`expr ${retry_count} + 1`
  fi
done

echo -e "${red_c}ERROR${end_c} - Too many failed attempts. Exiting."
exit 1

 ./test_script.bin
[morgan@linux_server_01 shc-3.8.9b]$

 

And voila, with two very simple commands, it is possible to retrieve the original file with its original formatting too (just remove the final line, which is the call of the script itself). Please also note that if the original script contains some '?' characters, they will also be replaced with a newline, but that's spotted pretty easily. With shell options, you can also just ask your shell to print all the commands that it executes, so again, without even additional commands, you can see the content of the binary file.

 

What solution then?

For this section, I will re-use the same un-encrypted shell script (test_script.sh). So, what can be done to really protect a shell script? Well, there are no perfect solutions because, as I said previously, at some point the OS will need to know which commands should be executed and for that purpose it needs to be decrypted. There are a few ways to encrypt a shell script but the simplest would probably be to use openssl, because it's quick, it's free and it's portable without having to install anything, since openssl is usually already there on Linux. Also, it allows you to choose the encryption algorithm you want to use. To encrypt the base file, I created a small shell script which I named "encrypt_script.sh". This shell script takes as input the un-encrypted original file, and a second parameter which is the output file that will contain the encrypted version:

[morgan@linux_server_01 shc-3.8.9b]$ cd ..
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ ls
encrypt_script.sh shc-3.8.9b  shc-3.8.9b.tgz  test_script.sh
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ rm -rf shc-3.8.9b*
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ cat encrypt_script.sh
#!/bin/bash
#
# File: encrypt_script.sh
# Purpose: Script to encrypt a shell script and provide the framework around it for execution
# Author: Morgan Patou (dbi services)
# Version: 1.0 26/03/2016
#
###################################################

### Defining colors & execution folder
green_c="\033[32m"
end_c="\033[m"
script_folder="`which ${0}`"
script_folder="`dirname ${script_folder}`"
encryption="aes-256-cbc"

### Help
if [[ ${#} != 2 ]]; then
  echo -e "`basename ${0}`: usage: ${green_c}`basename ${0}`${end_c} <${green_c}shell_script_to_encrypt${end_c}> <${green_c}encrypted_script${end_c}>"
  echo -e "\t<${green_c}shell_script_to_encrypt${end_c}>  : Name of the shell script to encrypt. Must be placed under '${green_c}${script_folder}${end_c}'"
  echo -e "\t<${green_c}encrypted_script${end_c}>         : Name of the encrypted script to be created. The file will be created under '${green_c}${script_folder}${end_c}'"
  echo
  exit 1
else
  shell_script_to_encrypt="${1}"
  encrypted_script="${2}"
fi

### Encrypting the input file into a temp file
openssl enc -e -${encryption} -a -A -in "${script_folder}/${shell_script_to_encrypt}" > "${script_folder}/${shell_script_to_encrypt}.txt"

### Creating the output script with the requested name and containing the content to decrypt it
echo "#!/bin/bash" > "${script_folder}/${encrypted_script}"
echo "# " >> "${script_folder}/${encrypted_script}"
echo "# File: ${encrypted_script}" >> "${script_folder}/${encrypted_script}"
echo "# Purpose: Script containing the encrypted version of ${shell_script_to_encrypt} (this file has been generated using `basename ${0}`)" >> "${script_folder}/${encrypted_script}"
echo "# Author: Morgan Patou (dbi services)" >> "${script_folder}/${encrypted_script}"
echo "# Version: 1.0 26/03/2016" >> "${script_folder}/${encrypted_script}"
echo "# " >> "${script_folder}/${encrypted_script}"
echo "###################################################" >> "${script_folder}/${encrypted_script}"
echo "" >> "${script_folder}/${encrypted_script}"
echo "#Storing the encrypted script in a variable" >> "${script_folder}/${encrypted_script}"
echo "encrypted_script=\"`cat "${script_folder}/${shell_script_to_encrypt}.txt"`\"" >> "${script_folder}/${encrypted_script}"
echo "" >> "${script_folder}/${encrypted_script}"
echo "#Decrypting the encrypted script and executing it" >> "${script_folder}/${encrypted_script}"
echo "echo \"\${encrypted_script}\" | openssl enc -d -${encryption} -a -A | sh -" >> "${script_folder}/${encrypted_script}"
echo "" >> "${script_folder}/${encrypted_script}"

### Removing the temp file and setting the output file to executable
rm "${script_folder}/${shell_script_to_encrypt}.txt"
chmod 700 "${script_folder}/${encrypted_script}"
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ ./encrypt_script.sh
encrypt_script.sh: usage: encrypt_script.sh <shell_script_to_encrypt> <encrypted_script>
        <shell_script_to_encrypt>  : Name of the shell script to encrypt. Must be placed under '/home/morgan'
        <encrypted_script>         : Name of the encrypted script to be created. The file will be created under '/home/morgan'

[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ ./encrypt_script.sh test_script.sh encrypted_test_script.sh
enter aes-256-cbc encryption password:
Verifying - enter aes-256-cbc encryption password:
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$
[morgan@linux_server_01 ~]$ # The real variable "encrypted_script" below is in 1 line only. For readability, I split that in several lines
[morgan@linux_server_01 ~]$ cat encrypted_test_script.sh
#!/bin/bash
#
# File: encrypted_test_script.sh
# Purpose: Script containing the encrypted version of test_script.sh (this file has been generated using encrypt_script.sh)
# Author: Morgan Patou
# Version: 1.0 26/03/2016
#
###################################################

#Storing the encrypted script in a variable
encrypted_script="U2FsdGVkX18QaIvqrQ27FQE8fNhJi2Izi9zRHwANEEt4WJkA3gQzOkrPOF+JYpIEFuvjweL2Eq02vr0MhkjMXIGXYlLipQ7U8TG912/9LdUOYlEx7YV4/1g9enBfZc2gBRHcGL6XW7oMih3wexGNrrq3J5Ys+mDgrmKDLJ75aU6v87iIPFi2ZfFx2NchAc4tHHDQ8gcZFLMByCkWwPZoicx8ODgUstNLRHKTMA7nj/v0fig1BLygQUQpEFjvNTScK6MT01aby8DvNuka0t0hjavTcP8gBEFVC5GQk3Ds/FVQBDqCdltxIhtnHGgbetloKHVwieSw+OsfKyKj9fuOKJ4RRCb7pNq42FHtiwUHhy2FkpxbkJxLgT3uMJopqJy3dU8tlf3nRqGQbm1eNZsf+uWLxgmd7Eq5rsywZjwjbsq1oIeCGzEq4k6WNCbMi3O1RIkKmJ6eR1q8pZcmLT6sEGJUlO3PfkD7ONcO4Ta48zCi7Rsi1PNJouGyNK8NrD34pbEKwu9MTsYTyNzKHCScDjt8QQne6NB+3ODQM26/6SAUM5gd9WmzZMByW6gFyKmkXhRxHsWDlNN5SJDbdd5w4r7+guqnLo/31hZSC2GZLSbQzrmz5FMKoriSuSxmZITQMV5yMp1IaYzJGxTECyl2V5g89aiOLqhehlM6c4uDfkPYZtZlmPX1JVfTTTy7dUeu08VUQqzvU2qdJV4g2rKJQtMw7py4B4a8E0+ShQgpp/Zi6yvKDxlzx9oZC+Gjtegg7TEsOx4kiefzSr+s3Vy/5puBza1vFBG51ZygyDb+p/ptCrmwUClY9qqR7bm+Wd9uRsG41XxReI5WXyZt1t/GZT0x5EkYQ5tn1DKQMc33G1f11yYTSZinwbbO49qL5xw0ZCSUB5AKTBye+b3rHTNKIhkd16P3+rkUN5fjMgUgEo0ojhh99PmwzszVJYdZQdliyHXbn1PJNMa4BLebmcH8PP6uzz8IDaMLrhHkFGTlkTQY+DoMPCb5FXztth3+FVry/Z2AdFDKogB7rXFfWeGWfQ4F+nZnvcqzasZTL9vWLGiFYCovra29ul5pHU5xLeTxi6FSC5naoT2yj0KY2jaRyPc4MKhb5T6DU/K/Wgj/0TNIS0TL/sbReprFtU0f/Kj6z/tzsIucBb0hN9QFIlOBzDfS0dz5xYoMlJ4Es22iMELiNhvF/zv6+j7IE0QdxhfcnJbYZAA9/ehL2osABkSCOBwUH8dkC1CSAvjgYB/WZSGAWpQhrARWTIJiwEYeMMh1+lRmR9qk4OrWzzJrgLvKOrYTjeAMmXZrRFt8vGQ5I7jiJN2VwET4zqm8pppY4eptK9Uaac2sEunGoxg0eBhuWY6dYgDeW6RMa3kK4wJ3DafJLlhmrhpxULEI8Owo8SzJjHpR+UrhrK3hPBw/Zy30El6MCIJ6pJNgeETpF4naK/EZqqKzrxQ8uSAwLDIucVVtOEdV+4lIcISPV1jza2O4eMu/1W39jSs6sA1ORb8H/taSkYvO80iygERCcYCxNBHZEW3mWRzGGWwojpQjmKaALCHYxprmXdKaL8aDoV+43V+90UO++gfamW8kWxzVeV7R/VoyhQQ1R+tem5eGZSsRpMEL7k1p7YIwyg3Yxt3bha22DEDf0UUzzOwakpnK09gzCnxH3RUSSNnutEkTSw9I22IZXJRkrHydARauj7S0Fd9MDRPgBRloiELVNM2uVNyCdFtMheg8q0wlF+GKLvWyzQ=="

#Decrypting the encrypted script and executing it
echo "${encrypted_script}" | openssl enc -d -aes-256-cbc -a -A | sh -

[morgan@linux_server_01 ~]$

 

As you can see above, when encrypting the shell script, you will have to enter an encryption password. This is NOT the password contained in the original shell script. This is a new password that you define and that you will need to remember because, without it, you will NOT be able to execute the encrypted script properly. Also, you can see that the file “encrypted_test_script.sh” contains the variable “encrypted_script”, which holds the encrypted string representing the original shell script.
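To recap the mechanics in their simplest form, here is a condensed sketch of the round-trip that encrypt_script.sh and the generated file perform; the file names (plain.sh and plain.sh.enc) are assumptions for this example only.

# Encrypt the script (you are prompted for the encryption password); -a/-A base64-encode the result on a single line
openssl enc -e -aes-256-cbc -a -A -in plain.sh -out plain.sh.enc

# At execution time, decrypt it (same password) and pipe the clear text straight into a shell, without writing it to disk
openssl enc -d -aes-256-cbc -a -A -in plain.sh.enc | sh -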

/!\ Please note that if you replace “sh -” at the end of the file with “cat”, for example, then upon execution you will see the content of the original shell script. That supposes that you know the password to decrypt it, of course, so that’s still secure. However, it would be easy for someone with bad intentions to change the generated encrypted script so that, when you execute it and provide the right password, it in fact sends it via email or something like that. I will not describe it here but it would be possible to protect yourself against that by using signatures, for example, so you are sure the content of the shell script is the one you generated and it hasn’t been tampered with.
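For illustration only, here is a minimal sketch of such a signature check using openssl; the key and file names (private.pem, public.pem) are assumptions, and this is not part of the solution described above, just one way to detect tampering.

# One-time setup: generate a key pair (keep private.pem somewhere safe, distribute public.pem alongside the script)
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem

# After generating the encrypted script, sign it with the private key
openssl dgst -sha256 -sign private.pem -out encrypted_test_script.sh.sig encrypted_test_script.sh

# Before executing it, verify the signature with the public key and only run the script if openssl prints "Verified OK"
openssl dgst -sha256 -verify public.pem -signature encrypted_test_script.sh.sig encrypted_test_script.sh \
  && ./encrypted_test_script.sh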

So like I said before, no perfect solutions… Or at least no easy solutions.

 

To execute the encrypted script, enter the encryption password and then the script is executed automatically:

[morgan@linux_server_01 ~]$ ./encrypted_test_script.sh
enter aes-256-cbc decryption password:

INFO - This file is a test script to test the encryption solutions.
INFO - Entering the correct password will return an exit code of 0.
WARN - Entering the wrong password will return an exit code of 1.


  ----> Please enter the password to execute this script: Password1
WARN - The password entered isn't the correct one. Please try again.

  ----> Please enter the password to execute this script: TestPassw0rd
OK - The password entered is the correct one.
[morgan@linux_server_01 ~]$

 

Complicated topic, isn’t it? I’m not a security expert but I like these kinds of subjects, so… If you have other ideas or thoughts, don’t hesitate to share!

 

 

Cet article Encryption of shell scripts est apparu en premier sur Blog dbi services.

Database links without specifying password (using Oracle Wallet)

Tom Kyte - Sat, 2017-08-05 05:46
Is it possible to create a database link without specifying the password (say, somehow using an Oracle wallet)? As of now we use passwords for everything - JDBC connections, database connections (languages other than Java), creating database links ...
Categories: DBA Blogs

listagg gives ORA-01427: single-row subquery returns more than one row

Tom Kyte - Sat, 2017-08-05 05:46
I need to concatenate row fields into one field and I'm trying to use LISTAGG, but I need the values in the list to be distinct. I was able to do almost everything with regexp_replace as an alternative, but when I have too many orders for a customer I would...
Categories: DBA Blogs

How to run a update query without commit at the end , inside my pl/sql block multiple times without waiting for lock ?

Tom Kyte - Sat, 2017-08-05 05:46
I have an update query inside a pl/SQL block. The pl/SQL block is optimised to execute within 800 ms. I have tested the code and it executes fine. However, if my code is put to test on regression it is taking huge time to complete. My code is bein...
Categories: DBA Blogs

Reports - web.show_document userid password contains character .#

Tom Kyte - Sat, 2017-08-05 05:46
We use web.show_document to view reports and there are users who have a # character in their password. For these users a rep-0501 message appears; for the other users who do not have that character in their password, everything works fine. Here I show t...
Categories: DBA Blogs

Alternative for CLOB data type in oracle 12g

Tom Kyte - Sat, 2017-08-05 05:46
I wanted to know what the best alternative data type for CLOB is. My current database has a few CLOB data types. Is the CLOB data type going to be deprecated in a newer version? I tried to search around and it seems like varchar2 will be the alte...
Categories: DBA Blogs

Is there a UTL_MAIL connection limit?

Tom Kyte - Sat, 2017-08-05 05:46
Hi, We recently encountered a connection limit on UTL_SMTP of 16 open connections. This is not because connections are being left open; it's just that we have reached a threshold in the number of applications utilising the UTL_SMTP package on our o...
Categories: DBA Blogs

LiveSQL: Accepting Input From User

Tom Kyte - Sat, 2017-08-05 05:46
I am not able to accept input from the user on the Live SQL platform. I have tried both & and :, but I am not able to accept the input from the user. Please suggest the syntax for the same. Thanks in advance
Categories: DBA Blogs

Documentum – Unable to install xCP 2.3 on a CS 7.3

Yann Neuhaus - Sat, 2017-08-05 02:53

Beginning of this year, we were doing our first silent installations of the new Documentum stack. I already wrote a few blogs about some issues with CS 7.3 and xPlore 1.6. This time, I will talk about xCP 2.3 and in particular its installation on a CS 7.3. The patch level of xCP as well as the patch level of the CS 7.3 doesn’t matter since all versions are affected. Please just note that the first supported patch on a CS 7.3 is xCP 2.3 P03, so you shouldn’t be installing a previous patch on 7.3.

So, when installing an xCP 2.3 on a Content Server 7.3, you will get a pop-up in the installer with the following error message: “Installation of DARs failed”. You will only have an “OK” button on this pop-up, which will close the installer. OK, so there is an issue with the installation of the DARs, but what is the issue exactly?

 

In the installation log file, we can see the following:

[dmadmin@content_server_01 ProcessEngine]$ cat logs/install.log
13:44:45,356  INFO [Thread-8] com.documentum.install.pe.installanywhere.actions.PEInitializeSharedLibrary - Done InitializeSharedLibrary ...
13:44:45,395  INFO [Thread-10] com.documentum.install.appserver.jboss.JbossApplicationServer - setApplicationServer sharedDfcLibDir is:$DOCUMENTUM_SHARED/dfc
13:44:45,396  INFO [Thread-10] com.documentum.install.appserver.jboss.JbossApplicationServer - getFileFromResource for templates/appserver.properties
13:44:45,532  WARN [Thread-10] com.documentum.install.pe.installanywhere.actions.DiWAPeInitialize - init-param tags found in Method Server webapp:

<init-param>
      <param-name>docbase_install_owner_name</param-name>
      <param-value>dmadmin</param-value>
</init-param>
<init-param>
      <param-name>docbase-GR_DOCBASE</param-name>
      <param-value>GR_DOCBASE</param-value>
</init-param>
<init-param>
      <param-name>docbase-DocBase1</param-name>
      <param-value>DocBase1</param-value>
</init-param>
<init-param>
      <param-name>docbase-DocBase2</param-name>
      <param-value>DocBase2</param-value>
</init-param>
13:44:58,771  INFO [AWT-EventQueue-0] com.documentum.install.pe.ui.panels.DiWPPELicenseAgreementPanel - UserSelection: "I accept the terms of the license agreement."
13:46:13,398  INFO [AWT-EventQueue-0] com.documentum.install.appserver.jboss.JbossApplicationServer - The batch file: $DOCUMENTUM_SHARED/temp/installer/wildfly/dctm_tmpcmd0.sh exist? false
13:46:13,399  INFO [AWT-EventQueue-0] com.documentum.install.appserver.jboss.JbossApplicationServer - The user home is : /home/dmadmin
13:46:13,405  INFO [AWT-EventQueue-0] com.documentum.install.appserver.jboss.JbossApplicationServer - Executing temporary batch file: $DOCUMENTUM_SHARED/temp/installer/wildfly/dctm_tmpcmd0.sh for running: $DOCUMENTUM_SHARED/java64/1.8.0_77/bin/java -cp $DOCUMENTUM_SHARED/wildfly9.0.1/modules/system/layers/base/emc/documentum/security/main/dfc.jar:$DOCUMENTUM_SHARED/wildfly9.0.1/modules/system/layers/base/emc/documentum/security/main/aspectjrt.jar:$DOCUMENTUM_SHARED/wildfly9.0.1/modules/system/layers/base/emc/documentum/security/main/DctmUtils.jar com.documentum.install.appserver.utils.DctmAppServerAuthenticationString $DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer jboss
13:46:42,320  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeInstallActions - starting DctmActions
13:46:42,724  INFO [installer] com.documentum.install.appserver.jboss.JbossApplicationServer - user name = admin
13:46:42,724  INFO [installer] com.documentum.install.appserver.jboss.JbossApplicationServer - Server DctmServer_MethodServer already exists!
13:46:42,725  INFO [installer] com.documentum.install.appserver.jboss.JbossApplicationServer - Deploying to Group MethodServer... bpm (bpm.ear): does not exist!
13:46:42,725  INFO [installer] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer/deployments/bpm.ear/APP-INF/classes/dfc.properties
13:46:42,725  INFO [installer] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer/deployments/bpm.ear/APP-INF/classes/log4j.properties
13:46:42,725  INFO [installer] com.documentum.install.appserver.jboss.JbossApplicationServer - resolving $DOCUMENTUM_SHARED/wildfly9.0.1/server/DctmServer_MethodServer/deployments/bpm.ear/bpm.war/WEB-INF/web.xml
13:46:42,727  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeInstallActions - Finished DctmActions.
13:46:44,885  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars - Start to deploy dars for docbase: DocBase2
13:52:20,931  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars - End to deploy dars for repository: DocBase2
13:52:20,932  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars - Start to deploy dars for docbase: DocBase1
13:57:59,510  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars - End to deploy dars for repository: DocBase1
13:57:59,511  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars - Start to deploy dars for docbase: GR_DOCBASE
14:04:03,231  INFO [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars - End to deploy dars for repository: GR_DOCBASE
14:04:03,268 ERROR [installer] com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars - Installation of DARs failed
com.documentum.install.shared.common.error.DiException: 3 DAR(s) failed to install.
        at com.documentum.install.shared.common.services.dar.DiDocAppFailureList.report(DiDocAppFailureList.java:39)
        at com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars.deployDars(DiPAPeProcessDars.java:123)
        at com.documentum.install.pe.installanywhere.actions.DiPAPeProcessDars.setup(DiPAPeProcessDars.java:71)
        at com.documentum.install.shared.installanywhere.actions.InstallWizardAction.install(InstallWizardAction.java:75)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.GhostDirectory.install(Unknown Source)
        at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
        at com.zerog.ia.installer.Installer.install(Unknown Source)
        at com.zerog.ia.installer.actions.InstallProgressAction.ae(Unknown Source)
        at com.zerog.ia.installer.actions.ProgressPanelAction$1.run(Unknown Source)
14:04:03,269  INFO [installer]  - The INSTALLER_UI value is SWING
14:04:03,269  INFO [installer]  - The env PATH value is: /usr/xpg4/bin:$DOCUMENTUM_SHARED/java64/JAVA_LINK/bin:$DOCUMENTUM/product/7.3/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$DOCUMENTUM_SHARED/java64/JAVA_LINK/bin:$DOCUMENTUM/product/7.3/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$DOCUMENTUM/product/7.3/bin:$ORACLE_HOME/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/dmadmin/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
[dmadmin@content_server_01 ProcessEngine]$

 

It is mentioned that three DARs failed to install, but since there are three docbases here, that’s actually one DAR per docbase. The only interesting information we can find in the install log file is that some DARs were installed properly, so it’s not a generic issue but more likely an issue with one specific DAR. The next step is therefore to check the log file of the DAR installation:

[dmadmin@content_server_01 ProcessEngine]$ grep -i ERROR logs/dar_logs/GR_DOCBASE/peDars.log | grep -v "^\[INFO\].*ERROR"
[INFO]  dmbasic.exe output : dmbasic: Error 35 in line 585: Sub or Function not defined
[ERROR]  Unable to install dar file $DOCUMENTUM/product/7.3/install/DARsInternal/BPM.dar
com.emc.ide.installer.InstallException: Error handling controllable object Status = New; IsInstalled = true; com.emc.ide.artifact.bpm.model.bpm.impl.ActivityImpl@5e020dd1 (objectTypeName: null) (objectName: DB Inbound - Initiate, title: , subject: , authors: [], keywords: [], applicationType: , isHidden: false, compoundArchitecture: , componentLabel: [], resolutionLabel: , contentType: xml, versionLabel: [1.0, CURRENT], specialApp: DB-IN-IN.GIF, languageCode: , creatorName: null, archive: false, category: , controllingApp: , effectiveDate: [], effectiveFlag: [], effectiveLabel: [], expirationDate: [], extendedProperties: [], fullText: true, isSigned: false, isTemplate: false, lastReviewDate: null, linkResolved: false, publishFormats: [], retentionDate: null, status: , rootObject: true) (isPrivate: false, definitionState: installed, triggerThreshold: 0, triggerEvent: , execType: manual, execSubType: inbound_initiate, execMethodName: null, preTimer: 0, preTimerCalendarFlag: notusebusinesscal, preTimerRepeatLast: 0, postTimer: 0, postTimerCalendarFlag: notusebusinesscal, postTimerRepeatLast: 0, repeatableInvoke: true, execSaveResults: false, execTimeOut: 0, execErrHandling: stopAfterFailure, signOffRequired: false, resolveType: normal, resolvePkgName: , controlFlag: taskAssignedtoSupervisor, taskName: null, taskSubject: , performerType: user, performerFlag: noDeligationOrExtention, transitionMaxOutputCnt: 0, transitionEvalCnt: trigAllSelOutputLinks, transitionFlag: trigAllSelOutputLinks, transitionType: prescribed, execRetryMax: 0, execRetryInterval: 0, groupFlag: 0, template: true, artifactVersion: D65SP1);  Object ID = 4c0f123450002b1e;
Caused by: DfException:: THREAD: main; MSG: Error while making activity uneditable: com.emc.ide.artifactmanager.model.artifact.impl.ArtifactImpl@4bbc02ef (urn: urnd:com.emc.ide.artifact.bpm.activity/DB+Inbound+-+Initiate?location=%2FTemp%2FIntegration&name=DB+Inbound+-+Initiate, locale: null, repoLocation: null, categoryId: com.emc.ide.artifact.bpm.activity, implicitlyCreated: false, modifiedByUser: true); ERRORCODE: ff; NEXT: null
Caused by: DfException:: THREAD: main; MSG: [DM_WORKFLOW_E_NAME_NOT_EXIST]error:  "The dm_user object by the name 'dm_bps_inbound_user' specified in attribute performer_name does not exist."; ERRORCODE: 100; NEXT: null
[ERROR]  Failed to install DAR
Caused by: com.emc.ide.installer.InstallException: Error handling controllable object Status = New; IsInstalled = true; com.emc.ide.artifact.bpm.model.bpm.impl.ActivityImpl@5e020dd1 (objectTypeName: null) (objectName: DB Inbound - Initiate, title: , subject: , authors: [], keywords: [], applicationType: , isHidden: false, compoundArchitecture: , componentLabel: [], resolutionLabel: , contentType: xml, versionLabel: [1.0, CURRENT], specialApp: DB-IN-IN.GIF, languageCode: , creatorName: null, archive: false, category: , controllingApp: , effectiveDate: [], effectiveFlag: [], effectiveLabel: [], expirationDate: [], extendedProperties: [], fullText: true, isSigned: false, isTemplate: false, lastReviewDate: null, linkResolved: false, publishFormats: [], retentionDate: null, status: , rootObject: true) (isPrivate: false, definitionState: installed, triggerThreshold: 0, triggerEvent: , execType: manual, execSubType: inbound_initiate, execMethodName: null, preTimer: 0, preTimerCalendarFlag: notusebusinesscal, preTimerRepeatLast: 0, postTimer: 0, postTimerCalendarFlag: notusebusinesscal, postTimerRepeatLast: 0, repeatableInvoke: true, execSaveResults: false, execTimeOut: 0, execErrHandling: stopAfterFailure, signOffRequired: false, resolveType: normal, resolvePkgName: , controlFlag: taskAssignedtoSupervisor, taskName: null, taskSubject: , performerType: user, performerFlag: noDeligationOrExtention, transitionMaxOutputCnt: 0, transitionEvalCnt: trigAllSelOutputLinks, transitionFlag: trigAllSelOutputLinks, transitionType: prescribed, execRetryMax: 0, execRetryInterval: 0, groupFlag: 0, template: true, artifactVersion: D65SP1);  Object ID = 4c0f123450002b1e;
Caused by: DfException:: THREAD: main; MSG: Error while making activity uneditable: com.emc.ide.artifactmanager.model.artifact.impl.ArtifactImpl@4bbc02ef (urn: urnd:com.emc.ide.artifact.bpm.activity/DB+Inbound+-+Initiate?location=%2FTemp%2FIntegration&name=DB+Inbound+-+Initiate, locale: null, repoLocation: null, categoryId: com.emc.ide.artifact.bpm.activity, implicitlyCreated: false, modifiedByUser: true); ERRORCODE: ff; NEXT: null
Caused by: DfException:: THREAD: main; MSG: [DM_WORKFLOW_E_NAME_NOT_EXIST]error:  "The dm_user object by the name 'dm_bps_inbound_user' specified in attribute performer_name does not exist."; ERRORCODE: 100; NEXT: null
[dmadmin@content_server_01 ProcessEngine]$

 

With the above, we know that the only failed DAR is BPM.dar and it looks like we have the reason for this: the DAR needs a user named “dm_bps_inbound_user” to proceed with the installation, but this user couldn’t be found and therefore the installation failed. Actually, that’s not the root cause, it’s only a consequence. The real reason why the DAR installation failed is displayed in the first line above.

[INFO]  dmbasic.exe output : dmbasic: Error 35 in line 585: Sub or Function not defined

 

For some reason, a function couldn’t be executed because it isn’t defined properly. This function is the one that is supposed to create the “dm_bps_inbound_user” user, but on a CS 7.3 it cannot be executed. As a result, the user isn’t created and the DAR installation fails. For more information, you can refer to BPM-11223.

 

This issue will, according to EMC, not be fixed in any patch of xCP 2.3, even though it was spotted quickly after the release of xCP 2.3. Therefore, if you want to avoid this issue, you will either have to wait several months for xCP 2.4 to be released (not really realistic ;)) or you will need to create this user manually before installing xCP 2.3 on a CS 7.3. You don’t need special permissions for this user and you don’t need to know its password, so it’s rather simple to create it for all installed docbases in a few simple commands:

[dmadmin@content_server_01 ProcessEngine]$ echo "?,c,select r_object_id, user_name, user_login_name from dm_user where user_login_name like 'dm_bps%';" > create_user.api
[dmadmin@content_server_01 ProcessEngine]$ echo "create,c,dm_user" >> create_user.api
[dmadmin@content_server_01 ProcessEngine]$ echo "set,c,l,user_name" >> create_user.api
[dmadmin@content_server_01 ProcessEngine]$ echo "dm_bps_inbound_user" >> create_user.api
[dmadmin@content_server_01 ProcessEngine]$ echo "set,c,l,user_login_name" >> create_user.api
[dmadmin@content_server_01 ProcessEngine]$ echo "dm_bps_inbound_user" >> create_user.api
[dmadmin@content_server_01 ProcessEngine]$ echo "save,c,l" >> create_user.api
[dmadmin@content_server_01 ProcessEngine]$ echo "?,c,select r_object_id, user_name, user_login_name from dm_user where user_login_name like 'dm_bps%';" >> create_user.api
[dmadmin@content_server_01 ProcessEngine]$
[dmadmin@content_server_01 ProcessEngine]$ cat create_user.api
?,c,select r_object_id, user_name, user_login_name from dm_user where user_login_name like 'dm_bps%';
create,c,dm_user
set,c,l,user_name
dm_bps_inbound_user
set,c,l,user_login_name
dm_bps_inbound_user
save,c,l
?,c,select r_object_id, user_name, user_login_name from dm_user where user_login_name like 'dm_bps%';
[dmadmin@content_server_01 ProcessEngine]$
[dmadmin@content_server_01 ProcessEngine]$
[dmadmin@content_server_01 ProcessEngine]$ sep="***********************"
[dmadmin@content_server_01 ProcessEngine]$ for docbase in `cd $DOCUMENTUM/dba/config; ls`;do echo;echo "$sep";echo "Create User: ${docbase}";echo "$sep";iapi ${docbase} -Udmadmin -Pxxx -Rcreate_user.api;done

***********************
Create User: GR_DOCBASE
***********************


        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0000.0205


Connecting to Server using docbase GR_DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 010f12345001c734 started for user dmadmin."


Connected to Documentum Server running Release 7.3.0000.0214  Linux64.Oracle
Session id is s0
API> r_object_id     user_name             user_login_name                                                                                                                                                             
-------------------  --------------------- ---------------------

(0 row affected)

API> ...
110f12345000093c
API> SET> ...
OK
API> SET> ...
OK
API> ...
OK
API> r_object_id     user_name             user_login_name                                                                                                                                                             
-------------------  --------------------- ---------------------
110f12345000093c     dm_bps_inbound_user   dm_bps_inbound_user
(1 row affected)

API> Bye

***********************
Create User: DocBase1
***********************


        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0000.0205


Connecting to Server using docbase DocBase1
[DM_SESSION_I_SESSION_START]info:  "Session 010f234560052632 started for user dmadmin."


Connected to Documentum Server running Release 7.3.0000.0214  Linux64.Oracle
Session id is s0
API> r_object_id     user_name             user_login_name                                                                                                                                                             
-------------------  --------------------- ---------------------

(0 row affected)

API> ...
110f234560001532
API> SET> ...
OK
API> SET> ...
OK
API> ...
OK
API> r_object_id     user_name             user_login_name                                                                                                                                                             
-------------------  --------------------- ---------------------
110f234560001532     dm_bps_inbound_user   dm_bps_inbound_user                                                                                                                                                            
(1 row affected)

API> Bye

***********************
Create User: DocBase2
***********************


        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0000.0205


Connecting to Server using docbase DocBase2
[DM_SESSION_I_SESSION_START]info:  "Session 010f345670052632 started for user dmadmin."


Connected to Documentum Server running Release 7.3.0000.0214  Linux64.Oracle
Session id is s0
API> r_object_id     user_name             user_login_name                                                                                                                                                             
-------------------  --------------------- ---------------------

(0 row affected)

API> ...
110f345670001532
API> SET> ...
OK
API> SET> ...
OK
API> ...
OK
API> r_object_id     user_name             user_login_name                                                                                                                                                             
-------------------  --------------------- ---------------------
110f345670001532     dm_bps_inbound_user   dm_bps_inbound_user                                                                                                                                                            
(1 row affected)

API> Bye
[dmadmin@content_server_01 ProcessEngine]$
[dmadmin@content_server_01 ProcessEngine]$ rm create_user.api
[dmadmin@content_server_01 ProcessEngine]$

 

The users have been created properly in all docbases so just restart the xCP installer and this time the BPM.dar installation will succeed.

 

 

Cet article Documentum – Unable to install xCP 2.3 on a CS 7.3 est apparu en premier sur Blog dbi services.

Documentum – Using DA with Self-Signed SSL Certificate

Yann Neuhaus - Sat, 2017-08-05 01:58

A few years ago, I was working on a Documentum project and one of the tasks was to set up all components in SSL. I already published a lot of blogs on this subject but there is one I had wanted to write and never really took the time to publish. In this blog, I will therefore talk about Documentum Administrator in SSL using a Self-Signed SSL Certificate. Recently, a colleague of mine had the same issue at another customer, so I provided him with the full procedure that I will describe below. However, since the process below requires the signing of a jar file and since this isn’t available in all companies, you might want to check out my colleague’s blog too.

A lot of companies work with their own SSL Trust Chain, meaning that they provide/create their own (Self-Signed) SSL Certificates, including their Root and Intermediate SSL Certificates for the trust. End-users will not really notice the difference, but they are actually using Self-Signed SSL Certificates. This has some repercussions when working with Documentum since you need to import the SSL Trust Chain on the various Application Servers (JMS, WebLogic, Dsearch, aso…). This is pretty simple, but there is one thing that is a little bit trickier and it is related to Documentum Administrator.
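As a reminder, importing such a trust chain into the truststore of an application server JVM is usually one keytool command per certificate; a minimal sketch, assuming the certificates are under /tmp, the target is the default cacerts of the JRE used by that application server and the default keystore password hasn't been changed:

# Import the root and intermediate CA certificates into the JVM truststore used by the application server
keytool -import -noprompt -trustcacerts -alias custom_root_ca -keystore $JAVA_HOME/jre/lib/security/cacerts -file /tmp/Company_Root_CA.cer -storepass changeit
keytool -import -noprompt -trustcacerts -alias custom_int_ca -keystore $JAVA_HOME/jre/lib/security/cacerts -file /tmp/Company_Intermediate_CA.cer -storepass changeit

# List the entries afterwards to confirm both aliases are present
keytool -list -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit | grep custom_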

Below, I will use a DA 7.2 P16 (which is therefore pretty recent) but the same applies to all patches of DA 7.2 and 7.3. For information, we didn’t face this issue with DA 7.1, so something most probably changed between DA 7.1 and 7.2. If you are seeing the same thing with a DA 7.1, feel free to put a comment below, I would love to know! When you access DA for the first time, you will actually download a JRE which will be put under C:\Users\<user_name>\Documentum\ucf\<machine_name> by default. This JRE is used for various things including the transfer of files (UCF), the display of the DA preferences, aso… DA isn’t taking the JRE from the Oracle website; it is, in fact, taking it from the da.war file. The DA war file always contains two or three different JRE versions. Now if you want to use DA in HTTPS, these JREs will also need to contain your custom SSL Trust Chain. So how can you do that?
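As a quick side check, you can list which JRE versions your particular da.war ships with before going any further; a small sketch, assuming da.war is in the current directory:

# List the JRE archives embedded in da.war (they live under wdk/contentXfer/)
jar -tf da.war | grep 'win-jre'

# The same with unzip, in case the JDK tools are not on the PATH
unzip -l da.war 'wdk/contentXfer/win-jre*'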

Well, a simple answer would be: just like for the JMS or WebLogic, simply import the custom SSL Trust Chain into the “cacerts” of these JREs. That will actually not work, for a very vicious reason: EMC is now signing all the files provided, and that also includes the JREs inside da.war (well, actually they are signing the checksums of the JREs, not the JREs themselves). Because of this signature, if you edit the cacerts file of the JREs, DA will say something like this: “Invalid checksum for the file ‘win-jre1.8.0_91.zip'”. This checksum ensures that the JREs and all the other files downloaded from da.war to your local workstation are the ones provided by EMC. This is good from a security point of view since it prevents intruders from exchanging the files during transfer or directly on your workstation, but it also prevents you from updating the JREs with your custom SSL Trust Chain.

 

So what I will do below to update the Java cacerts AND still keep a valid signature is:

  1. Extract the JREs and ucfinit.jar file from da.war
  2. Update the cacerts of each JREs with a custom SSL Trust Chain (Root + Intermediate)
  3. Repackage the JREs
  4. Calculate the checksum of the JREs using the ComputeChecksum java class
  5. Extract the old checksum files from ucfinit.jar
  6. Replace the old checksum files for the JREs with the new one generated on step 4
  7. Remove .RSA and .SF files from the META-INF folder and clean the MANIFEST to remove Documentum’s digital signature
  8. Recreate the file ucfinit.jar with the clean manifest and all other files
  9. Ask the company’s dedicated team to sign the new jar file
  10. Repackage da.war with the updated JREs and the updated/signed ucfinit.jar

 

Below, I will use generic commands that do not specify any JRE or DA version because there will be two or three different JREs and the versions will change depending on your DA patch level, so it is better to stay generic. I will also use my custom SSL Trust Chain, which I put under /tmp.

In this first part, I will create a working folder to avoid messing with the deployed applications. Then I will extract the needed files and finally remove all files and folders that I don’t need. That’s step 1:

[weblogic@weblogic_server_01 ~]$ mkdir /tmp/workspace; cd /tmp/workspace
[weblogic@weblogic_server_01 workspace]$
[weblogic@weblogic_server_01 workspace]$ cp $WLS_APPLICATIONS/da.war .
[weblogic@weblogic_server_01 workspace]$ ls
da.war
[weblogic@weblogic_server_01 workspace]$
[weblogic@weblogic_server_01 workspace]$ jar -xvf da.war wdk/system/ucfinit.jar wdk/contentXfer/
  created: wdk/contentXfer/
 inflated: wdk/contentXfer/All-MB.jar
 ...
 inflated: wdk/contentXfer/Web/Emc.Documentum.Ucf.Client.Impl.application
 inflated: wdk/contentXfer/win-jre1.7.0_71.zip
 inflated: wdk/contentXfer/win-jre1.7.0_72.zip
 inflated: wdk/contentXfer/win-jre1.8.0_91.zip
 inflated: wdk/system/ucfinit.jar
[weblogic@weblogic_server_01 workspace]$
[weblogic@weblogic_server_01 workspace]$ cd ./wdk/contentXfer/
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ ls
All-MB.jar                                    jacob.dll                 libUCFSolarisGNOME.so   ucf-client-installer.zip  win-jre1.8.0_91.zip
Application Files                             jacob.jar                 libUCFSolarisJNI.so     ucf.installer.config.xml
Emc.Documentum.Ucf.Client.Impl.application    libMacOSXForkerIO.jnilib  licenses                UCFWin32JNI.dll
ES1_MRE.msi                                   libUCFLinuxGNOME.so       MacOSXForker.jar        Web
ExJNIAPI.dll                                  libUCFLinuxJNI.so         mac_utilities.jar       win-jre1.7.0_71.zip
ExJNIAPIGateway.jar                           libUCFLinuxKDE.so         ucf-ca-office-auto.jar  win-jre1.7.0_72.zip
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ for i in `ls | grep -v 'win-jre'`; do rm -rf "./${i}"; done
[weblogic@weblogic_server_01 contentXfer]$ rm -rf ./*/
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ ls
win-jre1.7.0_71.zip  win-jre1.7.0_72.zip  win-jre1.8.0_91.zip
[weblogic@weblogic_server_01 contentXfer]$

 

At this point, only the JREs are present in the current folder (wdk/contentXfer) and I also have another file in another folder (wdk/system/ucfinit.jar). Once that is done, I create a list of the available JREs that I will use for the rest of the blog and I also perform steps 2 and 3: extracting the cacerts from the JREs, updating them and finally repackaging them (this is where I use the custom SSL Trust Chain):

[weblogic@weblogic_server_01 contentXfer]$ ls win-jre* | sed -e 's/.*win-//' -e 's/.zip//' > /tmp/list_jre.txt
[weblogic@weblogic_server_01 contentXfer]$ cat /tmp/list_jre.txt
jre1.7.0_71
jre1.7.0_72
jre1.8.0_91
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ while read line; do unzip -x win-${line}.zip ${line}/lib/security/cacerts; done < /tmp/list_jre.txt
Archive:  win-jre1.7.0_71.zip
  inflating: jre1.7.0_71/lib/security/cacerts
Archive:  win-jre1.7.0_72.zip
  inflating: jre1.7.0_72/lib/security/cacerts
Archive:  win-jre1.8.0_91.zip
  inflating: jre1.8.0_91/lib/security/cacerts
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ while read line; do keytool -import -noprompt -trustcacerts -alias custom_root_ca -keystore ${line}/lib/security/cacerts -file /tmp/Company_Root_CA.cer -storepass changeit; done < /tmp/list_jre.txt
Certificate was added to keystore
Certificate was added to keystore
Certificate was added to keystore
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ while read line; do keytool -import -noprompt -trustcacerts -alias custom_int_ca -keystore ${line}/lib/security/cacerts -file /tmp/Company_Intermediate_CA.cer -storepass changeit; done < /tmp/list_jre.txt
Certificate was added to keystore
Certificate was added to keystore
Certificate was added to keystore
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ while read line; do zip -u win-${line}.zip ${line}/lib/security/cacerts; done < /tmp/list_jre.txt
updating: jre1.7.0_71/lib/security/cacerts (deflated 35%)
updating: jre1.7.0_72/lib/security/cacerts (deflated 35%)
updating: jre1.8.0_91/lib/security/cacerts (deflated 33%)
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ while read line; do rm -rf ./${line}; done < /tmp/list_jre.txt
[weblogic@weblogic_server_01 contentXfer]$

 

At this point, the JREs have been updated with a new “cacerts” and therefore their checksums changed. They don’t match the signed checksums anymore, so if you try to deploy DA now, you will get the error message I mentioned above. So, let’s perform steps 4, 5 and 6. For that purpose, I will use the file /tmp/ComputeChecksum.class that was provided by EMC. This class is needed to recalculate the new checksums of the JREs:

[weblogic@weblogic_server_01 contentXfer]$ pwd
/tmp/workspace/wdk/contentXfer
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ cp /tmp/ComputeChecksum.class .
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ ls
ComputeChecksum.class  win-jre1.7.0_71.zip  win-jre1.7.0_72.zip  win-jre1.8.0_91.zip
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ java ComputeChecksum .
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ ls
ComputeChecksum.class           win-jre1.7.0_71.zip           win-jre1.7.0_72.zip           win-jre1.8.0_91.zip
ComputeChecksum.class.checksum  win-jre1.7.0_71.zip.checksum  win-jre1.7.0_72.zip.checksum  win-jre1.8.0_91.zip.checksum
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ rm ComputeChecksum.class*
[weblogic@weblogic_server_01 contentXfer]$
[weblogic@weblogic_server_01 contentXfer]$ cd /tmp/workspace/wdk/system/
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ pwd
/tmp/workspace/wdk/system
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ ls
ucfinit.jar
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ jar -xvf ucfinit.jar
 inflated: META-INF/MANIFEST.MF
 inflated: META-INF/COMPANY.SF
 inflated: META-INF/COMPANY.RSA
  created: META-INF/
 inflated: All-MB.jar.checksum
  created: com/
  created: com/documentum/
  ...
 inflated: UCFWin32JNI.dll.checksum
 inflated: win-jre1.7.0_71.zip.checksum
 inflated: win-jre1.7.0_72.zip.checksum
 inflated: win-jre1.8.0_91.zip.checksum
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ mv /tmp/workspace/wdk/contentXfer/win-jre*.checksum .
[weblogic@weblogic_server_01 system]$

 

With this last command, the new checksums have replaced the old ones. The next step is now to remove the old signatures (.RSA and .SF files + content of the manifest) and then repack the ucfinit.jar file (steps 7 and 8):

[weblogic@weblogic_server_01 system]$ rm ucfinit.jar META-INF/*.SF META-INF/*.RSA
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ sed -i -e '/^Name:/d' -e '/^SHA/d' -e '/^ /d' -e '/^[[:space:]]*$/d' META-INF/MANIFEST.MF
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ cat META-INF/MANIFEST.MF
Manifest-Version: 1.0
Ant-Version: Apache Ant 1.8.4
Title: Documentum Client File Selector Applet
Bundle-Version: 7.2.0160.0058
Application-Name: Documentum
Built-By: dmadmin
Build-Version: 7.2.0160.0058
Permissions: all-permissions
Created-By: 1.6.0_30-b12 (Sun Microsystems Inc.)
Copyright: Documentum Inc. 2001, 2004
Caller-Allowable-Codebase: *
Build-Date: August 16 2016 06:35 AM
Codebase: *
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ vi META-INF/MANIFEST.MF
    => Add a new empty line at the end of this file with vi, vim, nano or whatever... The file must always end with an empty line.
    => Do NOT use the command "echo '' >> META-INF/MANIFEST.MF" because it will change the file format, which complicates the signature (usually the file format is DOS...)
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ jar -cmvf META-INF/MANIFEST.MF ucfinit.jar *
added manifest
adding: All-MB.jar.checksum(in = 28) (out= 30)(deflated -7%)
adding: com/(in = 0) (out= 0)(stored 0%)
adding: com/documentum/(in = 0) (out= 0)(stored 0%)
adding: com/documentum/ucf/(in = 0) (out= 0)(stored 0%)
...
adding: UCFWin32JNI.dll.checksum(in = 28) (out= 30)(deflated -7%)
adding: win-jre1.7.0_71.zip.checksum(in = 28) (out= 30)(deflated -7%)
adding: win-jre1.7.0_72.zip.checksum(in = 28) (out= 30)(deflated -7%)
adding: win-jre1.8.0_91.zip.checksum(in = 28) (out= 30)(deflated -7%)
[weblogic@weblogic_server_01 system]$

 

At this point, the file ucfinit.jar has been recreated with an “empty” manifest, without signature but with all the new checksum files. Therefore, it is now time to send this file (ucfinit.jar) to your code signing team (step 9). This is out of scope for this blog but, basically, what your signing team will do is create the .RSA and .SF files inside the META-INF folder and repopulate the manifest. The .SF file and the manifest will contain more or less the same thing: each file inside ucfinit.jar will have an entry in them with a pair filename/signature. At this point, the checksums of the JREs have therefore been re-signed.
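For illustration only, the signing step itself is typically performed with jarsigner; a minimal sketch, where the keystore path, password and alias (codesign.jks, codesign_alias) are placeholders that your code signing team would provide:

# Sign the rebuilt jar with the code signing certificate (keystore and alias are hypothetical)
jarsigner -keystore /path/to/codesign.jks -storepass '********' ucfinit.jar codesign_alias

# Verify the signature afterwards; "jar verified." should be printed
jarsigner -verify -verbose -certs ucfinit.jar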

 

The last step is now to repack the da.war with the new ucfinit.jar file which has been signed. I put the new signed file under /tmp:

[weblogic@weblogic_server_01 system]$ pwd
/tmp/workspace/wdk/system
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ rm -rf *
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ ll
total 0
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ cp /tmp/ucfinit.jar .
[weblogic@weblogic_server_01 system]$
[weblogic@weblogic_server_01 system]$ cd /tmp/workspace/
[weblogic@weblogic_server_01 workspace]$
[weblogic@weblogic_server_01 workspace]$ ls wdk/*
wdk/contentXfer:
win-jre1.7.0_71.zip  win-jre1.7.0_72.zip  win-jre1.8.0_91.zip

wdk/system:
ucfinit.jar
[weblogic@weblogic_server_01 workspace]$
[weblogic@weblogic_server_01 workspace]$ jar -uvf da.war wdk
adding: wdk/(in = 0) (out= 0)(stored 0%)
adding: wdk/contentXfer/(in = 0) (out= 0)(stored 0%)
adding: wdk/contentXfer/win-jre1.7.0_71.zip(in = 41373620) (out= 41205241)(deflated 0%)
adding: wdk/contentXfer/win-jre1.7.0_72.zip(in = 41318962) (out= 41137924)(deflated 0%)
adding: wdk/contentXfer/win-jre1.8.0_91.zip(in = 62424686) (out= 62229724)(deflated 0%)
adding: wdk/system/(in = 0) (out= 0)(stored 0%)
adding: wdk/system/ucfinit.jar(in = 317133) (out= 273564)(deflated 13%)
[weblogic@weblogic_server_01 workspace]$
[weblogic@weblogic_server_01 workspace]$ mv $WLS_APPLICATIONS/da.war $WLS_APPLICATIONS/da.war_bck_beforeSignature
[weblogic@weblogic_server_01 workspace]$
[weblogic@weblogic_server_01 workspace]$ mv da.war $WLS_APPLICATIONS/
[weblogic@weblogic_server_01 workspace]$

 

Once this has been done, simply redeploy Documentum Administrator and the next time you access it in HTTPS, you will be able to transfer files, view the DA preferences, aso… The JREs are now trusted automatically because their checksums are signed properly again.
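As an illustration, on a WebLogic setup like the one used here, the redeployment can be scripted with the weblogic.Deployer utility; the environment script path, admin URL, credentials and application name below are assumptions for this environment:

# Load the WebLogic environment so that weblogic.Deployer is on the classpath (path is an assumption)
. $WL_HOME/server/bin/setWLSEnv.sh

# Redeploy the da application from the updated war file
java weblogic.Deployer -adminurl t3://weblogic_server_01:7001 -username weblogic -password '********' -redeploy -name da -source $WLS_APPLICATIONS/da.war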

 

 

Cet article Documentum – Using DA with Self-Signed SSL Certificate est apparu en premier sur Blog dbi services.

Developer GUI tools for PostgreSQL

Yann Neuhaus - Fri, 2017-08-04 13:33

There was a recent thread on the PostgreSQL general mailing list asking for GUI tools for PostgreSQL. This is a question we often get asked at customers, so I thought it might be a good idea to summarize some of them in a blog post. If you know other promising tools besides the ones listed here, let me know so I can add them. There is a list of tools in the PostgreSQL Wiki as well.

Name                             Linux   Windows   MacOS   Free   Screenshot
pgAdmin                          Y       Y         Y       Y      pg_gui_pgadmin
DBeaver                          Y       Y         Y       Y      pg_gui_dbeaver
EMS SQL Manager for PostgreSQL   N       Y         N       N      pg_gui_ems_sql_manager
JetBrains DataGrip               Y       Y         Y       N      pg_gui_datagrip
PostgreSQL Studio                Y       Y         Y       Y      pg_gui_pgstudio
Navicat for PostgreSQL           Y       Y         Y       N      pg_gui_navicat
Execute Query                    Y       Y         Y       Y      pg_gui_executequery
SQuirreL SQL Client              Y       Y         Y       Y      pg_gui_aquirrel
pgModeler                        Y       Y         Y       Y      pg_gui_pgmodeler
DbSchema                         Y       Y         Y       N      pg_gui_dbschema
Oracle SQL Developer             Y       Y         Y       Y      pg_gui_sqldeveloper
PostgreSQL Maestro               N       Y         N       N      pg_gui_sqlmaestro
SQL Workbench                    Y       Y         Y       Y      pg_gui_sqlworkbench
Nucleon Database Master          N       Y         N       N      pg_gui_databasemaster
RazorSQL                         Y       Y         Y       N      pg_gui_razorsql
Database Workbench               N       Y         N       N      pg_gui_databaseworkbench

Cet article Developer GUI tools for PostgreSQL est apparu en premier sur Blog dbi services.
