Feed aggregator

Changes in BI Server Cache Behaviour in OBIEE 12c : OBIS_REFRESH_CACHE

Rittman Mead Consulting - Tue, 2016-05-31 10:36

The OBIEE BI Server cache can be a great way of providing a performance boost to response times for end users – so long as it’s implemented carefully. Done wrong, and you’re papering over the cracks and heading for doom; done right, and it’s the ‘icing on the cake’. You can read more about how to use it properly here, and watch a video I did about it here. In this article we’ll see how the BI Server cache has changed in OBIEE 12c in a way that could prove somewhat perplexing to developers used to OBIEE 11g.

The BI Server cache works by inspecting queries as they are sent to the BI Server, and deciding if an existing cache entry can be used to provide the data. This can include direct hits (i.e. the same query being run again), or more advanced cases, where a subset or aggregation of an existing cache entry could be used. If a cache entry is used then a trip to the database is avoided and response times will typically be better – particularly if more than one database query would have been involved, or lots of additional post-processing on the BI Server.

When an analysis or dashboard is run, Presentation Services generates the necessary Logical SQL to return the data needed, and sends this to the BI Server. It’s at this point that the cache will, or won’t, kick in. The BI Server will accept Logical SQL from other sources than Presentation Services – in fact, any JDBC or ODBC client. This is useful as it enables us to validate behaviour that we’re observing and see how it can apply elsewhere.

When you build an Analysis in OBIEE 11g (and before), the cache will be used if applicable. Each time you add a column, or hit refresh, you’ll get an entry back from the cache if one exists. This has benefits – speed – but disadvantages too. When the data in the database changes, you will still get a cache hit, regardless. The only way to force OBIEE to show you the latest version of the data is to purge the cache first. You can target cache purges based on databases, tables, or even specific queries – but you do need to purge it.
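By way of illustration, a targeted purge can also be scripted against the BI Server with nqcmd rather than typed into Issue SQL. The sketch below is hypothetical: the nqcmd path and DSN are reused from the 12c example later in this article, the object names are placeholders, and you should check the SAPurge* procedure signatures against the documentation for your OBIEE version.

NQCMD=/app/oracle/biee/user_projects/domains/bi/bitools/bin/nqcmd.sh

# Purge only the cache entries built from one physical table (placeholder names)
cat > /tmp/purge.lsql <<'EOF'
call SAPurgeCacheByTable('MyDatabase','MyCatalog','MySchema','MyTable');
EOF

# -s runs the statements from a script file; -d/-u/-p are the DSN and credentials
$NQCMD -d AnalyticsWeb -u prodney -p Admin123 -s /tmp/purge.lsql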

What’s changed in OBIEE 12c is that when you click “Refresh” on an Analysis or Dashboard, the query is re-run against the source and the cache re-populated. Even if you have an existing cache entry, and even if the underlying data has not changed, if you hit Refresh, the cache will not be used. Which kind of makes sense, since “refresh” probably should indeed mean that.

Digging into OBIEE Cache Behaviour

Let’s prove this out. I’ve got SampleApp v506 (OBIEE 11.1.1.9), and SampleApp v511 (OBIEE 12.2.1). First off, I’ll clear the cache on each, using call saPurgeAllCache();, run via Issue SQL:

Then I can use another BI Server procedure call to view the current cache contents (new in 11.1.1.9), call NQS_GetAllCacheEntries(). For this one particularly make sure you’ve un-ticked “Use Oracle BI Presentation Services Cache”. This is different from the BI Server cache which is the subject of this article, and as the name implies is a cache that Presentation Services keeps.

I’ve confirmed that the BI Server cache is enabled on both servers, in NQSConfig.INI

###############################################################################
#
#  Query Result Cache Section
#
###############################################################################


[CACHE]

ENABLE = YES;  # This Configuration setting is managed by Oracle Enterprise Manager Fusion Middleware Control

Now I create a very simple analysis in both 11g and 12c, showing a list of Airline Carriers and their Codes:

After clicking Results, a cache entry is inserted on each respective system:

Of particular interest is the create time, last used time, and number of times used:

If I now click Refresh in the Analysis window:

We see this happen to the caches:

In OBIEE 11g the cache entry is used – but in OBIEE 12c it’s not. The CreatedTime is evidently not populated correctly, so instead let’s dive over to the query log (nqquery/obis1-query in 11g/12c respectively). In OBIEE 11g we’ve got:

-- SQL Request, logical request hash:
7c365697
SET VARIABLE QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT
   0 s_0,
   "X - Airlines Delay"."Carrier"."Carrier Code" s_1,
   "X - Airlines Delay"."Carrier"."Carrier" s_2
FROM "X - Airlines Delay"
ORDER BY 1, 3 ASC NULLS LAST, 2 ASC NULLS LAST
FETCH FIRST 5000001 ROWS ONLY

-- Cache Hit on query: [[
Matching Query: SET VARIABLE QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT
   0 s_0,
   "X - Airlines Delay"."Carrier"."Carrier Code" s_1,
   "X - Airlines Delay"."Carrier"."Carrier" s_2
FROM "X - Airlines Delay"
ORDER BY 1, 3 ASC NULLS LAST, 2 ASC NULLS LAST
FETCH FIRST 5000001 ROWS ONLY

Whereas 12c is:

-- SQL Request, logical request hash:
d53f813c
SET VARIABLE OBIS_REFRESH_CACHE=1,QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT
   0 s_0,
   "X - Airlines Delay"."Carrier"."Carrier Code" s_1,
   "X - Airlines Delay"."Carrier"."Carrier" s_2
FROM "X - Airlines Delay"
ORDER BY 3 ASC NULLS LAST, 2 ASC NULLS LAST
FETCH FIRST 5000001 ROWS ONLY

-- Sending query to database named X0 - Airlines Demo Dbs (ORCL) (id: <<320369>>), connection pool named Aggr Connection, logical request hash d53f813c, physical request hash a46c069c: [[
WITH
SAWITH0 AS (select T243.CODE as c1,
     T243.DESCRIPTION as c2
from
     BI_AIRLINES.UNIQUE_CARRIERS T243 /* 30 UNIQUE_CARRIERS */ )
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3 from ( select 0 as c1,
     D1.c1 as c2,
     D1.c2 as c3
from
     SAWITH0 D1
order by c3, c2 ) D1 where rownum <= 5000001

-- Query Result Cache: [59124] The query for user 'prodney' was inserted into the query result cache. The filename is '/app/oracle/biee/user_projects/domains/bi/servers/obis1/cache/NQS__736117_52416_27.TBL'.

Looking closely at the 12c output shows three things:

  1. OBIEE has run a database query for this request, and not hit the cache
  2. A cache entry has clearly been created again as a result of this query
  3. The Logical SQL has a request variable set: OBIS_REFRESH_CACHE=1

    This is evidently added by Presentation Services at runtime, since the Advanced tab of the analysis shows no such variable being set:

Let’s save the analysis, and experiment further. Evidently, the cache is being deliberately bypassed when the Refresh button is clicked when building an analysis – but what about when it is opened from the Catalog? We should see a cache hit here too:

Nope, no hit.


But, in the BI Server query log, no entry either – and the same on 11g. The reason being … Presentation Services' cache. D'oh!

From Administration > Manage Sessions I select Close All Cursors which forces a purge of the Presentation Services cache. When I reopen the analysis from the Catalog view, now I get a cache hit, in both 11g and 12c:

The same happens (successful cache hit) for the analysis used in a Dashboard being opened, having purged the Presentation Services cache first.

So at this point, we can say that OBIEE 11g and 12c both behave the same with the cache when opening analyses/dashboards, but differ when refreshing the analysis. In OBIEE 12c when an analysis is refreshed the cache is deliberately bypassed. Let’s check on refreshing a dashboard:

Same behaviour as with analyses – in 11g the cache is hit, in 12c the cache is bypassed and repopulated.

To round this off, let’s doublecheck the behaviour of the new request variable that we’ve found, OBIS_REFRESH_CACHE. Since it appears that Presentation Services is adding it in at runtime, let’s step over to a more basic way of interfacing with the BI Server – nqcmd. Whilst we could probably use Issue SQL (as we did above for querying the cache) I want to avoid any more behind-the-scenes funny business from Presentation Services.

In OBIEE 12c, I run nqcmd:

/app/oracle/biee/user_projects/domains/bi/bitools/bin/nqcmd.sh -d AnalyticsWeb -u prodney -p Admin123

Enter Q to enter a query, as follows:

SET VARIABLE OBIS_REFRESH_CACHE=1,QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT    0 s_0,    "X - Airlines Delay"."Carrier"."Carrier Code" s_1,    "X - Airlines Delay"."Carrier"."Carrier" s_2 FROM "X - Airlines Delay" ORDER BY 3 ASC NULLS LAST, 2 ASC NULLS LAST FETCH FIRST 5000001 ROWS ONLY

In obis1-query.log there's the cache bypass and populate:

Query Result Cache: [59124] The query for user 'prodney' was inserted into the query result cache. The filename is '/app/oracle/biee/user_projects/domains/bi/servers/obis1/cache/NQS__736117_53779_29.TBL'.

If I run it again without the OBIS_REFRESH_CACHE variable:

SET VARIABLE QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT    0 s_0,    "X - Airlines Delay"."Carrier"."Carrier Code" s_1,    "X - Airlines Delay"."Carrier"."Carrier" s_2 FROM "X - Airlines Delay" ORDER BY 3 ASC NULLS LAST, 2 ASC NULLS LAST FETCH FIRST 5000001 ROWS ONLY

We get the cache hit as expected:

-------------------- Cache Hit on query: [[
Matching Query: SET VARIABLE OBIS_REFRESH_CACHE=1,QUERY_SRC_CD='Report',PREFERRED_CURRENCY='USD';SELECT    0 s_0,    "X - Airlines Delay"."Carrier"."Carrier Code" s_1,    "X - Airlines Delay"."Carrier"."Carrier" s_2 FROM "X - Airlines Delay" ORDER BY 3 ASC NULLS LAST, 2 ASC NULLS LAST FETCH FIRST 5000001 ROWS ONLY
Created by:     prodney

Out of interest I ran the same two tests on 11g — both resulted in a cache hit, since it presumably ignores the unrecognised variable.

Summary

In OBIEE 12c, if you click “Refresh” on an analysis or dashboard, OBIEE Presentation Services forces a cache-bypass and cache-reseed, ensuring that you really do see the latest version of the data from source. It does this using the request variable, new in OBIEE 12c, OBIS_REFRESH_CACHE.

Footnote

Courtesy of Steve Fitzgerald:

As per the presentation server xsd, you can also revert the behavior (not sure why one would, but you can) in the instanceconfig.xml

<Cache>
<Query>
<RefreshIncludeBIServerCache>false</RefreshIncludeBIServerCache>
</Query>
</Cache>

The post Changes in BI Server Cache Behaviour in OBIEE 12c : OBIS_REFRESH_CACHE appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

resizing Amazon RDS Oracle EE

Pat Shuff - Tue, 2016-05-31 09:38
Yesterday we looked at connecting to DBaaS with the Oracle platform as a service option. We wanted to extend the database table size because we expect to grow the tablespace beyond current storage capacity. We are going to do the same thing today for Amazon RDS running an Oracle Enterprise Edition 12c instance. To start our journey, we need to create a database instance in Amazon. This is done by going to the Amazon AWS Console and clicking on RDS to launch a database instance. We then click on the Launch DB Instance button and click on the Oracle tab. We select Enterprise Edition, then the Dev/Test option for our example. We accept the defaults, define our database with ORACLE_SID=pri, a username of oracle, no multi-AZ replication, and select the processor and memory size.

While the database is creating we need to change the default network configuration. By default port 1521 is open, but only to a limited IP address range. We need to open this up to everyone so that we can connect to the database from our desktop instance. We are using a Windows 2012 instance in the Oracle IaaS cloud, so the default mapping back to the desktop we used to create the database does not work. Note that since we do not have permission to connect via ssh, connecting with a tunnel is not an option; the only security options that we have are IP whitelisting, a VPN, or opening up port 1521 to the world. Opening the port is done by going into the security groups definition on the detail page of the instance. We change the default inbound rule from an ip address to anywhere.

We could have alternatively defined a security group that links the ip address of our IaaS instance as well as our desktop prior to creation of this database to keep security a little tighter. Once the database is finished creating we can connect to it. We get the connection string (DNS address) and open up SQL Developer. We create a new database connection using the sid of pri, username of oracle, and port 1521. Once we connect we can define the DBA view to allow us to manage parts of the database since we do not have access using Enterprise Manager.

It looks like the tablespace will autoextend into the available space, so all we should have to do is extend the /rdsdata partition. This is done by modifying the RDS instance from the console. We change the storage from the 20 GB that we created to 40 GB, turn on advanced monitoring (not necessary for this exercise), and check the apply immediately button. This reconfigures the database and extends the storage. Note the resize command that happens for us. This is a sysdba-level command that is executed on our behalf since we do not have sys rights outside the console.
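The same change can be scripted with the AWS CLI instead of the console. This is a minimal sketch, assuming the CLI is configured and that the DB instance identifier matches the name we gave it (yours may differ):

# Grow the allocated storage to 40 GB and apply the change immediately
aws rds modify-db-instance \
    --db-instance-identifier pri \
    --allocated-storage 40 \
    --apply-immediately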

We can look at the new instance and see that the size has grown and we have more space to expand into.

We can see the changes from the monitoring console

In summary, we are able to easily scale storage in an Amazon RDS instance even though we do not have sys access to the system. We do need to use the AWS Console to make this happen and can not do this through Enterprise Manager, because we can't add the agent to the instance. It is important to note that some of the options are available from the console, while others are available through altered command structures that give you elevated admin privileges without giving you system access. Look at the new command structures and decide if forking your admin tools just to run on RDS is worth doing or too much effort. These changes effectively lock you into running your Oracle database on Amazon RDS. For example, to change the default tablespace in Oracle you would typically type

alter database default tablespace users2;

but with Amazon RDS you would need to type

exec rdsadmin.rdsadmin_util.alter_default_tablespace('users2');

Is this a show stopper? It could be or it might be trivial. It could stop some pre-packaged applications from working in Amazon RDS and you are forced to go with EC2 and S3. The upgrade storage example that we just went through would be a manual process with the involvement of an operating system administrator thus incurring additional cost and time. Again, this blog is not about A is better than B. This blog is about showing things that are hidden and helping you decide which is better for you. Our recommendation is to play with Amazon RDS and Oracle PaaS and see which fits your needs best.

TEAM's Informatics Webcast: Don't Regret - Redact & Protect Sensitive Information

WebCenter Team - Tue, 2016-05-31 08:21


Greetings!

Sensitive data is extremely important to protect for your organization. Release of public records, data breaches, and simple sharing between parties can cause unnecessary distress and costs when names, addresses, credit card numbers, etc. become public. Personally Identifiable Information, or PII, is any data that could potentially enable identification of a specific individual. This data being made public can dramatically increase the potential for identity theft and removes the level of anonymity necessary in public disclosures.
With a single disclosure breach reaching an estimated cost as high as seven million dollars ($7m) and affecting all manner of agencies, corporations, and hospitals across the globe every year, why regret not securing your documents at the point of capture?
TEAM's Redaction Engine utilizes multiple technologies to scan for and identify sensitive data and then enables automatic redaction of that information in subsequently generated PDFs.
Join this Webinar to secure your organization's future and protect from unnecessary litigation due to a preventable data breach.

Features & Benefits
  • Ability to handle both scanned and electronic data
  • High accuracy in field recognition with confidence threshold
  • Maintain compliance protocols such as HIPAA, PCI, and many more
  • Integrated into Oracle WebCenter Content 11g & 12c
  • Reporting via search analytics
  • Pattern Recognition (regex and basic) redacts information such as:
    • SS number
    • Date of Birth
    • Home Address
    • Identification number/medical forms
    • Credit Card numbers

TEAM's Redaction Engine Webinar
June 23rd, 2016 - 1:00PM CST 




NoSQL for SQL Developers

Gerger Consulting - Tue, 2016-05-31 06:00
Oracle Developers! Want to learn more about NoSQL, but don't know where to start? Look no further.

Attend our free webinar by the venerable Pramod Sadalage. He'll answer all the questions you have about NoSQL but were too afraid to ask. :-) 

After all, he wrote the book.


 Sign up at this link. 
Categories: Development

BPEL Chapter 1: Hello World BPEL Project

Darwin IT - Mon, 2016-05-30 08:50
Long ago, back in 2004/2005 when Oracle released Oracle BPEL 10.1.2 (and its predecessor, the globally available release of the rebranded Collaxa product) and in 2006 with the release of the first SOASuite 10.1.3, you had a project per BPEL process. Each project was set up around the BPEL process file. Since 11g, BPEL is a component in the Service Component Architecture (SCA) and a project can contain multiple BPEL components together with other components such as Mediators, Rules, etc. In a later section I'll elaborate on the SCA setup, but for now I'll focus on the BPEL component. When I do, I'll probably edit this introduction.

I had to make this introduction because using 12c we have to create a SOA Application with a SOA Project for our first BPEL process.

I assume you have installed the SOA QuickStart. If not do so, for instance with the use of the silent install. That article describes the installation of the BPM QuickStart, but for SOA QuickStart it works exactly the same, but you might want to change the home folder from c:\oracle\JDeveloper\12210_BPMQS to c:\oracle\JDeveloper\12210_SOAQS.

Then start JDeveloper from the JDeveloper home. If you used the silent install option, then you can start JDeveloper via c:\oracle\JDeveloper\12210_BPMQS\jdeveloper\jdev\bin\jdev64W.exe.
Create SOA Application

After JDeveloper has started, click on the 'New Application' link, top right. You can also use the New icon or the File->New menu, option Application or From Gallery:
You'll get the following screen:



Choose SOA Application and click OK. This results in the following first page of the Create SOA Application wizard:
Name the application 'HelloWorld' and provide a Directory for the root folder for the application. For instance 'c:\Data\Source\HelloWorld'. And click Next.
Then provide a name for the project. For now you can use the same name as the application. The Project Directory should be adapted automatically. If not make sure it's a subfolder in the application's directory, and click Next:

In the last step of the wizard you can choose a Standard Composite (with a start component) or a Template for the composite. You could choose for a composite with a BPEL Process, but for now, let's choose an Empty Composite, since we need to create a schema first.

JDeveloper SOA Application Screen Overview

When you click Finish, your JDeveloper screen looks more or less like this:

Top left you'll find the Project Navigator that shows the generated artefacts that you'll find in the project folder. In the middle of the left panes you'll find a Resource Navigator which enables you to edit some specific property files in the SOA Application. But we'll leave them for the moment. Bottom left you'll find two tabs, one of which is the structure pane. This one comes in handy occasionally, but we'll get into that later as well.

In the middle you'll find the Composite Designer. We'll cover it later more extensively, but for now this is the start of assembling our application. Top Right you'll find a collapsed pane for the Components. You can expand it by clicking on it:
 
Top right you'll find a button to dock the pane, which will make the pane visible permanently. This is the default, but to free up space and have a larger portion of your screen available for the designer(s), you can collapse it using the minus icon in the same place as the Dock button:


To add components to the application you simply drag and drop them onto the appropriate pane.
Create an XML Schema

A BPEL process is exposed as a service in most cases. To do so you have to create a WSDL (Web Services Description Language) document. This is an interface contract for a service, described in XML. It defines the input and output of a service and the possible operations on the service. Each operation needs to refer to its input and possible output message, and each message refers to the definition of that message. The message definition is done as an element in an XML Schema Definition (XSD). Although you can have it auto-generated, it's recommended to do a bottom-up definition of the contract. So let's start with the XSD.

Using the File->New->From Gallery menu choose 'XML Schema' and click OK:
 
Enter the details of the file: name it 'HelloWorld.xsd'. The Directory is by default the 'Schemas' folder within the project. Namespaces are important in an XSD and the WSDL, so give a proper namespace, like 'http://www.darwin-it.nl/xsd/HelloWorld', and provide a convenient prefix for it, like 'hwd':

This generates the following  xsd:
 
The xsd should contain an element for the request as well as the response message. First rename the 'exampleElement' to 'helloWorldRequestMessage'. If you click on the element, the name of the element should be highlighted. Then you can change the name by starting to type the new name. You can also bring up the properties pane and change the name there:


Change the name to helloWorldRequestMessage; I like elements to have a proper name, and in this case it defines a request message for the HelloWorld service. I use the Java-standard camelCase notation, where element names have a lowercase first character.

It's time to extend the XSD. Since we're in the XSD editor, the component palette has changed and contains the possible components to use in an XSD:

Drag and drop a new element from the palette to just under the helloWorldRequestMessage element, until you see a line under the element and a plus sign at the mouse pointer:



Name the new element helloWorldResponseMessage.
In the same way add a complexType and name it HelloWorldRequestMessageType. I don't like implicit nameless complexTypes within elements, so I tend to always create named complexTypes, which I name the same as the corresponding elements, except for an uppercase first character and the suffix 'Type'.

Then add a sequence on the helloWorldRequestMessage:

Do the same for complexType HelloWorldResponseMessageType
Then in the HelloWorldRequestMessageType add a sequence:

And in the sequence drag and drop an element called 'name'.
Do the same for HelloWorldResponseMessageType, except name the element 'greeting'.

Now we need to set a type for the elements. Right-click on the name element and choose 'Set Type' from the context menu:


In the dropdown box choose xsd:string:

You can also set the type from the properties:

Set also the greeting element to xsd:string.

Then set the helloWorldRequestMessage to the type hwd:HelloWorldRequestMessageType:



And the helloWorldResponseMessage to the type hwd:HelloWorldResponseMessageType accordingly.
After saving it, the resulting xsd should look like this:


Create the WSDL

Now we can create a WSDL based on the xsd. JDeveloper has a nice WSDL generator and editor.
The easiest way to generate or kickstart a wsdl is to drag and drop a SOAP service from the component palette:

to the Exposed Services lane on the canvas:
This opens the dialog for defining the SOAP Service. Name it HelloWorld. Then you can choose a WSDL or generate/define it from scratch based on the xsd. To do so click on the cog:

The name of the wsdl is pre-defined on the SOAP service, but you can leave the suggestion. Do yourself a favor and provide a sensible namespace. I chose 'http://xmlns.darwin-it.local/xsd/HelloWorld/HelloWorldWS'. Think of a good convention based on a company base URL (I'll get back to this later, but for now: it's common convention to use a URI that does not need to, and in most cases doesn't, refer to an actual existing internet address).
Set the Interface type to 'Synchronous Interface' and click the green plus icon to define a message for the Input:

There you can give the message part a name, for this you can also define a convention, but there is actually no real point in doing so. So you can leave it just to the suggested 'part1'. Click on the magnifier icon and look for the HelloWorld.xsd under Project Schema Files. Select the 'helloWorldRequestMessage' and click OK:

Back in the Add Message Part dialog, click OK again:
Back in the Create WSDL dialog, you can leave the names for Port Type and Operation at the proposed values. For this service, execute is quite a proper name, but in the future it is wise to think of a verb that gives a proper indication of what the operation means. Later, when I discuss WSDLs, you'll find that a WSDL can hold several operations and port types.

For now click OK again:

And also in the Create Web Service dialog click OK again:




What results in a HelloWorldWS Service in the Composite definition:

Create the HelloWorld BPEL process

At this point we're ready to add a BPEL Process. In the component palette look for the BPEL Process icon:

and drag&drop it on the Components lane in the Composite canvas:




This will bring up the following dialog.



Fill in the following properties:

  • Name: HelloWorldBPELProcess
  • Namespace: http://xmlns.darwin-it.nl/bpel/HelloWorld/HelloWorldBPELProcess
The namespace is not really important here, you could leave it at the suggested value. It's only used internally, not exposed since we're using a pre-defined wsdl to expose the service. However you can change it to a proper value like here to identify it as a BPEL created by you/your company. This can be sensible when using tools like Oracle Enterprise Repository. But especially when you generate the wsdl by exposing the BPEL process as a SOAP service.

Click on the import 'Find Existing WSDLs' icon to bring up the following WSDL search dialog:

Here you see that you can browse for WSDLs through various channels: look into an Application Server for already deployed services, search in the SOA-MDS (more on that later) and so on. But since we have the WSDL in our project, choose for the File System

Within the project navigate to the WSDLs folder and choose the HelloWorldWS.wsdl, and click OK.
This will bring us back to the Create BPEL Process dialog. The important thing here is to deselect the 'Expose as a SOAP service' checkbox, since we want to reuse the already defined SOAP service instead of having a new service generated. In case you missed this and a new service is generated, you can delete it easily.

Now the only thing to do, before implementing the process, is to 'wire' the HelloWorldWS SOAP Service to the BPEL Process. Do so by picking up the '>' icon and dragging it over to the top '>' icon in the BPEL Process:


You'll see that a green line appears between the start icon, that gets a green circle around it, and the possible target icons.

Implement the BPEL process

Open the BPEL designer by right-clicking on the BPEL component and choosing 'Edit', or by double-clicking it:





This opens the BPEL Designer with a skeleton BPEL process:




It shows a so-called 'Partner Link' to the left. There are two Partner Link lanes, one on each side of the process. They're equivalent; the designer picks a lane for each new Partner Link as it finds appropriate, but you can drag and drop them to the other side when it suits the drawing better. The so-called client Partner Link is always shown on the left at the start, and it is common to keep it there.

Then you'll see two activities in the process in the main-process-scope: receiveInput  and replyOutput. The first receives the call and initiates the BPEL Process.

If you edit this activity, you'll be able to see and edit the name of the activity. It shows the input variable where the request document is stored. And very important the checkbox 'Create Instance' is checked. This means that this is the input activity that initiates a process instance. It is very important that this is the very first Receive or Pick activity in the BPEL process.

The Reply activity is the counterpart of the first Receive and is only applicable for a synchronous BPEL process. It responds to the calling Service Consumer with the contents of the outputVariable.

So in between we need to build up the contents of the outputVariable based upon the provided input.

Since we're in the BPEL Designer, you could have noticed that the Component Palette has changed. It shows the BPEL Constructs and Activities, and below those some other extension categories that I'll probably cover in a more advanced chapter:



Take the Assign activity:



and drag and drop it between the 'receiveInput' and 'replyOutput' activities:

When dragging the Assign activity, you'll notice the green add-bullets denoting the valid places where you could drop the activity. This results in:



You can right-click-and-edit or double-click the activity:



Which results in the Assign Editor. It opens on the Copy Rules tab, but first click on the General tab to edit the Name of the Assign:


Switch back to the Copy Rules tab. On the right, locate the outputVariable and 'Expand All Child Nodes', by right-clicking it and choosing the appropriate option:



Then drop the Expression icon (the right-most in the button bar) onto the greeting element of the variable:


This brings up the Expression Builder:



Here choose the concat() function under the category 'String Functions' under Functions. Place the cursor between the brackets and locate the name element under the helloWorldRequestMessage element under inputVariable.part1. Edit the expression to prefix the name with 'Hello ' and suffix it with an exclamation mark, resulting in the expression:
concat( 'Hello ', $inputVariable.part1/ns2:name,'!')

This results in:




Here you see that an element of the inputVariable is used in a function to build up the greeting. For more complex XSDs you can add as many copy rules as needed, just by dragging and dropping elements onto their target elements. For now, the assign is finished, so click OK.



Your process now looks like the example above. Save the process by clicking Save or Save All, the icons that look like old-school floppy disks.
Fire up the Integrated Weblogic

To be able to test the BPEL process, we'll need a SOA Server. On a real-life project, a proper server installation will (preferably and presumably) be provided to you and your team. This is recommended even for development, to be able to do integration tests for a complex chain of services and to be able to work against enterprise databases and information systems.

But for relatively small unit tests like this, Oracle provides an integrated WebLogic in the SOA QuickStart topology of JDeveloper, with a complete SOA Suite and Service Bus. To start it, choose the menu option Run->Start Server Instance (IntegratedWeblogicServer):



The first time you do this, you'll have to provide a user id for the administrator (usually weblogic) and a corresponding password (often welcome1). Provide the credentials to your liking (and remember them):


When clicking OK, a log viewer pops up at the bottom that shows the server output log of the integrated weblogic server:




Wait until the message appears:

SOA Platform is running and accepting requests. Start up took 92300 ms
IntegratedWebLogicServer startup time: 363761 ms.
[IntegratedWebLogicServer started.]

Now the SOA Suite Server is available to receive the HelloWorld Composite and do some tests.

Deploy ...

Deploying a SOA Composite is a special kind of deployment that differs from deploying a regular Web Application. Although you can find a deploy menu option in the Application menu of JDeveloper, this one is not suitable for SOA Composite deployments. For SOA Composite projects, you need to right-click on the project name and choose Deploy and then the name of the Project.


This starts a wizard with two options on the first page: Deploy to Application Server and Generate SAR File:

A SAR file is a so-called SOA Archive and is used to allow an administrator to deploy the project to a target environment without the need for JDeveloper. Choose the option Deploy to Application Server and click Next.

On the next page, select the Overwrite any existing composite with the same revision ID checkbox, but deselect Keep running instances after redeployment. Although this is the first deployment (so there are no existing running instances yet), after this deployment you'll find that this sort of deployment profile is added to the context menu, and not checking the Overwrite... option would cause the next deployments to fail. There's no need to keep existing instances running, though. Keep the Revision ID and click Next.

Select the pre-defined IntegratedWeblogicServer and click Finish.


The log pane is opened on the SOA tab:

If you've done everything ok, then a BUILD SUCCESSFUL message is shown. By the way, although it might not be clear to you, the integrated build tool Apache ANT is used for this.

When the BUILD SUCCESSFUL message is shown, you can switch to the Deployment tab. After building the SAR, it is sent to the weblogic server. If this finishes without errors you'll be able to test it.

... and test the BPEL process

Testing and monitoring the SOA Composite instances can be done using the Enterprise Manager - Fusion Middleware Control.

To open it, start an internet browser and navigate to http://localhost:7101/em (or the server:port where your admin server runs, extended with the URI '/em'). This should bring you to the Login page:



Use the credentials you used to start the IntegratedWeblogicServer for the first time to login.
Then the Enterprise Manager - Fusion Middleware Control landing page is opened for the Weblogic Domain

Click on the Navigator Icon to open up the Target Navigation and navigate to the composite under SOA->soa-infra->default:



Here the SOA node contains both service-bus projects and SOA Suite (soa-infra) Composite Deployments. The node Default is actually the Default partition. You can create several partitions, to catalog your deployments.

Open the HelloWorld[1.0] composite by clicking on it. Click on the Test button:




You can select a Service, Port and Operation as defined in the composite. Expand the nodes under SOAP Body, to navigate to the name input field. Fill in a nice value for name and click on Test Web Service:


After a (very) short while (dependent on the complexity of the service) the response will be shown:




Only for Synchronous services, a response message will be shown. But you can (practically) always click on the Launch Flow Trace  button to review the running, faulted or completed instance:




The flowtrace shows the different initiated components in the flow in a hierarchical way.
Here you can click on the particular components that you want to introspect. Clicking on the HelloWorldBPELProcess will open up the BPEL Flow:




If you click on the assign activity, this will show:




It might be that in your case you're not able to open this, or to view the message contents. This is caused by the Audit Level of the SOA Suite, which by default is set to 'Production', even on the SOA QuickStart...

To resolve this you'll need to go to the Common Properties of the SOA Infrastructure. You can use the Target Navigation browser to navigate to the soa-infra. But you can also use the SOA Composite to navigate to the SOA Infrastructure Common Properties:




Then use the SOA Infrastructure menu to navigate to SOA Administration -> Common Properties:



Here you can set the Audit Level to Development. Click on the Apply button to effectuate the change.




Do another test and check-out the BPEL Flow and review the message contents.

Conclusion

So this concludes the first part of my BPEL series. If you bravely followed my step-by-step instructions, you should be able to do some basic navigation through JDeveloper and the Enterprise Manager Fusion Middleware Control. I hope you found this entertaining, although you might already be an experienced SOA Developer. But then you probably didn't get this far...

In the next articles I hope to slowly increase the level of the subjects. Coming up: invoking other services and using scopes. But first I probably need to explain something about SoapUI to provide mock services, and to offer a more convenient way to test your services.

Paris and Netherlands Trip

Tim Hall - Mon, 2016-05-30 06:16

Early tomorrow I start a small series of events in Paris and the Netherlands.

Tomorrow I fly to Paris to speak at the Paris Province Oracle Meetup. It will be my first time in Paris, so I'll try to hit a few of the sites in the city centre if I can. I was originally hoping to fly out the same night as I have an event the following day, but the flights for that didn't work out. Instead I'm staying in a hotel at the airport. I've got an early flight out of Charles de Gaulle airport, so I figured I'd take the easy (but boring) approach of staying at the airport.

The next morning I fly to Amsterdam, take a train, then a taxi to the hotel, followed by a taxi to the OTN Cloud Developer Challenge (formerly known as the Hackathon). It started life as an extra day tagged on to an AMIS event by the Oracle ACE Program, like an ACE Briefing, but became a hackathon. I'm not sure if other people are allowed into it, but a few of us have got into teams and we are going to try and develop stuff with Oracle Cloud services. It's going to be a bit of a magical mystery tour as most of the people in my team are database people and know nothing about the developer and mobile cloud services. If we are able to produce something with them, it will prove they are easy.

Best Practices for the cloud !!

Tim Dexter - Mon, 2016-05-30 05:19

Greetings!!

Wish everyone a very happy Memorial Day !! 

Last week we published in OTN a white paper with title "Oracle BI Publisher Best Practices for SaaS Environments". As the title suggests, this white paper is a compilation of all the best practices for BI Publisher that are relevant to a Cloud Platform, especially for Fusion Applications. You can find the link to this white paper here. We will soon have an updated best practices guide for on-premise installations, so stay tuned.

Have a nice day !! 

Categories: BI & Warehousing

Adversarial analytics and other topics

DBMS2 - Mon, 2016-05-30 05:15

Five years ago, in a taxonomy of analytic business benefits, I wrote:

A large fraction of all analytic efforts ultimately serve one or more of three purposes:

  • Marketing
  • Problem and anomaly detection and diagnosis
  • Planning and optimization

That continues to be true today. Now let’s add a bit of spin.

1. A large fraction of analytics is adversarial. In particular:

  • Many of the analytics companies I talk with tell me that they have important use cases in security, anti-fraud or both.
  • Click fraud steals a large fraction of the revenue in online advertising and other promotion. Combating it is a major application need.
  • Spam is another huge, ongoing fight.
    • When Google et al. fight web spammers — which is of course a great part of what web search engine developers do — they’re engaged in adversarial information retrieval.
    • Blog comment spam is still a problem, even though the vast majority of instances can now be caught.
    • Ditto for email.
  • There’s an adversarial aspect to algorithmic trading. You’re trying to beat other investors. What’s more, they’re trying to identify your trading activity, so you’re trying to obscure it. Etc.
  • Unfortunately, unfree countries can deploy analytics to identify attempts to evade censorship. I plan to post much more on that point soon.
  • Similarly, de-anonymization can be adversarial.
  • Analytics supporting national security often have an adversarial aspect.
  • Banks deploy analytics to combat money-laundering.

Adversarial analytics are inherently difficult, because your adversary actively wants you to get the wrong answer. Approaches to overcome the difficulties include:

  • Deploying lots of data. Email spam was only defeated by large providers who processed lots of email and hence could see when substantially the same email was sent to many victims at once. (By the way, that’s why “spear-phishing” still works. Malicious email sent to only one or a few victims still can’t be stopped.)
  • Using unusual analytic approaches. For example, graph analytics are used heavily in adversarial situations, even though they have lighter adoption otherwise.
  • Using many analytic tests. For example, Google famously has 100s (at least) of sub-algorithms contributing to its search rankings. The idea here is that even the cleverest adversary might find it hard to perfectly simulate innocent behavior.

2. I was long a skeptic of “real-time” analytics, although I always made exceptions for a few use cases. (Indeed, I actually used a form of real-time business intelligence when I entered the private sector in 1981, namely stock quote machines.) Recently, however, the stuff has gotten more-or-less real. And so, in a post focused on data models, I highlighted some use cases, including:

  • It is increasingly common for predictive decisions to be made at [real-timeish] speeds. (That’s what recommenders and personalizers do.) Ideally, such decisions can be based on fresh and historical data alike.
  • The long-standing desire for business intelligence to operate on super-fresh data is, increasingly, making sense, as we get ever more stuff to monitor. However …
  • … most such analysis should look at historical data as well.
  • Streaming technology is supplying ever more fresh data.

Let’s now tie those comments into the analytic use case trichotomy above. From the standpoint of mainstream (or early-life/future-mainstream) analytic technologies, I think much of the low-latency action is in two areas:

  • Recommenders/personalizers.
  • Monitoring and troubleshooting networked equipment. This is generally an exercise in anomaly detection and interpretation.

Beyond that:

  • At sufficiently large online companies, there’s a role for low-latency marketing decision support.
  • Low-latency marketing-oriented BI can also help highlight system malfunctions.
  • Investments/trading has a huge low-latency aspect, but that’s somewhat apart from the analytic mainstream. (And it doesn’t fit well into my trichotomy anyway.)
  • Also not in the analytic mainstream are the use cases for low-latency (re)planning and optimization.

Related links

My April, 2015 post Which analytic technology problems are important to solve for whom? has a round-up of possibly relevant links.

Categories: Other

4 years and counting…

Tim Hall - Mon, 2016-05-30 05:12

It's 4 years since I started at my current company.

No job is ever perfect. You just have to try and find a place where the pros outweigh the cons. Your judgement about what is a pro and what is a con will vary drastically throughout your life. What suits you today may not tomorrow.

Let’s see if I can make it to 5 years…

Oracle AppsUnlimited - Migrate from On-Premise to Oracle Cloud ( Lift and Shift )

Senthil Rajendran - Mon, 2016-05-30 04:26

Before you read: please note that the documentation and procedures for lifting and shifting On-Premise to Oracle Cloud are constantly evolving. The instructions here are high level, and I recommend reviewing the latest documentation.

Link : Migrating an Existing Oracle E-Business Suite Installation to the Oracle Compute Cloud Service

Migrate from On-Premise to Oracle Cloud ( Lift and Shift )

An on-premise 12.1.3 or 12.2 environment can be moved to the cloud; please review the "What Do You Need?" section to understand the detailed requirements.

Here are the high level steps

  • Identify the EBS environment for lift and shift. Make sure that it meets the requirement for migration.
  • Create the compute infrastructure. Log into the compute service console and create the storage and virtual machines with the computing power required. Format the storage volumes and mount them as required.
  • Download the cloning utility Patch 22336899
  • Open the ports between the Source and the Target hosts
  • Update the cln.props file, which is available inside the patch, and ensure that the properties MODE and BACKUP_TYPE are commented out, as they are used for EBS with DBaaS provisioning (see the sketch after this list).
  • Run the EBS Cloud Clone utility: perl ./ebsclone.pl
  • Run post configuration steps as necessary.
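As a rough sketch, updating cln.props and running the clone utility might look like the following on the source host. The patch file name, extraction layout, and exact property format inside cln.props are assumptions here, so follow the patch readme rather than this verbatim.

# Hypothetical sketch - file names and property format are assumptions
unzip p22336899_*.zip -d ebsclone && cd ebsclone

# Comment out MODE and BACKUP_TYPE (they are used for EBS with DBaaS provisioning)
sed -i.bak -e 's/^\(MODE\)/#\1/' -e 's/^\(BACKUP_TYPE\)/#\1/' cln.props

# Run the EBS Cloud Clone utility
perl ./ebsclone.pl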

What happens in the backend ?
  • Pre-Clone is run on the DB and MT
  • Necessary files are copied, tarred and compressed
  • Files are moved from On-Premise to Cloud 
  • Config clone is run on the Target Cloud IaaS VMs
Hope this helps.

Restore and Recovery from Incremental Backups : Video

Hemant K Chitale - Mon, 2016-05-30 03:04
A Youtube Video on Restore and Recovery from Incremental Backups.


Categories: DBA Blogs

resizing database as a service with Oracle

Pat Shuff - Mon, 2016-05-30 02:07
Historically, what happens to a database months after deployment has always been an issue. Suppose we go out and purchase a computer and disk storage, then deploy a database onto the server. If we oversize the hardware and storage, we wasted budget. If we undersize the hardware and storage, we had to purchase a new computer or new storage, get an operating system expert to reconfigure everything on the new server, and get a database administrator to reconfigure the database installation to run on the new server or new storage. For example, if we purchased a 1 TB disk drive and allocated it all to /u02, the database had a ton of space to grow into. We put the DATA area there and put the RECO area into /u03. Our database service suddenly grows wildly, we have a record number of transactions, we increase the offerings in our product catalog, and our tablespace suddenly grows to over 800 GB. Disk performance starts to suffer and we want to grow our 1 TB to 2 TB. To do this we have to shut down our database, shut down the operating system, attach the new disk, format and mount it as /u05, copy the data from /u02 to /u05, remount /u05 as /u02, and reboot the system. We could have backed up the database from /u02 and reformatted /u02 and /u05 as a logical volume, allowing us to dynamically grow the disk and to purchase a 1 TB disk for /u05 rather than a 2 TB disk, reducing our cost. We successfully grew our tablespace by purchasing more hardware and involving an operating system admin and our database administrator. We were only down for a day or half a day while we copied all of our data and modified the disk layout.

Disk vendors attacked this problem early by offering network or fiber attached storage rather than direct attached storage. They allow you to add disks dynamically, keeping you from having to go out and purchase new disks. You can attach your disk as a logical unit number and add spindles as desired. This now requires you to get a storage admin involved to update your storage layout and grow your logical unit space from 1 TB to 2 TB. You then need to get your operating system admin to grow the file system that is on your /u02 logical unit mount to allow your database admin to grow the tablespace beyond the 1 TB boundary. Yes, this solves the problem of having to bring down the server, touch the hardware, and add new cables and spindles to the computer. It allows data centers to be remote and configurations to be done dynamically with remote management tools. It also addresses the issue of disk failures much more easily and quickly by pushing the problem to the storage admin to monitor and fix single disk issues. It solves a problem, but there are better ways today to address this issue.

With infrastructure as a service we hide these issues by treating storage in the cloud as dynamic storage. With Amazon we can provision our database in EC2 and storage in S3. If we need to grow our S3, we allocate more storage to our bucket and grow the file system in EC2. The database admin then needs to go in and grow the tablespace to fill the new storage area. We got rid of the need for a storage admin, reduced our storage cost, and eliminated a step in our process. We still need an operating system admin to grow the file system and a database admin to grow the tablespace. The same is true if we use Azure compute or Oracle IaaS.

Let's go through how to attach and grow storage on a generic compute instance. We have a CentOS image running in IaaS on the Oracle Cloud. We can see that the instance has 9 GB allocated to it as the root operating system. We would like to add a 20 GB disk, then grow the disk to 40 GB as a second test. At first we notice that our instance is provisioned and we see the 9 GB disk labeled CentOS7 allocated to our instance as /dev/xvdb. We then create a root partition /dev/xvdb1, provision an operating system onto it using the xfs file system, and mount it as the root filesystem.

To add a 20 GB disk, we go into the Compute management screen, and create a new storage volume. This is easy because we just create a new volume and allocate 20 GB to it.

Given that this disk is relatively small, we don't have to wait long and can then attach it to our CentOS7 instance by clicking on the hamburger menu to the right of our new 20 GB disk and attaching it to our CentOS7 instance.

It is important to note that we did not need to reboot the instance but suddenly the disk appears as /dev/xvdc. We can then partition the disk with fdisk, create a file system with mkfs, and mount the disk by creating a new /u02 mount point and mounting /dev/xvdc1 on /u02.
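For reference, those steps look roughly like this on the guest operating system. This is a sketch only; device names follow the example above and fdisk is driven interactively.

# Partition the newly attached disk (inside fdisk: n, p, 1, accept defaults, w)
sudo fdisk /dev/xvdc

# Create a file system and mount it (the file system choice matters, as discussed below)
sudo mkfs -t ext4 /dev/xvdc1
sudo mkdir -p /u02
sudo mount /dev/xvdc1 /u02

# Add an /etc/fstab entry if the mount should survive a reboot
echo '/dev/xvdc1 /u02 ext4 defaults 0 0' | sudo tee -a /etc/fstab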

The real exercise here is to grow this 20 GB mounted disk to 40 GB. We can go into the Volume storage and Update the storage to a larger size. This is simple and does not require a reboot or much work. We go to the Storage console, Update the disk, grow it to 40 GB, and go back to the operating system and notice that our 20 GB disk is now 40 GB. We can create a new partition /dev/xvdc2 and allocate it to our storage.

Note that we selected poorly when we made our file system selection. We chose to lay out an ext3 file system onto our /dev/xvdc1 partition. We can't grow the ext3 filesystem; we should have selected ext4. We did this on purpose to prove a point. The file system selection is critical, and if you make the wrong choice there is no turning back. The only way to correct this is to get a backup of our /u02 mount and restore it onto the newly formatted ext4 partition. We also made a second wrong choice of laying the file system directly on the raw partition. We really should have created a logical volume from this one disk and put the file system on the logical volume. This would allow us to take our new /dev/xvdc2, create a new physical volume, add the physical volume to our volume group, and grow the ext4 file system. Again, we did this on purpose to prove a point. You need to plan on expansion when you first lay out a system. To solve this problem we need to unmount the /u02 disk, delete the /dev/xvdc1 and /dev/xvdc2 partitions, create a physical volume with logical volume manager, create a logical volume, and lay an ext4 file system onto this new volume. We then restore our data from the backup and can simply grow the partition much more easily in the future. We are not going to go through these steps in detail here, because the point of the exercise is to show that this is much easier with platform as a service than on infrastructure as a service.
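That said, for reference, a minimal sketch of the logical volume approach described above might look like the following, assuming a backup of /u02 was taken to /backup/u02.tar.gz and that /dev/xvdc1 and /dev/xvdc2 are the two partitions on the grown disk.

# Rebuild /u02 on LVM so it can be grown online later - sketch only
sudo umount /u02
sudo pvcreate /dev/xvdc1 /dev/xvdc2
sudo vgcreate dataVG /dev/xvdc1 /dev/xvdc2
sudo lvcreate -n datalv -l 100%FREE dataVG
sudo mkfs -t ext4 /dev/dataVG/datalv
sudo mount /dev/dataVG/datalv /u02
sudo tar -xzf /backup/u02.tar.gz -C /u02    # restore the data

# Growing later is then a matter of adding a partition (e.g. /dev/xvdc3) and running:
#   pvcreate /dev/xvdc3 && vgextend dataVG /dev/xvdc3
#   lvextend -l +100%FREE /dev/dataVG/datalv && resize2fs /dev/dataVG/datalv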

If we look at a database as a service disk layout we notice that we have /dev/xvdc1 as /u01, which represents the ORACLE_HOME; /dev/mapper/dataVolGroup-lvol0 as /u02, which represents the tablespace area for the database; /dev/mapper/fraVolGroup-lvol0, which represents the fast recovery area (where RMAN dumps backups); and /dev/mapper/redoVolGroup-lvol0, which represents the redo log area (where DataGuard dumps the transaction logs). The file systems are logical volumes and created by default for us. The file systems are ext4, which can be seen by looking at the /etc/fstab file. If we need to grow the /u02 partition we can do this by using the scale up option for the database. We can add 20 GB and extend the data partition or the fra partition. We also have the option of attaching the storage as /u05 and manually growing partitions as desired. It is important to note that scaling up the database does require a reboot and restart of the database. When we try to scale up this database instance we get a warning that there is a Java service that depends upon the database and it must be stopped before we can add the storage desired.

In summary, we can use IaaS to host a database. It does get rid of the need for a storage administrator, but it does not get rid of the need for an operating system administrator. We still have to know the file system and operating system commands. If we use PaaS to host a database, we can add storage as a database administrator and not need to mess with logical volume or file system commands. We can grow the file system and add table extents quickly and easily. If we undersize our storage, correcting this mistake is much easier than it was years ago. We don't need to overpurchase storage anymore because we can allocate it on demand and pay for the storage as we use it. We can easily remove a headache that has been an issue for years: we no longer need to triple our storage estimates, and can go with realistic estimates and control the budget better and more easily.

Storing Date Values As Characters Part II (A Better Future)

Richard Foote - Sun, 2016-05-29 18:35
In the previous post, I discussed how storing date values within a character data type is a really really bad idea and illustrated how the CBO can easily get its costings totally wrong as a result. A function-based date index helped the CBO get the correct costings and protect the integrity of the date data. During […]
Categories: DBA Blogs

Using Oracle on OS X? Instant Client 12.1 is here

Christopher Jones - Sun, 2016-05-29 07:45

Oracle Instant Client 12.1 for OS X was just released and is now available for free download from OTN for 32-bit and 64-bit applications. Update: the bundles were re-released 14 June 2016 with a connectivity fix.

Instant Client provides libraries and tools for connecting to Oracle Database. Among other uses, languages such as C, Python, PHP, Ruby, Perl and Node.js can use Instant Client for database connectivity.

In addition to having Oracle 12.1 client features like auto-tuning, new in this release is an ODBC driver.

The install instructions have been updated to reflect the resolution of the linking issues caused by the OS X El Capitan changes with SIP to ignore DYLD_LIBRARY_PATH in sub processes. The ~/lib location required for Instant Client 11.2 on El Capitan is no longer needed with Instant Client 12.1. Note if you are creating your own apps, you should link with -rpath.
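For example, when building your own OCI application against Instant Client, a link line along these lines embeds the runtime path. This is a sketch only: it assumes you have unzipped the Basic and SDK packages to /opt/instantclient_12_1, and you may need a libclntsh.dylib symlink pointing at the versioned library if one is not already present.

# Hypothetical paths - adjust to wherever you unzipped Instant Client
IC=/opt/instantclient_12_1

cc -I$IC/sdk/include -L$IC -lclntsh \
   -Wl,-rpath,$IC \
   -o myociapp myociapp.c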

This release of Instant Client supports Mavericks, Yosemite, and El Capitan. Applications can connect to Oracle Database 10.2 or more recent. You should continue using the older 11.2 client if you need to connect to Oracle Database 9.2.

Update: Official installation doc and release notes are now on the doc portal: Oracle Database Online Documentation 12c Release 1 (12.1).

Questions and comments can be posted to the OTN forum for whichever component or tool you are using. General questions about Instant Client are best posted to the OCI Forum.

If you are interested in running Oracle Database itself on OS X, see my earlier post The Easiest Way to Install Oracle Database on Mac OS X.

Oracle JET Handling ADF BC 12.2.1 REST Validation Event

Andrejus Baranovski - Sat, 2016-05-28 11:32
I already had a post about how to handle ADF BC validation messages in Oracle JET - Handling ADF BC 12.2.1 REST Validation in Oracle JET. There is an improvement I would like to share. JET submits data to ADF BC REST, where validation happens. From the JET perspective the data is correct, however it may fail validation in ADF BC, and then we need to reset the values back to the originals in JET. Previously I was re-executing the entire JET collection; the drawback of this approach was that the currently selected row was lost and control was returned to the first page in a table configured with pagination.

With the improved solution, the current row remains selected even if ADF BC returns a validation failure, as you can see in the example below:


A callback method for the JET model save API is set to parse the ADF BC validation messages and, at the end, to refresh the current row model. The current row is refreshed by re-fetching its data, so only one row is re-fetched from REST. Previously the entire collection was refreshed, causing the current row to be reset:
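
A minimal sketch of the idea (not the author's actual code; the model variable and the parseValidationMessages helper are hypothetical): the error callback of the model's save call handles the validation payload and then re-fetches just that row:

// Hypothetical oj.Model instance representing the current row
model.save(updatedAttributes, {
    error: function (model, xhr) {
        // Parse ADF BC validation messages from the REST error response
        parseValidationMessages(xhr);
        // Re-fetch only this row, so table selection and paging are preserved
        model.fetch();
    }
});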


Download sample application - JETCRUDApp_v9.zip (you must run ADF BC REST application in JDEV 12.2.1 and JET in NetBeans 8).

Oracle Partners Cloud Connection: Are you connected?

OPN Cloud Connection was launched back at Oracle OpenWorld 2014 as a new community for partners to engage, explore and learn about Oracle Cloud business opportunities. Since then it...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Encrypting sensitive data in puppet using hiera-eyaml

Pythian Group - Fri, 2016-05-27 13:36

Puppet manifests can hold a lot of sensitive information. Sensitive data such as passwords or certificates is used in the configuration of many applications. Exposing it in a puppet manifest is not ideal and may conflict with an organization's compliance policies. That is why data separation is a very important aspect of secure puppet code.

Hiera is a pluggable hierarchical database that helps by keeping data out of puppet manifests. Puppet classes can look up data in hiera; hiera searches the hierarchy and returns the first value it finds.
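
As a minimal sketch of such a lookup (the key name and default value are hypothetical), a class can read a value with the hiera function and fall back to a default if no level of the hierarchy defines it:

class profile::app {
  # Searches the hierarchy and returns the first match for the key
  $db_password = hiera('profile::app::db_password', 'changeme')
}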

Although Hiera is able to provide data separation, it cannot ensure security of sensitive information. Anyone with access to the Hiera data store will be able to see the data.

Enter Hiera-eyaml. Hiera-eyaml is a backend for Hiera that provides per-value encryption of sensitive data within yaml files to be used by Puppet.

The hunner/hiera puppet module (https://forge.puppetlabs.com/hunner/hiera) can be used to manage hiera with eyaml support.

The module class can be used like below:

modules/profile/manifests/hieraconf.pp

class profile::hieraconf {
  # hiera configuration
  class { 'hiera':
    hierarchy => [
      '%{environment}/%{calling_class}',
      '%{environment}',
      '%{fqdn}',
      'common',
      'accounts',
      'dev',
    ],
  }
}

The /etc/puppet/hiera.yaml file would look like the following after the puppet run:

/etc/puppet/hiera.yaml

# managed by puppet
:backends:
  - yaml
:logger: console
:hierarchy:
  - "%{environment}/%{calling_class}"
  - "%{environment}"
  - "%{fqdn}"
  - common
  - accounts
  - dev
:yaml:
  :datadir: /etc/puppet/hieradata

Moving data to Hiera
In the following example, the diamond collector for MongoDB has data such as hosts, user and password. The collector is only enabled for the grafana.pythian.com host.

modules/profile/manifests/diamond_coll.pp

[..] diamond::collector { 'MongoDBCollector':
  options => {
    enabled => $fqdn ? { /grafana.pythian.com/ => true, default => false },
    hosts   => 'abc.pythian.com,xyz.pythian.com',
    user    => 'grafana',
    passwd  => 'xxxx',
  }
}

To move the data to hiera, the create_resources function can be used in the manifest.

modules/profile/manifests/diamond_coll.pp

class profile::diamond_coll {
  [..]
  $mycollectors = hiera('diamond::collectors', {})
  create_resources('diamond::collector', $mycollectors)
  [..]
}

Then a new yaml file can be created and the diamond::collectors data for MongoDBCollector can be abstracted like below:

hieradata/grafana.pythian.com.yaml

diamond::collectors:
  MongoDBCollector:
    options:
      enabled: True
      hosts: abc.pythian.com,xyz.pythian.com
      user: grafana
      passwd: xxxx

Moving data to Hiera-eyaml
The hiera puppet code can be changed to the following to enable eyaml.

class profile::hieraconf {
  # hiera configuration
  class { 'hiera':
    hierarchy => [
      '%{environment}/%{calling_class}',
      '%{environment}',
      '%{fqdn}',
      'common',
      'accounts',
      'dev',
    ],
    eyaml           => true,
    eyaml_datadir   => '/etc/puppet/hieradata',
    eyaml_extension => 'eyaml',
  }
}

This adds the eyaml backend to puppet after a puppet run on the puppet server. The module does the following to achieve this:

1. The hiera-eyaml gem is installed.
2. The following keys are created for hiera-eyaml using 'eyaml createkeys':

/etc/puppet/keys/private_key.pkcs7.pem
/etc/puppet/keys/public_key.pkcs7.pem

3. /etc/puppet/hiera.yaml is updated.

The /etc/puppet/hiera.yaml file would look like the following after the puppet run:

/etc/puppet/hiera.yaml

# managed by puppet
:backends:
  - eyaml
  - yaml
:logger: console
:hierarchy:
  - "%{environment}/%{calling_class}"
  - "%{environment}"
  - "%{fqdn}"
  - common
  - accounts
  - dev
:yaml:
  :datadir: /etc/puppet/hieradata
:eyaml:
  :datadir: /etc/puppet/hieradata
  :extension: eyaml
  :pkcs7_private_key: /etc/puppet/keys/private_key.pkcs7.pem
  :pkcs7_public_key: /etc/puppet/keys/public_key.pkcs7.pem

The puppetmaster needs to be restarted after this, as changes to hiera.yaml only take effect after a restart.
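
For example (the exact service name depends on how the master is installed, so treat this as a hypothetical sketch):

# On a master running the classic puppetmaster service
service puppetmaster restart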

Using eyaml command line

Eyaml commands need to be run from a directory containing the keys directory (in this example /etc/puppet). The following command can be used to encrypt a password; it returns the result in two forms, string and block.

# eyaml encrypt -p
Enter password: ****
string: ENC[PKCS7,MIIBeQYJKoZIhvcN[..]Fg3jAmdlCLbQ] OR
block: >
ENC[PKCS7,MIIBeQYJKoZIhvcNAQcDoIIBajCCAWYCAQAxggEhMIIBHQIBADAFMAACAQEw
DQYJKoZIhvcNAQEBBQAEggEAQMo0dyWRmBC30TVDVxEOoClgUsxtzDSXmrJL
pz3ydhvG0Ll96L6F2WbclmGtpaqksbpc98p08Ri8seNmSp4sIoZWaZs6B2Jk
BLOIJBZfSIcHH8tAdeV4gBS1067OD7I+ucpCnjUDTjYAp+PdupyeaBzkoWMA
X/TrmBXW39ndAATsgeymwwG/EcvaAMKm4b4qH0sqIKAWj2qeVJTlrBbkuqCj
qjOO/kc47gKlCarJkeJH/rtErpjJ0Et+6SJdbDxeSbJ2QhieXKGAj/ERCoqh
hXJiOofFuAloAAUfWUfPKnSZQEhHCPDkeyhgDHwc8akWjC4l0eeorZgLPcs1
1oQJqTA8BgkqhkiG9w0BBwEwHQYJYIZIAWUDBAEqBBC+JDHdj2M2++mFu+pv
ORXmgBA/Ng596hsGFg3jAmdlCLbQ]

To decrypt, the following command can be used.

# eyaml decrypt -s 'ENC[PKCS7,MIIBeQYJKoZIhvcN[..]Fg3jAmdlCLbQ]'
test

The encrypted string or block can be used in hiera. Using our previous example, the hiera file would look like the following. We also have to rename the file from .yaml to .eyaml.

hieradata/grafana.pythian.com.eyaml

diamond::collectors:
  MongoDBCollector:
    options:
      enabled: True
      hosts: abc.pythian.com,xyz.pythian.com
      user: grafana
      passwd: ENC[PKCS7,MIIBeQYJKoZIhvcN[..]Fg3jAmdlCLbQ]

Encrypting certificates
Following is a standard file resource used to copy an SSL certificate.

environments/production/manifests/logstash.pythian.com.pp

file { '/etc/logstash/certs/logstash-forwarder.crt':
  ensure => present,
  mode   => '0644',
  owner  => 'root',
  group  => 'root',
  source => 'puppet:///modules/logstash/logstash-forwarder.crt',
}

The file resource can be moved to hiera using hiera_hash.

environments/production/manifests/logstash.pythian.com.pp

$files = hiera_hash('files')
create_resources(file, $files)

The data can be added to a yaml file.

hieradata/common.yaml

---
files:
  '/etc/logstash-forwarder/certs/logstash-forwarder.crt':
    ensure : present
    mode   : '0644'
    owner  : 'root'
    group  : 'root'
    source : 'puppet:///modules/logstash/logstash-forwarder.crt'

To encrypt the data, the following command can be used.

# eyaml encrypt -f modules/logstash/files/logstash-forwarder.crt

The returned string value can be added using the content parameter of the file resource.

hieradata/common.eyaml

[..]
files:
  '/etc/logstash-forwarder/certs/logstash-forwarder.crt':
    ensure  : present
    mode    : '0644'
    owner   : 'root'
    group   : 'root'
    content : 'ENC[PKCS7,MIIB+wYJKoZI[..]C609Oc2QUvxARaw==]'

The above examples cover encrypting strings and files, which constitute most of the sensitive data used in puppet code. Incorporating hiera-eyaml into the puppet workflow helps ensure the compliance and security of sensitive data.

Categories: DBA Blogs

sketchnote of Ed Wilson’s talk on ‘Operations Management Suite’

Matt Penny - Fri, 2016-05-27 11:17

Wilson, Ed - Microsoft Operations Management Suite

The OMS blog is: https://blogs.technet.microsoft.com/msoms/
The Scripting Guy blog is: https://blogs.technet.microsoft.com/heyscriptingguy/
Ed Wilson is on twitter here: https://twitter.com/ScriptingGuys
The London Powershell Users Group tweets here: https://twitter.com/lonpsug
The UK Powershell Users Group is http://www.get-psuguk.org/

Keywords: OMS, Powershell


Categories: DBA Blogs

At last APEX_PAGE.GET_URL!

Tony Andrews - Fri, 2016-05-27 04:51
I have just discovered (thanks to a tweet by @jeffreykemp) that in APEX 5.0 there is now a function called APEX_PAGE.GET_URL. About time! I've been using a home-made version for years. So if I want to create a URL that redirects back to the same page with a request I can just write:

return my_apex_utils.fp (p_request=>'MYREQUEST');

One difference is that the one I use has a [..]
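
For reference, a minimal sketch of how the new function can be called from within an APEX session (the page number and request value here are made up):

declare
  l_url varchar2(4000);
begin
  -- Build a URL back to page 1 of the current application with a request value
  l_url := apex_page.get_url (
             p_page    => 1,
             p_request => 'MYREQUEST' );
end;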
