
Feed aggregator

Exadata X5 – A Practical Point of View of the New Hardware and Licensing

Pythian Group - Wed, 2015-02-25 12:10

Oracle recently announced its latest iteration of Exadata – the X5-2. It includes a hardware refresh to the most recent Xeon® E5-2699 v3 CPUs. These new CPUs boost the total core count in a full rack to 288, which is higher than the current 8-socket “big machine” version, the X4-8, with only 240 cores.

But the most exciting part is the all-flash version of Exadata. In the previous generation – the X4 – Oracle had to switch from 15K drives to 10K drives in order to boost capacity from 600 GB to 1200 GB per hard drive and keep disk space higher than the flash cache size. At the time of the X4 announcement, we were already wondering why Oracle was still offering high-speed disks rather than switching to all flash, and now we know why: that type of high-performance flash wasn’t quite ready.

Maintaining high IO rates over long periods of time required changes to the ILOM, so that cooling fan speed is managed based on many individual temperature sensors inside the flash cards (details). Removing the SAS controller and using the new NVMe connectivity resulted in much higher bandwidth per drive – 3.2 GBytes/sec vs. the old 1.2 GBytes/sec SAS.

With temperature and bandwidth sorted out, we now have a super-high-performance option for Exadata – EF, or Extreme Flash – which delivers a stunning 263 GB/sec uncompressed scan speed in a full rack. The performance gap between the High Capacity option and the EF flash option is now much wider, and the high-performance option in Exadata X5 is finally viable. In Exadata X4 it made so little difference that it was pointless.

[Image: Exadata X4 vs X5 comparison]

The one thing I wonder about with the X5 announcement is why the X5-2 storage server still uses the very old and quite outdated 8-core CPUs. I’ve seen many cases where a Smart Scan on an HCC table is CPU bound on the storage server even when reading from spinning disk. I am going to guess that there’s some old CPU inventory to clean up. But that may not end up being such a problem (see the “all columnar” flash cache feature).

But above all, the most important change is the incremental licensing option. With 36 cores per server, even the 1/8th rack configuration ran into multiple millions of dollars in licenses, and in many cases that was too much for the problem at hand.

The new smallest configuration is:

  • 1/8th rack, with 2 compute nodes
  • 8 cores enabled per compute node (16 total)
  • 256 GB RAM per node (upgradable to 768 GB per node)
  • 3 storage servers with only half the cores, disks and flash enabled

You can then license additional cores as you need them, two cores at a time – similar to how the ODA licensing option works. You cannot reduce the number of licensed cores.

The licensing rule changes go even further. You can now mix and match compute and storage servers to create even more extreme options. Some non-standard examples:

  • Extreme Memory – more compute nodes with max RAM, reduced licensed cores
  • Extreme Storage – replace compute node with storage nodes, reduced licensed cores

[Image: custom Exadata X5 configuration examples – link to video]

In conclusion, the Oracle Exadata X5 configuration options and the changes they bring to licensing allow an architect to craft a system that will meet any need and allow for easy, small-step increments in the future, potentially without any hardware changes.

There are many more exciting changes in Oracle 12c, Exadata X5 and the new storage server software which I may cover in the future as I explore them in detail.

Categories: DBA Blogs

Log Buffer #411, A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2015-02-25 12:00

This Log Buffer Edition brings you some blog posts from Oracle, SQL Server and MySQL.

Oracle:

Suppose you have a global zone with multiple zpools that you would like to convert into a native zone.

The digital revolution is creating abundance in almost every industry—turning spare bedrooms into hotel rooms, low-occupancy commuter vehicles into taxi services, and free time into freelance time.

Every time I attend a conference, the Twitter traffic about said conference is obviously higher.  It starts a couple weeks or even months before, builds steadily as the conference approaches, and then hits a crescendo during the conference.

Calling All WebLogic Users: Please Help Us Improve WebLogic Documentation!

Top Two Cloud Security Concerns: Data Breaches and Data Loss

SQL Server:

This article describes a way to identify the user who truncated a table and how you can recover the data.

When SQL Server 2014 was released, it included Hekaton, Microsoft’s much talked about memory-optimized engine that brings In-Memory OLTP into play.

Learn how you can easily spread your backup across multiple files.

Daniel Calbimonte has written a code comparison for MariaDB vs. SQL Server as it pertains to how to comment, how to create functions and procedures with parameters, how to store query results in a text file, how to show the top n rows in a query, how to use loops, and more.

The article shows a simple way we managed to schedule index rebuilds and reorganizations, using a scheduled job, for a SQL instance with 106 databases used by one application.

MySQL:

How to setup a PXC cluster with GTIDs (and have async slaves replicating from it!)

vCloud Air and business-critical MySQL

MySQL Dumping and Reloading the InnoDB Buffer Pool

How to benchmark MongoDB

MySQL Server on SUSE 12

Categories: DBA Blogs

SQL Server 2014 Cumulative Update 6

Pythian Group - Wed, 2015-02-25 11:59

Hello everyone,

Just a quick note to let you know that this week, while most of North America was enjoying a break, Microsoft released the 6th cumulative update for SQL Server 2014. This update contains fixes for 64 different issues, distributed as follows:

[Chart: distribution of fixes in SQL Server 2014 Cumulative Update 6]

As the name implies, this is a cumulative update, which means it is not necessary to install the previous five if you don’t already have them. Please remember to test any update thoroughly before applying it to production.

The cumulative update and the full release notes can be found here: https://support.microsoft.com/kb/3031047/en-us?wa=wsignin1.0

 

 

Categories: DBA Blogs

Microsoft addresses bug that impacts all Windows iterations [VIDEO]

Chris Foot - Wed, 2015-02-25 10:33

Transcript

Hi, welcome to RDX! Microsoft has finally addressed a bug that could have enabled hackers to initiate man-in-the-middle attacks. The vulnerability, which dates back nearly 15 years, was discovered by JAS Global Advisors, a security firm from Chicago.

The flaw, which was dubbed JASBUG, posed grievous concerns for many organizations. In some cases, JASBUG could allow an attacker to install malware, manipulate data or create administrator accounts. All of these actions were conducted through a business’s Active Directory.

Why did it take so long to address? SiliconANGLE noted that JASBUG was rooted in the Windows OS design, which meant Microsoft had to re-engineer central components of its flagship OS and add a number of new functions. The patch takes effect once users reboot their systems.

Thanks for watching!

The post Microsoft addresses bug that impacts all Windows iterations [VIDEO] appeared first on Remote DBA Experts.

Introducing Oracle Big Data Discovery Part 2: Data Transformation, Wrangling and Exploration

Rittman Mead Consulting - Wed, 2015-02-25 09:51

In yesterday’s post I looked at Oracle Big Data Discovery and how it brought the search and analytic capabilities of Endeca to Hadoop. We looked at how the Oracle Endeca Information Discovery Studio application works with a version of the Endeca Server engine to analyse and visualise sample sets of data from the Hadoop cluster, and how it uses Apache Spark to retrieve data from Hadoop and then transform that data to make it more suitable for data discovery and data analysis applications. Oracle Big Data Discovery is designed to work alongside ODI and GoldenGate for Big Data once you’ve decided on your main data flows, and Oracle Big Data SQL for BI tool and application access to the entire “data reservoir”. So how does Big Data Discovery work, and what role does it play in the overall big data project workflow?

The best way to think of Big Data Discovery, to my mind, is “Endeca on Hadoop”. Endeca Information Discovery had three main parts to it: the data-loading part, performed using Endeca Information Discovery Integrator and, more recently, the personal data upload feature in Endeca Information Discovery Studio; the Endeca Server engine, into which data was ingested, stored in a key/value-store NoSQL database, indexed, parsed and enriched; and the graphical user interface provided by Studio, used to analyse that data. As I explained in more detail in my first post in the series yesterday, Big Data Discovery runs the Studio and DGraph (Endeca Server) elements on one or more dedicated nodes, reads data in from Hadoop and then writes it back in transformed states using Apache Spark, as shown in the diagram below:


As the data discovery and analysis features in Big Data Discovery rely on getting data into the DGraph (Endeca Server) engine first of all, this implies two things: first, we’ll need to take a subset or sample of the entire Hadoop dataset and load just that into the DGraph engine, and second, we’ll need some means of transforming and “massaging” that data so it works well as a data discovery set, and then writing those changes back to the full Hadoop dataset if we want to use it with some other tool – OBIEE or Big Data SQL, for example. To see how this process works, let’s use the same Rittman Mead Apache webserver logs that I’ve used in my previous examples, and bring that data and some additional reference data into Big Data Discovery.

The log data from the RM webserver is in Apache Combined Log Format and a sample of the rows looks like this:


For data to be eligible for ingestion into Big Data Discovery, it has to be registered in the Hive Metastore, with the metadata made available to external tools through the HCatalog service. This means you already need to have created a Hive table over each data source, either pointing the table at regular fixed-width or delimited files, or using a SerDe to translate another file format – say a compressed/column-store format like Parquet – into a format that Hive can understand. In our case I can use the RegEx SerDe that I first used in this blog post a while ago to create a Hive table over the log file and split out the various log file elements, with the resulting DDL looking like this:

CREATE EXTERNAL TABLE apachelog (
  host     STRING,
  identity STRING,
  user     STRING,
  time     STRING,
  request  STRING,
  status   STRING,
  size     STRING,
  referer  STRING,
  agent    STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?",
  "output.format.string" = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s"
)
STORED AS TEXTFILE
LOCATION '/user/oracle/rm_logs';

If I registered the SerDe with Big Data Discovery I could ingest the table and file at this point; alternatively, I can use a Hive CTAS statement to remove the dependency on the SerDe and ingest into BDD without any further configuration.

create table access_logs as
select * 
from apachelog;

At this point, if you’ve got the BDD Hive Table Detector running, it should pick up the presence of the new Hive table and ingest it into BDD (you can whitelist table names and restrict it to certain Hive databases if needed). Or, you can manually trigger the ingestion from the Data Processing CLI on the BDD node, like this:

[oracle@bddnode1 ~]$ cd /home/oracle/Middleware/BDD1.0/dataprocessing/edp_cli
[oracle@bddnode1 edp_cli]$ ./data_processing_CLI -t access_logs;

The data processing step then creates an Apache Oozie job that samples a statistically relevant subset of the data into Apache Spark – with a 1% sample providing 95% sample accuracy – which is then profiled, enriched and loaded into the Big Data Discovery DGraph engine for further transformation, exploration and analysis within Big Data Discovery Studio.


The profiling step in this process scans the incoming data and helps BDD determine the datatype of each Hive table column, the distribution of values within the column and so on, whilst the enrichment part identifies key words, phrases and other key lexical facts about the dataset. A key concept here is that BDD typically works with a representative sample of your Hive table contents, not the whole table, as all the data you analyse has to fit within the memory space of the DGraph engine, just as it used to with Endeca Server. At some point it’s likely that the functionality of the DGraph engine will be unbundled from the Endeca Server and run natively across the actual Hadoop cluster, but for now you have to separately ingest data into the DGraph engine (which can run clustered on BDD nodes) and analyse it there. The rules of sampling are that if you’ve got a sufficiently big sample – say, 1m rows – then regardless of the size of the main dataset, that sample is considered sufficiently representative (95% in this case) that loading a bigger sample set isn’t really worth the effort. Bear in mind, though, that when working with a BDD dataset you’re working with a sample, not the full set, so if a value you’re looking for is missing it might simply not be in this particular sample.
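BDD does this sampling itself as part of the ingestion job, but if you want to eyeball a roughly comparable subset directly in Hive, block sampling gives a quick approximation. This is purely an illustrative sketch of the idea – the TABLESAMPLE clause and percentage below are mine, not what BDD actually runs:

-- Hypothetical: pull roughly a 1% sample of the logs for a quick look in Hive.
-- Block sampling works at HDFS block granularity, so small tables may return
-- far more (or even all) rows than the percentage suggests.
CREATE TABLE access_logs_sample AS
SELECT *
FROM access_logs TABLESAMPLE (1 PERCENT) s;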

Once you’ve ingested the new dataset into BDD, you see it listed amongst the others that have previously been ingested, like this:


At this point you can explore the dataset, to take an initial look at the patterns and values in the dataset in its raw form.


Unfortunately, in this raw form the data in the access_logs table isn’t all that useful – details of the page request URL are mixed in with the HTTP protocol and method, for example; dates are held as strings; details of the person accessing the site are in IP address format rather than a geographical location, and so on. In previous examples on this blog I’ve looked at various methods to cleanse, transform and enhance the data in log file tables like this, using tools and techniques such as Hive table transformations, Pig and Apache Spark scripts, and ODI mappings, but all of these typically require some IT involvement, whereas one of the hallmarks of recent versions of Endeca Information Discovery Studio was giving power users the ability to transform and enrich data themselves. Big Data Discovery provides tools to cleanse, transform and enrich data, with menu items for common transformations and a Groovy script editor for more complex ones, including deriving sentiment values from textual data and stripping out HTML and formatting characters from text.
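To give a feel for the kind of clean-up involved, here is a minimal HiveQL sketch of similar transformations done outside BDD; the derived column names, the status grouping and the date-format pattern are my own assumptions, not output from BDD’s Transform page:

-- Hypothetical clean-up over the raw columns of the access_logs table above.
CREATE TABLE access_logs_clean AS
SELECT host,
       regexp_extract(request, '"?([A-Z]+) ', 1)       AS http_method,   -- GET, POST, ...
       regexp_extract(request, '"?[A-Z]+ ([^ ]+)', 1)  AS page_url,      -- requested URL only
       cast(status AS INT)                             AS status_code,
       CASE WHEN status LIKE '2%' THEN 'Success'
            WHEN status LIKE '3%' THEN 'Redirect'
            WHEN status LIKE '4%' THEN 'Client Error'
            WHEN status LIKE '5%' THEN 'Server Error'
            ELSE 'Other' END                           AS status_group,
       from_unixtime(unix_timestamp(
         regexp_replace(time, '[\\[\\]]', ''),
         'dd/MMM/yyyy:HH:mm:ss Z'))                    AS request_time   -- string date to timestamp
FROM access_logs;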


Once you’ve finished transforming and enriching the dataset, you can either save (commit) the changes back to the sample dataset in the BDD DGraph engine, or you can use the transformation rules you’ve defined to apply those transformations to the entire Hive table contents back on Hadoop, with the transformation work being done using Apache Spark. Datasets are loaded into “projects” and each project can have its own transformed view of the raw data, with copies of the dataset being kept in the BDD DGraph engine to represent each team’s specific view onto the raw datasets.


In practice I found this didn’t, in the product’s current state, completely replace the need for a Hadoop developer or R data analyst – you still need to get your data files into Hive and HCatalog at the start, which involves parsing and interpreting semi-structured data files, and I often did some transformations in BDD, applied them to the whole Hive dataset and then re-imported the results back into BDD to start from a simple, known state. But it certainly made tasks such as turning IP addresses into countries and cities, splitting out URLs and removing HTML tags much easier, and I got the data cleansing process done in a matter of hours rather than the days it would take with manual Hive, Pig and Spark scripting.

Now the data in my log file dataset is much more usable and easy to understand, with URLs split out, status codes grouped into high-level descriptors, and other descriptive and formatting changes made.


I can also at this point bring in additional datasets, either created manually outside of BDD and ingested into the DGraph from Hive, or manually uploaded using the Studio interface. These dataset uploads live in the BDD DGraph engine, and are then written back to Hive for long-term persistence or for sharing with other tools and processes.


These datasets can then be joined to the main dataset on matching dataset columns, giving you a table-join interface not unlike OBIEE’s physical model editor.
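Purely as an illustration of the kind of join this sets up – the lookup table and its columns below are hypothetical, not part of the actual webserver dataset, and it reuses the access_logs_clean sketch from earlier:

-- Hypothetical reference dataset (e.g. uploaded through Studio) joined to the cleansed logs.
CREATE TABLE page_details (page_url STRING, page_title STRING, page_category STRING);

SELECT l.request_time, l.page_url, p.page_title, p.page_category, l.status_group
FROM access_logs_clean l
LEFT OUTER JOIN page_details p
ON l.page_url = p.page_url;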


So now we’re in a position where our datasets have been ingested into BDD, and we’ve cleansed, transformed and joined them into a combined web activity dataset. In tomorrow’s final post I’ll look at the data visualisation part of Big Data Discovery and see how it brings the capabilities of Endeca Information Discovery Studio to Hadoop.

Categories: BI & Warehousing

DBMS_INMEMORY_ADVISOR

Marco Gralike - Wed, 2015-02-25 08:39
If you follow the Oracle In-Memory / optimizer team, you have probably seen this…

PeopleTools 8.54: Oracle Resource Manager

David Kurtz - Wed, 2015-02-25 04:11
This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

Oracle Resource Manager is about prioritising one database session over another, or about restricting the overhead of one session for the good of the other database users.  A resource plan is a set of rules that are applied to some or all database sessions for some or all of the time.  Those rules may be simple or complex, but they need to reflect the business's view of what is most important. Either way, Oracle Resource Manager requires careful design.
I am not going to attempt to explain further here how the Oracle feature works; I want to concentrate on how PeopleSoft interfaces with it.
PeopleTools Feature
This feature effectively maps Oracle resource plans to PeopleSoft executables.  The resource plan will then manage the database resource consumption of that PeopleSoft process.  There is a new component that maps PeopleSoft resource names to Oracle consumer groups.  For this example I have chosen some of the consumer groups in the MIXED_WORKLOAD_PLAN that is delivered with Oracle 11g.

  • The Oracle Consumer Group field is validated against the names of the Oracle consumer groups defined in the database; the groups in the delivered plan can be listed with the following query:
SELECT DISTINCT group_or_subplan, type
FROM dba_rsrc_plan_directives
WHERE plan = 'MIXED_WORKLOAD_PLAN'
ORDER BY 2 DESC,1
/

GROUP_OR_SUBPLAN TYPE
------------------------------ --------------
ORA$AUTOTASK_SUB_PLAN PLAN
BATCH_GROUP CONSUMER_GROUP
INTERACTIVE_GROUP CONSUMER_GROUP
ORA$DIAGNOSTICS CONSUMER_GROUP
OTHER_GROUPS CONSUMER_GROUP
SYS_GROUP CONSUMER_GROUP
  • The entries in the component shown above are stored in PS_PT_ORA_RESOURCE.
  • PS_PTEXEC2RESOURCE is another new table that maps PeopleSoft executable names to resource names.

If you use Oracle SQL Trace on a PeopleSoft process (in this case PSAPPSRV) you find the following query.  It returns the name of the Oracle consumer group that the session should use.
SELECT PT_ORA_CONSUMR_GRP 
FROM PS_PT_ORA_RESOURCE
, PS_PTEXEC2RESOURCE
WHERE PT_EXECUTABLE_NAME = 'PSAPPSRV'
AND PT_ORA_CONSUMR_GRP <> ' '
AND PS_PT_ORA_RESOURCE.PT_RESOURCE_NAME = PS_PTEXEC2RESOURCE.PT_RESOURCE_NAME

PT_ORA_CONSUMR_GRP
------------------------
INTERACTIVE_GROUP

And then the PeopleSoft process explicitly switches its group, thus:
DECLARE 
old_group varchar2(30);
BEGIN
DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('INTERACTIVE_GROUP', old_group, FALSE);
END;
Unfortunately, the consequence of this explicit switch is that it overrides any consumer group mapping rules, as I demonstrate below.
Setup
The PeopleSoft owner ID needs some additional privileges if it is to be able to switch to the consumer groups.
BEGIN
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SYSTEM_PRIVILEGE
('SYSADM', 'ADMINISTER_RESOURCE_MANAGER',FALSE);
END;
/

BEGIN
FOR i IN(
SELECT DISTINCT r.pt_ora_consumr_grp
FROM sysadm.ps_pt_ora_resource r
WHERE r.pt_ora_consumr_grp != ' '
AND r.pt_ora_consumr_grp != 'OTHER_GROUPS'
) LOOP
dbms_output.put_line('Grant '||i.pt_ora_consumr_grp);
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP
(GRANTEE_NAME => 'SYSADM'
,CONSUMER_GROUP => i.pt_ora_consumr_grp
,GRANT_OPTION => FALSE);
END LOOP;
END;
/

The RESOURCE_MANAGER_PLAN initialisation parameter should be set to the name of the plan which contains the directives.
NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------
resource_manager_plan string MIXED_WORKLOAD_PLAN
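If the plan is not already active, it can be enabled at the system level; a minimal sketch, assuming the delivered plan used in this example:

ALTER SYSTEM SET resource_manager_plan = 'MIXED_WORKLOAD_PLAN' SCOPE=BOTH;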

I question one or two of the mappings on PS_PTEXEC2RESOURCE.
SELECT * FROM PS_PTEXEC2RESOURCE …

PT_EXECUTABLE_NAME PT_RESOURCE_NAME
-------------------------------- -----------------

PSAPPSRV APPLICATION SERVE
PSQED MISCELLANEOUS
PSQRYSRV QUERY SERVER

  • PSNVS is the nVision Windows executable.  It is in PeopleTools resource MISCELLANEOUS.  This is nVision running in 2-tier mode.  I think I would put nVision into the same consumer group as query.  I can't see why it wouldn't be possible to create new PeopleSoft consumer groups and map them to certain executables; nVision would be a candidate for a separate group (see the sketch after this list). 
    • For example, one might want to take a different approach to parallelism in GL reporting having partitioned the LEDGER tables by FISCAL_YEAR and ACCOUNTING_PERIOD
  • PSQED is also in MISCELLANEOUS.  Some customers use it to run PS/Query in 2-tier mode, and allow some developers to use it to run queries.  Perhaps it should also be in the QUERY SERVER group.
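As a sketch of what that could look like – the group name, comment and utilization limit below are placeholders of mine, and any directive added to the delivered plan has to pass the plan's validation – a new consumer group could be created, given a directive, and then mapped to the nVision executable in the PeopleSoft component:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  -- hypothetical group for 2-tier nVision (PSNVS)
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'NVISION_GROUP',
    comment        => 'nVision 2-tier reporting');
  -- cap the group rather than competing for level percentages
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan              => 'MIXED_WORKLOAD_PLAN',
    group_or_subplan  => 'NVISION_GROUP',
    comment           => 'Cap nVision below interactive work',
    utilization_limit => 50);
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
  -- the PeopleSoft owner also needs the switch privilege for the new group
  DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP(
    grantee_name   => 'SYSADM',
    consumer_group => 'NVISION_GROUP',
    grant_option   => FALSE);
END;
/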
Cannot Mix PeopleSoft Consumer Group Settings with Oracle Consumer Group Mappings
I would like to be able to blend the PeopleSoft configuration with the ability to automatically associate Oracle consumer groups with specific values of MODULE and ACTION.  Purely as an example, I am trying to move the Process Monitor component into the SYS_GROUP consumer group.
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
(attribute => 'MODULE_NAME'
,value => 'PROCESSMONITOR'
,consumer_group => 'SYS_GROUP');
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

However, it doesn't work, because the explicit setting overrides any rules, and you cannot prioritise other rules above explicit settings.
exec dbms_application_info.set_module('PROCESSMONITOR','PMN_PRCSLIST');
SELECT REGEXP_SUBSTR(program,'[^.@]+',1,1) program
, module, action, resource_consumer_group
FROM v$session
WHERE module IN('PROCESSMONITOR','WIBBLE')
ORDER BY program, module, action
/

So I have created a new SQL*Plus session and set the module/action, and it has automatically moved into the SYS_GROUP.  Meanwhile, I have been into the Process Monitor in the PIA and the module and action of the PSAPPSRV sessions have been set, but they remain in the interactive group.
PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
---------------- ---------------- ---------------- ------------------------
PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
sqlplus PROCESSMONITOR PMN_PRCSLIST SYS_GROUP

If I set the module to something that doesn't match a rule, the consumer group goes back to OTHER_GROUPS, which is the default. 
exec dbms_application_info.set_module('WIBBLE','PMN_PRCSLIST');

PROGRAM MODULE ACTION RESOURCE_CONSUMER_GROUP
---------------- ---------------- ---------------- ------------------------
PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
sqlplus WIBBLE PMN_PRCSLIST OTHER_GROUPS

Now, if I explicitly set the consumer group exactly as PeopleSoft does, my session moves into the INTERACTIVE_GROUP.
DECLARE 
old_group varchar2(30);
BEGIN
DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('INTERACTIVE_GROUP', old_group, FALSE);
END;
/

PROGRAM MODULE ACTION RESOURCE_CONSUMER_GROUP
---------------- ---------------- ---------------- ------------------------
PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
sqlplus WIBBLE PMN_PRCSLIST INTERACTIVE_GROUP

Next, I will set the module back to match the rule, but the consumer group doesn't change because the explicit setting takes priority over the rules.
PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
---------------- ---------------- ---------------- ------------------------
PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
sqlplus PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
You can rearrange the priority of the other rule settings, but the explicit setting must have the highest priority (if you try otherwise you will get ORA-56704). So, continuing with this example, I cannot assign a specific component to a different resource group unless I stop using the PeopleSoft configuration for PSAPPSRV.
Instead, I could create a rule to assign a resource group to PSAPPSRV via the program name, and have a higher-priority rule to override that when the module and/or action is set to a specific value.  However, first I have to disengage the explicit consumer group change for PSAPPSRV by updating its row in PS_PTEXEC2RESOURCE so that it no longer maps to a consumer group.
UPDATE ps_ptexec2resource 
SET pt_resource_name = 'DO_NOT_USE'
WHERE pt_executable_name = 'PSAPPSRV'
AND pt_resource_name = 'APPLICATION SERVER'
/
COMMIT
/
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
END;
/
BEGIN
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
(attribute => 'CLIENT_PROGRAM'
,value => 'PSAPPSRV'
,consumer_group => 'INTERACTIVE_GROUP');

DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
(attribute => 'MODULE_NAME'
,value => 'PROCESSMONITOR'
,consumer_group => 'SYS_GROUP');

DBMS_RESOURCE_MANAGER.set_consumer_group_mapping_pri(
explicit => 1,
oracle_user => 2,
service_name => 3,
module_name_action => 4, --note higher than just module
module_name => 5, --note higher than program
service_module => 6,
service_module_action => 7,
client_os_user => 8,
client_program => 9, --note lower than module
client_machine => 10
);
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
So, you would have to choose between the PeopleSoft configuration and the Oracle Resource Manager configuration.  It depends on your requirements.  This is a decision you will have to take when you design your resource management.  Of course, you can always use just the mapping approach in versions of PeopleTools prior to 8.54.

Conclusion
I have never seen Oracle Resource Manager used with PeopleSoft, probably because setting it up is not trivial and it is then difficult to test the resource plan.  I think this enhancement is a great start that makes it very much easier to implement Oracle Resource Manager on PeopleSoft.  However, I think we need more granularity.
  • I would like to be able to put specific processes, run on the process scheduler, into specific consumer groups by name.  For now, you could do this with a trigger on PSPRCSRQST that fires on process start-up and makes an explicit consumer group change (and puts it back again for Application Engine on completion) – see the sketch after this list. 
  • I would like the ability to set different resource groups for the same process name in different application server domains.  For example,
    • I might want to distinguish between PSQRYSRV processes used for ad-hoc PS/Queries on certain domains from PSQRYSRVs used to support nVision running in 3-tier mode on other domains.
    • I might have different PIAs for back-office and self-service users going to different application servers.  I might want to prioritise back-office users over self-service users.
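Purely as an illustrative sketch of the trigger idea in the first bullet – the process name, the RUNSTATUS value taken to mean "Processing" and the target consumer group are all assumptions of mine and would need checking against your own system:

-- Hypothetical trigger: when the Process Scheduler marks an nVision request as
-- processing, switch that session into a dedicated consumer group.
CREATE OR REPLACE TRIGGER sysadm.set_prcs_consumer_group
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus = '7' AND new.prcsname = 'NVSRUN')  -- '7' assumed to mean Processing
DECLARE
  l_old_group VARCHAR2(30);
BEGIN
  DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('NVISION_GROUP', l_old_group, FALSE);
EXCEPTION
  WHEN OTHERS THEN NULL;  -- never let a resource manager error stop the process
END;
/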
Nonetheless, I warmly welcome the new support for Oracle Resource Manager in PeopleTools.  It is going to be very useful for RAC implementations, and I think it will be essential for multi-tenant implementations where different PeopleSoft product databases are plugged into the same container database.
©David Kurtz, Go-Faster Consultancy Ltd.

Partner Webcast – Oracle Business Process Management 12c : The Game Changer for your Business

The Oracle Business Process Management Suite 12c (BPM) is one of the most complete and feature-rich BPM suites on the market. There have been a wide variety of changes...

We share our skills to maximize your revenue!
Categories: DBA Blogs

BPM12c Quickstart "invalid oramds url" solved; recommended patches

Darwin IT - Wed, 2015-02-25 01:53
A few weeks ago I mentioned that I ran into a bug relating to an invalid MDS URL while trying to create an XSL based on the Case.xsd.

Just this morning I got a notification on my Service Request that there is a patch for the invalid MDS URL error that occurs while creating an XSLT based on the Case Management XSD: 19775314. There are two versions of the patch: one for 12.1.3.0.0 and one for 12.1.3.0.1, for homes that have the bundle patch applied.


I tried the second one on my BPM QuickStart home that was patched with the bundle patch.

So currently the recommended patches on the BPM QuickStart are:
  • 20163149: (dataobject assignment lost after dehydration); this is mentioned as a prerequisite to the bundle patch, as an alternative to the patch wrongly mentioned in the Readme.
  • 19707784: SOA Bundle Patch 12.1.3.0.1
  • 20440332: Initiator task form does not show up in workspace and task is auto-approved
  • 19775314: java.lang.IllegalStateException: Invalid url ERROR WHEN CREATING TRANSFORMATION (Patch) 
  • 19706799: db adapter wizard mappings and xsd file creation does not trigger event in win (Patch); choose the 12.1.3.0.1 version.

Announcement: Oracle Database In-Memory Advisor

Jean-Philippe Pinte - Wed, 2015-02-25 01:27
Oracle Database In-Memory Advisor is now available.
The "Database Tuning Pack" is required in order to use this advisor.

More information:
  • OTN page
  • MOS Note 1965342.1

Speaking at Delphix User Group Webex March 11

Bobby Durrett's DBA Blog - Tue, 2015-02-24 17:42

On March 11 at 10 am California time I will be speaking in a Delphix User Group Webex session.

Here is the sign up url: WebEx sign up.

Adam Leventhal, the Delphix CTO, will also be on the call previewing the new Delphix 4.2 features.

I will describe our experience with Delphix and the lessons we have learned.  It is a technical talk, so it should have enough detail to be of value to a technical audience.  Hopefully I have put enough effort into the talk to make it useful to other people who have Delphix or are considering getting it.

There will be time for questions and answers in addition to our talks.

I really enjoy doing user group presentations.  I had submitted an abstract for this talk to the Collaborate 2015 Oracle user group conference but it was not accepted, so I won’t get a chance to give it there.  But this WebEx event gives me a chance to present the same material, so I’m happy to have the opportunity.

If you have an interest in hearing about Delphix join the call.  It is free and there will be some good technical content.

– Bobby

P.S. If this talk interests you I also have some earlier blog posts that relate to some of the material I will be covering:

Delphix first month

Delphix direct i/o and direct path reads

Delphix table recovery

Also, I plan to post the slides after the talk.

Categories: DBA Blogs

Oracle Database In-Memory Advisor Released

Asif Momen - Tue, 2015-02-24 16:22
The Oracle Database In-Memory option was released with Oracle Database 12c (12.1.0.2), and the In-Memory Advisor (IMA) has been much awaited since then. Oracle Database In-Memory is designed to achieve the following goals:
  1.  Speed up analytical queries
  2.  Speed up OLTP transactions
  3.  NO application changes


Without the In-Memory Advisor, a DBA has to manually identify the tables to be placed in the In-Memory Column Store (IMCS). This manual task is no longer required, as the IMA analyzes the analytical workload of the database and produces a recommendation report (which includes the SQL commands to place the tables in the IMCS).
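The generated commands boil down to statements of this form (the table name and options here are just an illustration, not advisor output):

ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY HIGH PRIORITY HIGH;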

For more information on the IMA, please refer to MOS note 1965343.1; you may also download the best practices white paper from here.



How will the BI industry progress in 2015? [VIDEO]

Chris Foot - Tue, 2015-02-24 14:38

Transcript

Hi, welcome to RDX! Nowadays, almost every company uses business intelligence tools. Whether measuring return on investment or identifying your most popular products, BI can be an integral part of your operation.

But how will the technology progress in 2015? For one thing, it’s likely that new iterations of relational databases will receive integrated analytics functions. SQL Server is one particular solution that has become more compatible with Power BI, Microsoft’s signature BI application.

Mobile analytics has garnered much attention, but, in general, most implementations aren’t as flashy as some users would like them to be. However, many companies are engineering their apps to perform data analysis on the backend. This means servers running SQL databases will do the heavy lifting.

Thanks for watching! If you want to know how BI tools can be integrated into your databases, consult a team of DBAs.

The post How will the BI industry progress in 2015? [VIDEO] appeared first on Remote DBA Experts.

Oracle Linux and Database Smart Flash Cache

Wim Coekaerts - Tue, 2015-02-24 14:07
One sometimes-overlooked cool feature of the Oracle Database running on Oracle Linux is Database Smart Flash Cache.

You can find an overview of the feature in the Oracle Database Administrator's Guide. Basically, if you have flash devices attached to your server, you can use this flash memory to increase the size of the buffer cache. So instead of aging blocks out of the buffer cache and having to go back to reading them from disk, they move to the much, much faster flash storage as a secondary fast buffer cache (for reads, not writes).

Some scenarios where this is very useful: you have huge tables and huge amounts of data, a very, very large database with tons of query activity (let's say many TB), and your server is limited to a relatively small amount of main RAM (let's say 128 or 256 GB). In this case, if you were to purchase and add a flash storage device of, say, 256 GB or 512 GB, you could attach it to the database with the Database Smart Flash Cache feature and increase the buffer cache of your database from 100 GB or 200 GB to 300-700 GB on that same server. In a good number of cases this will give you a significant performance improvement, without having to purchase a new server that can handle more memory, or enough flash storage for your many TB of data to live entirely in flash instead of rotational storage.

It is also incredibly easy to configure.

-1 install Oracle Linux (I installed Oracle Linux 6 with UEK3)
-2 install Oracle Database 12c (this would also work with 11g - I installed 12.1.0.2.0 EE)
-3 add a flash device to your system (for the example I just added a 1GB device showing up as /dev/sdb)
-4 attach the storage to the database in sqlplus
Done.

$ ls /dev/sdb
/dev/sdb

$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Tue Feb 24 05:46:08 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>  alter system set db_flash_cache_file='/dev/sdb' scope=spfile;

System altered.

SQL> alter system set db_flash_cache_size=1G scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 4932501504 bytes
Fixed Size		    2934456 bytes
Variable Size		 1023412552 bytes
Database Buffers	 3892314112 bytes
Redo Buffers		   13840384 bytes
Database mounted.
Database opened.

SQL> show parameters flash

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_flash_cache_file		     string	 /dev/sdb
db_flash_cache_size		     big integer 1G
db_flashback_retention_target	     integer	 1440

SQL> select * from v$flashfilestat; 

FLASHFILE#
----------
NAME
--------------------------------------------------------------------------------
     BYTES    ENABLED SINGLEBLKRDS SINGLEBLKRDTIM_MICRO     CON_ID
---------- ---------- ------------ -------------------- ----------
	 1
/dev/sdb
1073741824	    1		 0		      0 	 0

You can get more information on configuration and guidelines/tuning here. If you want selective control over which tables can use the Database Smart Flash Cache, you can use the ALTER TABLE command – see here, specifically the STORAGE clause. By default, blocks from all tables can age out into the flash cache, but if you don't want certain tables to be cached you can use the NONE option.

alter table foo storage (flash_cache none);
This feature can really make a big difference in a number of database environments and I highly recommend taking a look at how Oracle Linux and Oracle Database 12c can help you enhance your setup. It's included with the database running on Oracle Linux.
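If you want to check which tables have been opted out (or explicitly kept), the setting is visible in the data dictionary; a quick sketch, assuming the FLASH_CACHE column in USER_TABLES:

-- tables whose flash cache behaviour differs from the default
SELECT table_name, flash_cache
  FROM user_tables
 WHERE flash_cache <> 'DEFAULT';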

Here is a link to a white paper that gives a bit of a performance overview.

MAF 2.1 Alta Mobile UI - Running On iPad Device

Andrejus Baranovski - Tue, 2015-02-24 14:03
I have installed the MAF application (described and available for download here: MAF 2.1 Alta Mobile UI and Oracle Mobile Suite) on an iPad device running iOS 8, and would like to share a couple of tips and tricks about it. I have installed previous MAF versions (when it was called ADF Mobile) on iPhone/iPad devices before – ADF Mobile - Live on iPhone Device. It is always worth reading the Oracle developer guide for MAF – 27.4.2 How to Deploy an Application to an iOS-Powered Device.

You need an Apple Development Provisioning Profile (this costs around $100) in order to be able to install a MAF application on an iPad device for testing. The Provisioning Profile creation process is streamlined in iOS 8 and is simple to follow. Here is an example of our Apple Development Provisioning Profile entry; it can be downloaded and installed on Mac OS with one click:


The sample MAF application I'm going to deploy connects to a REST service. Make sure to set the proper IP address for the REST connection entry in MAF; the IP must point to the Service Bus service with the published REST connection:


JDeveloper 12c fetches the Provisioning Profile information automatically. You only need to copy/paste the Common Name from the iOS development certificate into the Signing Identity field (created and registered during the Provisioning Profile creation process):


Make sure to specify the same Application Bundle Id prefix as the one registered in the Provisioning Profile. The documentation states you can test a MAF application on an iPhone/iPad device only in Debug mode; however, this is not true – it works fine in Release mode as well:


That's it for configuration. Choose to deploy the MAF application to an IPA distribution package:


The IPA distribution package file is generated in the deploy folder. Double-click on it and it will be installed into iTunes:


Open the iPhone/iPad section in iTunes and go to the Apps category. You should see your new MAF application listed there. Click the Install button and then press Sync – this will install the application onto the device:


The application loads successfully and the dashboard screen is displayed. The Service Bus provides REST data to the MAF application running on the iPad, and the data is rendered in a Tree Map graph (a MAF component):


The user can switch to the Employees screen:


Alta UI look and feel - we could search for employees and browse through a list with shortcuts:


Switch to cards view, instead of default list view:


Select employee who is a manager:


Pie graph with compensation of managed employees is displayed:


List of managed employees is also present:


I have tested AirPlay by connecting the iPad to a Mac. This is useful for displaying the iPad screen on a projector when you want to demonstrate your app to an audience. AirPlay mirroring works pretty well, without configuration headaches (you may need an additional utility application for this). You must enable mirroring on your iPad device:


We get the iPad screen view on the Mac – pretty useful for presentations and demos:


Thirsty 'Tuesday' – Are You Ready for SharePoint 2016?

WebCenter Team - Tue, 2015-02-24 11:42

Most organizations now have SharePoint in one form or another, but 63% are somewhat stalled in their adoption and progress. The biggest ongoing issues are persuading staff to use it, poor governance, and a lack of internal expertise. It can’t just be left to IT; it requires business managers and information workers to get involved in order to maximize the value.



Join your peers from organizations like Target, 3M, Medtronic and US Bank at this free 90-minute meetup to learn how to avoid common mistakes and how to ensure success with SharePoint. Connect with local SharePoint experts and customers, and get the latest AIIM (Association for Information and Image Management) research from 400+ SharePoint deployments. Bring your tough questions and ask your colleagues and SharePoint experts for their advice and assistance.

Don't miss this opportunity to meet local SharePoint experts and customers. 

  • Identify the best way to get user adoption, governance, and business value
  • Discuss how to best re-energize a stalled implementation 
  • Plan the role of SharePoint vs. 3rd party extensions and applications
  • Describe best practices for upgrading and migrating to latest version

"If you work with your organization’s information or collaboration resources and technologies, you’ll surely find AIIM a treasure trove of resources."- Andrew McAfee, Professor and author, Enterprise 2.0 and Race Against the Machine

"I find AIIM one of the very best resources for my job." - Larry Sanders, Supervisor at Woodmen of the World Life Insurance Society

“The range of information that AIIM is providing to our industry is nothing short of impressive and the Professional Membership sits at the heart of it.” - Hanns Köhler-Krüner, Research Director at Gartner

Register now to secure your spot - don't miss this free opportunity for education and networking!

Tuesday, March 3, 2015, 3:30-5:30PM CST

Location: Tin Whiskers Brewery
125 East 9th Street
Saint Paul, MN 55101

cannot set user id: Resource temporarily unavailable or Fork: Retry: Resource Temporarily Unavailable

Vikram Das - Tue, 2015-02-24 10:01
Amjad reported this error while trying to log in to the server:

cannot set user id: Resource temporarily unavailable

In the past he had reported this error:

Fork: Retry: Resource Temporarily Unavailable

This is due to the fact that the user has hit the limit on the number of processes it can create (nproc).  In OEL 6.x, this limit is not set in /etc/security/limits.conf but in the file:

/etc/security/limits.d/90-nproc.conf

The default content in the file is:

cat /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     1024
root       soft    nproc     unlimited

I changed this to:

$ cat /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     16384
root       soft    nproc     unlimited
$

As soon as this change was made, Amjad was able to log in.

Categories: APPS Blogs

Mobile My Oracle Support: Learn More!

Joshua Solomin - Tue, 2015-02-24 09:05
Mobile My Oracle Support (MMOS) allows access to support information whenever needed, right from a smartphone.
  • Access Service Requests, knowledge documents, and bugs.
  • View and update Service Requests.
  • Search for Service Requests using Advanced Search or saved searches.
  • Manage, schedule and approve Change Requests (RFCs) for Managed Cloud Service customers.
  • Search the Knowledge Base, bugs, and the Oracle System Handbook.
  • Explore content about Accreditation, Advisor Webcasts, Social Media, Instrumentation, and other proactive services.
  • User Administrators (CUAs) can manage pending users.

Watch the video below for more information.